The digital forensic investigation field faces continual challenges from rapid technological advancement, the widespread use of digital devices, and the exponential growth in stored data. Protecting data privacy has emerged as a critical concern, particularly because traditional forensic techniques grant investigators unrestricted access to potentially sensitive data. While existing research addresses either investigative effectiveness or data privacy, a comprehensive solution that balances both remains elusive. This study introduces a novel digital forensic framework that employs case information, case profiles, and expert knowledge to automate analysis. Machine learning techniques are used to identify relevant evidence while prioritizing data privacy. The framework also strengthens validation procedures to foster transparency and incorporates secure logging mechanisms for greater accountability.
"Designing an automated, privacy preserving, and efficient Digital Forensic Framework" by Dhwaniket Kamble and M. Salunke. Journal of Autonomous Intelligence, 2024-03-13. doi:10.32629/jai.v7i5.1270
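The abstract above mentions secure logging for accountability but does not specify the mechanism. A minimal sketch of one common choice, a SHA-256 hash chain in which each record commits to its predecessor, so any later alteration is detectable; the function names and record layout are illustrative, not the paper's:

```python
import hashlib
import json

def append_entry(log, action, genesis="0" * 64):
    """Append a tamper-evident entry: each record hashes its
    predecessor, so altering any entry breaks the chain."""
    prev = log[-1]["hash"] if log else genesis
    digest = hashlib.sha256(
        json.dumps({"action": action, "prev": prev}, sort_keys=True).encode()
    ).hexdigest()
    log.append({"action": action, "prev": prev, "hash": digest})
    return log

def verify_chain(log, genesis="0" * 64):
    """Recompute every hash in order; return False on any tampering."""
    prev = genesis
    for rec in log:
        expected = hashlib.sha256(
            json.dumps({"action": rec["action"], "prev": prev},
                       sort_keys=True).encode()
        ).hexdigest()
        if rec["hash"] != expected or rec["prev"] != prev:
            return False
        prev = rec["hash"]
    return True
```

Editing any earlier record (or reordering records) changes the expected digest of every subsequent entry, which is what makes such a log useful for investigator accountability.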
The viability of every insurance company depends on the risk assessment of new life policy proposals. Machine learning techniques have increasingly been shown to double case-processing speed, reducing manual evaluation time. The underwriter evaluates risk in several ways, including financial and medical evaluations and category classification based on customer data and other factors such as previous insurance information, clinical history, and financial data. This review examines published work on risk prediction when offering a new insurance policy to an applicant. The machine learning models developed by researchers are investigated in depth, and the criteria they used to evaluate those models are analyzed to identify study gaps and untapped potential for improvement. The article also examines how researchers arrived at accurate machine learning models and reviews their proposals for future work to identify what could be modified in further research. Drawing on this prior work, the review suggests ways to enhance manual insurance procedures.
"Evaluation of risk level assessment strategies in life Insurance: A review of the literature" by Vijayakumar Varadarajan and Vijaya Kumar Kakumanu. Journal of Autonomous Intelligence, 2024-03-13. doi:10.32629/jai.v7i5.1147
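Since the review centers on the criteria researchers use to evaluate underwriting models, a small self-contained sketch of the standard binary-classification metrics computed from a confusion matrix (the labels "high-risk vs. low-risk applicant" are an illustrative framing, not from the paper):

```python
def classification_metrics(tp, fp, fn, tn):
    """Common model-evaluation criteria from a binary confusion matrix,
    e.g. predicting high-risk vs. low-risk applicants."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0   # of flagged, how many truly high-risk
    recall = tp / (tp + fn) if tp + fn else 0.0      # of truly high-risk, how many flagged
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

For imbalanced portfolios (few high-risk applicants), accuracy alone is misleading, which is why reviews like this one compare papers on precision, recall, and F1 as well.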
Learning how to read research papers is a skill. Researchers must go through many published articles during their work, which is a challenging and tedious task. Automatic summarization of scientific publications would speed up the research process and aid researchers in their investigations. However, automatic text summarization of scientific research articles is difficult because of their distinct structure. Various text summarization approaches have been proposed for research article summarization in the past. The invention of the transformer architecture created a major shift in Natural Language Processing, and transformer-based models now achieve state-of-the-art results in text summarization. This paper provides a brief review of transformer-based approaches to summarizing scientific research articles, along with the available corpora and the evaluation methods that can be used to assess model-generated summaries. The paper also discusses future directions and limitations in this field.
"Automatic text summarization of scientific articles using transformers—A brief review" by Seema Aswani, Kabita Choudhary, Sujala Shetty, and Nasheen Nur. Journal of Autonomous Intelligence, 2024-03-13. doi:10.32629/jai.v7i5.1331
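The operation at the heart of the transformer models this review surveys is scaled dot-product attention. A minimal NumPy sketch of that single operation (not any specific summarization model from the review):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V: each query token attends over all
    key tokens and returns a weighted mixture of their values."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights
```

In a full summarizer this operation is stacked in multi-head layers over token embeddings; here it only illustrates why every output position can draw on the entire input, which is what lets transformers handle the long-range structure of scientific articles.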
Urban public safety governance currently suffers from problems such as reliance on a single governing entity, outdated governance concepts, and immature governance technologies. This article combines big data analysis technology with intelligent emergency mechanisms to conduct in-depth research on governance strategies that enhance the resilience of urban public safety to disasters. It first integrates big data analysis technologies such as the Internet of Things and cloud computing into urban public safety (UPS) and builds a UPS system on that basis. The entropy-weighted dispersion clustering method is then used to evaluate the values of urban public safety indicators. To verify the effectiveness of the intelligent emergency mechanism based on big data analysis, experimental analysis was conducted: under the intelligent emergency mechanism algorithm, the average seismic compliance rate of buildings across the studied cities reached 88.57%. The results indicate that an intelligent emergency mechanism based on big data analysis can enhance the adaptability of urban public safety governance strategies, improve the seismic and fire warning monitoring capabilities of urban buildings, reduce the occurrence of traffic accidents, and provide stronger guarantees for urban fire safety.
"Research on elastic governance strategy of urban public safety based on entropy weighted discrete clustering method" by Ninggui Duan and Lin Yuan. Journal of Autonomous Intelligence, 2024-03-13. doi:10.32629/jai.v7i5.1298
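The abstract does not spell out the entropy weighting step, but the standard entropy weight method it builds on is well defined: indicators whose values vary more across cities carry more information and receive larger weights. A sketch under that assumption:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method for an (m cities x n indicators) matrix
    of positive values: weight_j ∝ 1 - entropy of column j."""
    P = X / X.sum(axis=0)                 # column-wise proportions
    k = 1.0 / np.log(X.shape[0])          # normalizes entropy to [0, 1]
    with np.errstate(divide="ignore", invalid="ignore"):
        # convention: 0 * log(0) = 0
        e = -k * np.nansum(np.where(P > 0, P * np.log(P), 0.0), axis=0)
    d = 1.0 - e                           # degree of diversification
    return d / d.sum()
```

An indicator that is identical for every city has maximal entropy and weight zero; the weighted indicator values can then feed the clustering step the paper describes.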
This research describes an original technique for assessing football matches. The strategy uses a set of innovative algorithms for Strategic Analysis (SA) and Visual Recognition (VR). The approach is built around a visual recognition platform centered on YOLOv5 that monitors the actions of both players and the ball in real time. The resulting information is processed with Markov Chain Models (MCM) to find patterns in player location and actions, enabling an in-depth understanding of the tactics and plans a team's management executes. A significant component of the work is the exploration of multiple approximation techniques to improve frame analysis performance. Furthermore, threshold scaling was applied to maximize detection accuracy, and a Steady-State Analysis (SSA) approach was developed to analyze the long-term strategic positions of team members. The complete method yields sophisticated knowledge of in-game tactics and serves as a tool for trainers and players who want to increase the effectiveness of their teams and counteract the strategies of opposing teams.
"Video analysis and data-driven tactical optimization of sports football matches: Visual recognition and strategy analysis algorithm" by Biao Jin. Journal of Autonomous Intelligence, 2024-03-12. doi:10.32629/jai.v7i5.1581
The precise segmentation of lung lesions in computed tomography (CT) scans holds paramount importance for lung cancer research, offering invaluable information for clinical diagnosis and treatment. Nevertheless, achieving efficient detection and segmentation with acceptable accuracy is challenging because of the heterogeneity of lung nodules. This paper presents a novel model-based hybrid variational level set method (VLSM) tailored for lung cancer detection. The VLSM introduces a scale-adaptive fast level-set image segmentation algorithm to address the inefficient segmentation of low-gray-scale images. This algorithm simplifies the Local Intensity Clustering (LIC) model and devises a new energy functional based on a region-based pressure function. An improved multi-scale mean filter approximates the image's offset field, effectively reducing gray-scale inhomogeneity and eliminating the influence of scale-parameter selection on segmentation. Experimental results demonstrate that the proposed VLSM algorithm accurately segments images exhibiting both gray-scale inhomogeneity and noise, and is robust against various noise types. The enhanced algorithm is advantageous for real-world image segmentation problems and nodule detection challenges.
"Model-based hybrid variational level set method applied to lung cancer detection" by Wang Jing, Liew Siau Chuin, and A. Aziz. Journal of Autonomous Intelligence, 2024-03-12. doi:10.32629/jai.v7i5.921
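The paper does not give the exact form of its improved multi-scale mean filter, but the general idea of approximating a slowly varying offset (bias) field by averaging mean filters at several window sizes can be sketched as follows; the window radii are arbitrary illustrative choices:

```python
import numpy as np

def box_mean(img, r):
    """Mean filter over a (2r+1) x (2r+1) window, edge-padded."""
    p = np.pad(img, r, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def bias_field(img, radii=(2, 4, 8)):
    """Approximate the slowly varying offset field as the average of
    mean filters at several scales; subtracting (or dividing out)
    this field reduces gray-scale inhomogeneity before segmentation."""
    return sum(box_mean(img, r) for r in radii) / len(radii)
```

Averaging over several scales is what removes the dependence on any single scale-parameter choice, which is the property the abstract highlights.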
Apple orchards are of significant importance to the global agricultural sector, but they are vulnerable to a range of diseases that can diminish crop productivity and cause financial hardship. This manuscript investigates the use of machine learning methodologies, including Logistic Regression, Neural Networks, and Random Forest, to classify apple images into three prevalent categories: Blotch, Normal, and Rot Scab. The performance of these models is assessed using several evaluation criteria, and confusion matrices are presented to aid prompt and precise detection of these diseases, supporting efficient disease control strategies in apple orchards. Using these ML models for disease detection and treatment not only augments agricultural productivity but also contributes to sustainable agricultural practices by reducing the need for excessive pesticide application. The experimental results indicate that Logistic Regression performs best among the models considered: it obtained an AUC of 90.6% and a classification accuracy of 65.7%, compared with 89.3% and 65.1% for the Neural Network and 80.9% and 52.2% for Random Forest, respectively.
"Advancements in apple disease classification: Machine learning models, IoT integration, and future prospects" by Amit Kumar, Neha Sharma, Rahul Chauhan, Kamalpreet Kaur Gurna, Abhineet Anand, and Meenakshi Awasthi. Journal of Autonomous Intelligence, 2024-03-12. doi:10.32629/jai.v7i5.1323
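Since the abstract reports AUC as its headline comparison metric, a self-contained sketch of how AUC is computed from labels and scores (the rank-statistic definition, equivalent to the area under the ROC curve):

```python
def auc_score(labels, scores):
    """AUC as the probability that a randomly chosen positive example
    is scored above a randomly chosen negative one (ties count 1/2)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This is why a model like Logistic Regression can have a high AUC (good ranking of diseased vs. healthy samples) while its thresholded classification accuracy is much lower, exactly the pattern in the reported 90.6% AUC vs. 65.7% accuracy.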
Brain tumors have been a severe problem for decades. With advances in medical technology, a brain tumor can be treated if detected early. This paper aims to segment and classify tumor regions in Magnetic Resonance Imaging (MRI). The work consists of two steps. In step 1, the 3D MRI images are pre-processed with a Salient Object Detection method to improve efficiency. In step 2, an improved 3D-Res2UNet segments the tumor regions, and the segmented tumors are partitioned into two classes using a Support Vector Machine (SVM) classifier. The method was tested on the BRATS 2017 and 2018 datasets and obtained Dice scores of 87.1% and 99.2%, respectively, outperforming most recent methods.
"Segmentation of tumor regions using 3D-UNet in magnetic resonance imaging" by Divya Mohan, Ulagamuthalvi Venugopal, Nisha Joseph, and Kulanthaivel Govindarajan. Journal of Autonomous Intelligence, 2024-03-08. doi:10.32629/jai.v7i5.1058
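The Dice score reported above is a standard overlap measure between a predicted and a ground-truth segmentation mask; a minimal sketch of its computation:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between binary masks: 2|A∩B| / (|A| + |B|).
    eps avoids division by zero when both masks are empty."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Dice is preferred over plain pixel accuracy on BRATS-style data because tumor voxels are a tiny fraction of the volume, so a trivial all-background prediction would still score high on accuracy but near zero on Dice.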
Despite numerous breakthroughs in creating and applying new approaches to malware detection and classification, the number of malware attacks on computer systems and networks keeps increasing. Malware authors continually change their tools and methodologies, making malware hard to categorize and detect. Detection methods such as static and dynamic analysis, although useful, have struggled to detect zero-day and polymorphic malware. While machine learning techniques have been applied in this area, deep neural network models using image visualization have proven very effective for malware detection and classification, achieving better accuracy. Hence, this article surveys recent work on malware detection and classification using convolutional neural network (CNN) models, highlighting strengths and identifying potential limitations such as dataset size and feature extraction. A review of relevant research publications on the subject is offered, which also highlights the limitations of the models and of dataset availability, along with a full tabular comparison of their accuracy in malware detection and classification. Consequently, this review will contribute to the advancement of the field and serve as a basis for future research on developing CNN models for malware detection and classification.
"A recent survey of image-based malware classification using convolution neural network" by Kennedy E. Ketebu, Gregory O. Onwodi, K. Ukhurebor, Benjamin Maxwell Eneche, and Nana Kojo Yaah-Nyakko. Journal of Autonomous Intelligence, 2024-03-07. doi:10.32629/jai.v7i5.1287
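The image-visualization preprocessing the surveyed work relies on is simple to state: interpret the malware binary's bytes as gray-scale pixel intensities and reshape them into a fixed-width image that a CNN can consume. A sketch (the width is an arbitrary illustrative choice; surveyed papers pick it based on file size):

```python
import numpy as np

def bytes_to_image(data, width=16):
    """Render a binary as a gray-scale image: each byte becomes one
    pixel (0-255), laid out in rows of fixed width, zero-padded so
    the last row is complete."""
    arr = np.frombuffer(data, dtype=np.uint8)
    height = -(-len(arr) // width)               # ceiling division
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[:len(arr)] = arr
    return padded.reshape(height, width)
```

Because code sections, packed data, and resources produce visually distinct textures, malware from the same family tends to yield similar images, which is what the CNN classifiers in the survey exploit.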
Learning disabilities in children emerge in early childhood and include dyslexia, dysgraphia, dyscalculia, ADHD, and others. Affected children face difficulty in academic progress, including difficulties with reading, writing, and spelling words, despite possessing normal or above-average intelligence. The learning gap between these students and their peers widens over time; as a result, they become less motivated, find it difficult to progress in life, and struggle with employment opportunities. Children with these symptoms often suffer emotional consequences, including frustration and low self-esteem. These disabilities affect around 10 to 15% of the population, which is considerably high, so there is an immense need for early diagnosis to provide remedial education and special care. Researchers have proposed a diverse range of approaches to detect learning disorders such as dyslexia, one of the most common: detection using eye tracking, electroencephalography (EEG) scans, handwritten text, gaming approaches, audiovisual approaches, and more. This paper critically analyzes and compares recent contributions to intelligent-technique-based dyslexia prediction. Among these techniques, detection using eye tracking, EEG, and MRI is costly, complex, and hard to scale, whereas detection using handwritten text and gaming approaches is scalable and cost-effective. A character-based approach is presented because word formation is difficult for children for whom English is a second language, and in early childhood children make fewer mistakes in character writing. An experimental setup for handwritten-text-based detection using a CNN model is described, and future opportunities for learning disability detection are discussed.
"Intelligent approaches for early prediction of learning disabilities in children using learning patterns: A survey and discussion" by Shailesh Patil, Ravindra Apare, Ravindra Borhade, and P. Mahalle. Journal of Autonomous Intelligence, 2024-03-07. doi:10.32629/jai.v7i5.1329
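The CNN pipeline used for handwritten-character detection rests on three standard building blocks: convolution, ReLU, and max-pooling. A minimal NumPy forward-pass sketch of those blocks (a generic illustration, not the paper's actual architecture or weights):

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2D cross-correlation, the core CNN operation: slide the
    kernel over the image and take a weighted sum at each position."""
    kh, kw = kernel.shape
    H, W = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = (x[i:i + kh, j:j + kw] * kernel).sum()
    return out

def relu(x):
    """Elementwise nonlinearity: keep positives, zero out negatives."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Downsample by taking the max over non-overlapping size x size tiles."""
    H, W = x.shape[0] // size, x.shape[1] // size
    return x[:H * size, :W * size].reshape(H, size, W, size).max(axis=(1, 3))
```

Stacking conv-relu-pool layers turns a raw character image into progressively coarser stroke-level features, which is why such models can pick up the letter-formation irregularities associated with dyslexic handwriting.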