This paper provides a comprehensive survey of anomaly detection for the Internet of Things (IoT). Anomaly detection poses numerous challenges in IoT, with broad applications including intrusion detection, fraud monitoring, cybersecurity, and industrial automation. Anomaly detection in networks has received intensive attention from network security analysts and researchers, as it is crucial to network security: network anomalies must be detected in a timely manner. Due to various issues and resource constraints, conventional anomaly detection strategies cannot be implemented directly in the IoT. Hence, this paper highlights various recent techniques for detecting anomalies in IoT and their applications. We also present anomalies at multiple layers of the IoT architecture. In addition, we discuss multiple computing platforms and highlight various challenges of anomaly detection. Finally, potential future directions for these methods are suggested, leading to various open research issues to be analyzed. With this survey, we hope that readers can gain a better understanding of anomaly detection and of research trends in this domain.
"Recent advances in anomaly detection in Internet of Things: Status, challenges, and perspectives" — Deepak Adhikari, Wei Jiang, Jinyu Zhan, Danda B. Rawat, Asmita Bhattarai. Computer Science Review, vol. 54, Article 100665. Pub Date : 2024-08-22 DOI: 10.1016/j.cosrev.2024.100665
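Resource-constrained IoT nodes are one reason lightweight techniques matter for this survey's setting. As an illustrative sketch only (the function name `zscore_anomalies`, the window size, and the threshold are all invented here, not a method from the surveyed paper), a sliding-window z-score detector flags sensor readings that deviate sharply from recent history:

```python
def zscore_anomalies(stream, window=20, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` readings."""
    anomalies = []
    for i in range(window, len(stream)):
        history = stream[i - window:i]
        mean = sum(history) / window
        var = sum((x - mean) ** 2 for x in history) / window
        std = var ** 0.5
        if std > 0 and abs(stream[i] - mean) / std > threshold:
            anomalies.append(i)
    return anomalies
```

Such per-node statistical checks are cheap enough for constrained devices, which is precisely why conventional heavyweight strategies are often replaced or complemented on IoT hardware.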
Pub Date : 2024-08-01DOI: 10.1016/j.cosrev.2024.100660
Sucharitha Isukapalli, Satish Narayana Srirama
Fault tolerance is becoming increasingly important for upcoming exascale systems supporting distributed data processing, due to the expected decrease in the Mean Time Between Failures (MTBF). Addressing the fault tolerance challenge is crucial to ensuring the availability, reliability, dependability, and performance of such systems. Fault tolerance aims to keep a distributed system running, possibly at reduced capacity, while avoiding complete data loss even in the presence of faults, with minimal impact on system performance. This comprehensive survey aims to provide a detailed understanding of the importance of fault tolerance in distributed systems, including a classification of faults, errors, failures, and fault-tolerant techniques (reactive, proactive, and predictive). We collected a corpus of 490 papers published from 2014 to 2023 by searching the Scopus, IEEE Xplore, Springer, and ACM digital library databases. After a systematic review, 17 reactive models, 17 proactive models, and 14 predictive models were shortlisted and compared. A taxonomy of the ideas behind the proposed models was also created for each of these categories of fault-tolerant solutions. Additionally, the survey examines how fault tolerance capability is incorporated into popular big data processing tools such as Apache Hadoop, Spark, and Flink. Finally, promising future research directions in this domain are discussed.
"A systematic survey on fault-tolerant solutions for distributed data analytics: Taxonomy, comparison, and future directions" — Computer Science Review, vol. 53, Article 100660.
Pub Date : 2024-08-01DOI: 10.1016/j.cosrev.2024.100658
Vinod Kumar , Ravi Shankar Singh , Medara Rambabu , Yaman Dua
Hyperspectral image (HSI) classification is a significant topic in real-world applications. The prevalence of these applications stems from the precise spectral information offered by each pixel's data in hyperspectral imaging. Classical machine learning (ML) methods face challenges in precise object classification given the complexity of HSI data; the intrinsically non-linear relationship between spectral information and materials complicates the task. Deep learning (DL) has proven to be a robust feature extractor in computer vision, effectively addressing non-linear challenges, which motivates its integration into HSI classification, where it proves highly effective. This review compares DL approaches to HSI classification, highlighting their superiority over classical ML algorithms. A framework is then constructed to analyze current advances in DL-based HSI classification, categorizing studies by whether a network uses only spectral features, only spatial features, or both spectral–spatial features. We also explain a few recent advanced DL models. Additionally, the study acknowledges that DL demands a substantial number of labeled training instances, yet obtaining such a large dataset for HSI classification is time- and cost-intensive. We therefore also explain DL methodologies that work well with limited training data. Consequently, the survey introduces techniques aimed at enhancing the generalization performance of DL procedures, offering guidance for the future.
"Deep learning for hyperspectral image classification: A survey" — Computer Science Review, vol. 53, Article 100658.
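For contrast with the DL methods the survey covers, a classical spectral-only baseline is the spectral angle mapper (SAM), which classifies a pixel by comparing the shape of its spectrum against reference materials. The function names and toy three-band spectra below are illustrative only:

```python
import math

def spectral_angle(a, b):
    """Angle (radians) between two spectra; smaller means more similar.
    Comparing shape rather than magnitude makes it illumination-invariant."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # clamp for floating-point safety before acos
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def classify_pixel(pixel, references):
    """Assign the pixel to the reference material with the smallest angle."""
    return min(references, key=lambda name: spectral_angle(pixel, references[name]))
```

A per-pixel rule like this uses spectral features alone; the spectral–spatial DL networks in the survey's taxonomy additionally exploit each pixel's neighbourhood.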
Pub Date : 2024-08-01DOI: 10.1016/j.cosrev.2024.100657
V.H. Pereira-Ferrero , T.G. Lewis , L.P. Valem , L.G.P. Ferrero , D.C.G. Pedronette , L.J. Latecki
Despite advances in machine learning techniques, similarity assessment among multimedia data remains a challenging task of broad interest in computer science. Substantial progress has been achieved in acquiring meaningful data representations, but how to compare them plays a pivotal role in machine learning and retrieval tasks. Traditional pairwise measures are widely used, yet unsupervised affinity learning approaches have emerged as a valuable solution for enhancing retrieval effectiveness. These methods leverage the dataset manifold to encode contextual information, refining initial similarity/dissimilarity measures through post-processing. In other words, measuring the similarity between data objects within the context of other data objects is often more effective. This survey provides a comprehensive discussion of unsupervised post-processing methods, addressing their historical development and proposing an organization of the area, with a specific emphasis on image retrieval. A systematic review was conducted, contributing to a formal understanding of the field. Additionally, an experimental study is presented to evaluate the potential of such methods in improving retrieval results, focusing on recent features extracted from Convolutional Neural Networks (CNNs) and Transformer models across 8 distinct datasets, with over 329,877 images analyzed. In state-of-the-art comparisons on the Flowers, Corel5k, and ALOI datasets, the Rank Flow Embedding method outperformed all state-of-the-art approaches, achieving 99.65%, 96.79%, and 97.73%, respectively.
"Unsupervised affinity learning based on manifold analysis for image retrieval: A survey" — Computer Science Review, vol. 53, Article 100657.
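The idea of judging similarity "within the context of other data objects" can be made concrete with a toy reciprocal-kNN re-ranking: each pairwise distance is replaced by the overlap of the two items' neighbour sets. This is a hedged illustration of the general principle only, not the Rank Flow Embedding method or any specific surveyed technique:

```python
def contextual_rerank(dist, k=1):
    """Replace each pairwise distance with 1 - Jaccard overlap of the two
    items' (self-inclusive) k-nearest-neighbour sets, so similarity is
    judged in the context of the dataset rather than pairwise alone."""
    n = len(dist)
    neigh = []
    for i in range(n):
        order = sorted((j for j in range(n) if j != i), key=lambda j: dist[i][j])
        neigh.append({i} | set(order[:k]))
    new = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            union = neigh[i] | neigh[j]
            inter = neigh[i] & neigh[j]
            new[i][j] = 1.0 - (len(inter) / len(union) if union else 1.0)
    return new
```

On a dataset with two tight clusters, mutual neighbours end up at distance 0 while cross-cluster pairs stay maximally distant, which is the post-processing refinement effect the survey describes.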
Pub Date : 2024-08-01DOI: 10.1016/j.cosrev.2024.100662
Khalid M. Hosny, Amal Magdi, Osama ElKomy, Hanaa M. Hamza
Lately, much attention has been paid to securing the ownership rights of digital images. The expanding usage of the Internet causes several problems, including data piracy and data tampering. Image watermarking is a typical method of protecting an image's copyright. Robust watermarking for digital images is the process of embedding watermarks in a cover image and extracting them correctly under different attacks. The embedded watermark may be either visible or invisible. Deep learning extracts image features using neural networks and is highly effective at feature extraction, so watermarking techniques that utilize deep learning have gained considerable interest. This article offers an overview of digital image watermarking and deep learning, and discusses several research articles on digital image watermarking in deep-learning environments.
"Digital image watermarking using deep learning: A survey" — Computer Science Review, vol. 53, Article 100662.
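Before deep learning, a classic (and deliberately fragile) invisible watermark simply hides bits in the least significant bit of pixel values. It illustrates the embed/extract cycle the abstract describes, though it would not survive the attacks that robust schemes target; the function names below are invented for illustration:

```python
def embed_watermark(pixels, bits):
    """Hide watermark bits in the least significant bit of each pixel value.
    Pixels beyond the watermark length are left untouched."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract_watermark(pixels, n):
    """Recover the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]
```

Changing only the lowest bit shifts each pixel value by at most 1, keeping the watermark invisible; the deep-learning schemes the article reviews learn where and how to embed so that extraction also survives compression, noise, and cropping.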
The advent of cloud computing has made a global impact by providing on-demand services, elasticity, scalability, and flexibility, delivering cost-effective resources to end users in a pay-as-you-go manner. However, securing cloud services against vulnerabilities, threats, and modern attacks remains a major concern. Application layer attacks are particularly problematic because they can cause significant damage and are often difficult to detect, as malicious traffic can be indistinguishable from normal traffic flows. Moreover, preventing Distributed Denial of Service (DDoS) attacks is challenging due to their high impact on physical computing resources and network bandwidth. This study examines new variations of DDoS attacks within the broader context of cyber threats and utilizes Artificial Intelligence (AI)-based approaches to detect and prevent such modern attacks. The investigation determines that current detection methods predominantly employ collective, hybrid, and single Machine Learning (ML)/Deep Learning (DL) techniques. Further, the analysis of diverse DDoS attacks and their related defensive strategies is vital to safeguarding cloud infrastructure against the detrimental consequences of DDoS attacks. This article offers a comprehensive classification of the various types of cloud DDoS attacks, along with an in-depth analysis of the characterization, detection, prevention, and mitigation strategies employed. It also presents an in-depth analysis of the crucial performance measures used to assess different defence systems and their effectiveness in a cloud computing environment. This article aims to encourage cloud security researchers to devise efficient defence strategies against diverse DDoS attacks. The survey identifies and elucidates research gaps and obstacles, while also providing an overview of potential future research areas.
"A comprehensive review of vulnerabilities and AI-enabled defense against DDoS attacks for securing cloud services" — Surendra Kumar, Mridula Dwivedi, Mohit Kumar, Sukhpal Singh Gill. Computer Science Review, vol. 53, Article 100661. Pub Date : 2024-08-01 DOI: 10.1016/j.cosrev.2024.100661
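One coarse traffic feature that ML/DL detectors of the kind surveyed commonly consume is the entropy of the source-IP distribution in a traffic window; a sharp change can accompany flooding. A minimal sketch of computing that feature (illustrative only, not a detector from the article):

```python
import math
from collections import Counter

def source_entropy(requests):
    """Shannon entropy (bits) of the source-IP distribution in a traffic
    window. A sharp drop can indicate a few hosts dominating the traffic;
    real detectors feed such features into ML/DL classifiers."""
    counts = Counter(requests)
    total = len(requests)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

Sixteen distinct sources sending one request each yield 4 bits of entropy, while a single flooding host yields 0; production systems track such features per window and flag anomalous shifts rather than using a fixed cut-off.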
Pub Date : 2024-08-01DOI: 10.1016/j.cosrev.2024.100663
Nicolas Bousquet , Amer E. Mouawad , Naomi Nishimura , Sebastian Siebertz
A graph vertex-subset problem defines which subsets of the vertices of an input graph are feasible solutions. We view a feasible solution as a set of tokens placed on the vertices of the graph. A reconfiguration variant of a vertex-subset problem asks, given two feasible solutions of size k, whether it is possible to transform one into the other by a sequence of token slides (along edges of the graph) or token jumps (between arbitrary vertices of the graph) such that each intermediate set remains a feasible solution of size k. Many algorithmic questions present themselves in the form of reconfiguration problems: Given the description of an initial system state and the description of a target state, is it possible to transform the system from its initial state into the target one while preserving certain properties of the system in the process? Such questions have received a substantial amount of attention under the so-called combinatorial reconfiguration framework. We consider reconfiguration variants of three fundamental underlying graph vertex-subset problems, namely Independent Set, Dominating Set, and Connected Dominating Set. We survey both older and more recent work on the parameterized complexity of all three problems when parameterized by the number of tokens k. The emphasis will be on positive results and the most common techniques for the design of fixed-parameter tractable algorithms.
"A survey on the parameterized complexity of reconfiguration problems" — Computer Science Review, vol. 53, Article 100663.
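On tiny instances the token-jumping question can be answered by brute-force BFS over feasible solutions, which makes the definitions concrete. The exponential search below is purely illustrative; the surveyed work is precisely about doing better when parameterized by the number of tokens k:

```python
from collections import deque
from itertools import combinations

def is_independent(vertices, edges):
    """True if no two chosen vertices share an edge."""
    return all((u, v) not in edges and (v, u) not in edges
               for u, v in combinations(vertices, 2))

def reconfigurable(n, edges, start, target):
    """BFS over independent sets of fixed size under the token-jumping
    rule: one token moves to any vertex per step, and every intermediate
    set must remain independent."""
    start, target = frozenset(start), frozenset(target)
    seen, queue = {start}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == target:
            return True
        for out in cur:                      # token to move
            for dest in range(n):            # jump destination
                if dest in cur:
                    continue
                nxt = frozenset((cur - {out}) | {dest})
                if nxt not in seen and is_independent(nxt, edges):
                    seen.add(nxt)
                    queue.append(nxt)
    return False
```

On the path 0-1-2-3 the set {0,2} can reach {1,3} via {0,3}, but on the 4-cycle every single jump from {0,2} breaks independence, so the instance is a no-instance — the kind of structural obstruction the surveyed hardness results generalize.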
Pub Date : 2024-07-17DOI: 10.1016/j.cosrev.2024.100653
Xueyang Wang, Nan Cao, Qing Chen, Shixiong Cao
Virtual humans have become a hot research topic in recent years due to the development of AI technology and computer graphics. In this survey, we provide a comprehensive review of the interaction design of 3D virtual humans. We first categorize the interaction design of virtual humans into speech, eye, facial expressions, and posture interactions. Then we describe the combination of different modalities of virtual humans in the multimodal interaction design section. We also summarize the applications of intelligent virtual humans in the fields of education, healthcare, and work assistance. The final part of the paper discusses the remaining challenges and opportunities in virtual human interaction design, along with future directions in this field. This paper hopes to help researchers quickly understand the characteristics of various modal interactions in the process of designing intelligent virtual humans and provide design guidance and suggestions.
"The interaction design of 3D virtual humans: A survey" — Computer Science Review, vol. 53, Article 100653.
Pub Date : 2024-07-05DOI: 10.1016/j.cosrev.2024.100651
Inam Ullah , Deepak Adhikari , Habib Khan , M. Shahid Anwar , Shabir Ahmad , Xiaoshan Bai
Mobile Robots (MRs) and their applications are undergoing massive development, requiring a diversity of autonomous or self-directed robots to fulfill numerous objectives and responsibilities. Integrating MRs with the Intelligent Internet of Things (IIoT) not only makes robots innovative, trackable, and powerful but also generates numerous threats and challenges across applications. The IIoT combines intelligent techniques, including artificial intelligence and machine learning, with the Internet of Things (IoT). The location information (localization) of MRs underpins innumerable domains. To fully realize the potential of localization, Mobile Robot Localization (MRL) algorithms need to be integrated with complementary technologies, such as MR classification, indoor localization mapping solutions, three-dimensional localization, etc. Thus, this paper endeavors to comprehensively review different methodologies and technologies for MRL, emphasizing intelligent architecture, indoor and outdoor methodologies, concepts, and security-related issues. Additionally, we highlight the diverse MRL applications where localization information is challenging to obtain and present the various computing platforms.
Finally, discussions on several challenges regarding navigation path planning, localization, obstacle avoidance, security, localization problem categories, etc., and potential future perspectives on MRL techniques and applications are highlighted.
"Mobile robot localization: Current challenges and future prospective" — Computer Science Review, vol. 53, Article 100651.
Pub Date : 2024-07-03 DOI: 10.1016/j.cosrev.2024.100652
Ankit Thakkar, Kinjal Chaudhari
The stock market is an attractive domain for researchers as well as academicians. It exhibits highly complex, non-linear, fluctuating market behaviours, where traders, investors, and organizers look for reliable future predictions of market indices. Such prediction problems can be computationally addressed using various machine learning, deep learning, sentiment analysis, and mining approaches. However, the configuration of internal parameters can play an important role in prediction performance, and feature selection is a crucial task. Therefore, evolutionary computation-based algorithms can be integrated in several ways to optimize such approaches. In this article, we systematically conduct a focused survey on the genetic algorithm (GA) and its applications for stock market prediction; GAs are known for their parallel search mechanism for solving complex real-world problems, and various genetic perspectives are also integrated with machine learning and deep learning methods to address financial forecasting. Thus, we aim to analyse the potential extensibility and adaptability of GAs for stock market prediction. We review stock price prediction, stock trend prediction, and portfolio optimization approaches over recent years (2013–2022) to signify the state of the art of GA-based optimization in financial markets. We broaden our discussion by briefly reviewing other genetic perspectives and their applications for stock market forecasting. We balance our survey with a consideration of the competitiveness and complementation of GAs, followed by highlighting the challenges and potential future research directions of applying GAs for stock market prediction.
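The GA loop the abstract alludes to — selection, crossover, and mutation over candidate feature subsets — can be sketched as follows. This is an illustrative toy, not a method from any surveyed paper: the feature count, the "informative" index set, the stand-in fitness function, and all GA parameters are assumptions standing in for a real backtest score.

```python
import random

random.seed(0)

N_FEATURES = 12
INFORMATIVE = {0, 3, 5, 8}  # hypothetical "useful" market indicators

def fitness(mask):
    """Stand-in for a backtest score: reward selecting informative
    features, penalize model complexity (number of features used)."""
    chosen = {i for i, bit in enumerate(mask) if bit}
    return 2 * len(chosen & INFORMATIVE) - 0.5 * len(chosen)

def tournament(pop, k=3):
    # Pick the fittest of k randomly drawn individuals.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Single-point crossover of two binary feature masks.
    cut = random.randrange(1, N_FEATURES)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.05):
    # Flip each bit independently with the given probability.
    return [bit ^ (random.random() < rate) for bit in mask]

pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
       for _ in range(30)]
for _ in range(40):  # generations
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(len(pop))]

best = max(pop, key=fitness)
print(best, fitness(best))
```

In the surveyed GA-plus-learning pipelines, the fitness function would instead train and evaluate a predictor (or run a portfolio backtest) on the masked features, which is what makes GA-based feature selection computationally expensive in practice.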
{"title":"Applicability of genetic algorithms for stock market prediction: A systematic survey of the last decade","authors":"Ankit Thakkar, Kinjal Chaudhari","doi":"10.1016/j.cosrev.2024.100652","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100652","url":null,"abstract":"<div><p>Stock market is one of the attractive domains for researchers as well as academicians. It represents highly complex non-linear fluctuating market behaviours where traders, investors, and organizers look forward to reliable future predictions of the market indices. Such prediction problems can be computationally addressed using various machine learning, deep learning, sentiment analysis, as well as mining approaches. However, the internal parameters configuration can play an important role in the prediction performance; also, feature selection is a crucial task. Therefore, to optimize such approaches, the evolutionary computation-based algorithms can be integrated in several ways. In this article, we systematically conduct a focused survey on genetic algorithm (GA) and its applications for stock market prediction; GAs are known for their parallel search mechanism to solve complex real-world problems; various genetic perspectives are also integrated with machine learning and deep learning methods to address financial forecasting. Thus, we aim to analyse the potential extensibility and adaptability of GAs for stock market prediction. We review stock price and stock trend prediction, as well as portfolio optimization, approaches over the recent years (2013–2022) to signify the state-of-the-art of GA-based optimization in financial markets. We broaden our discussion by briefly reviewing other genetic perspectives and their applications for stock market forecasting. 
We balance our survey with the consideration of competitiveness and complementation of GAs, followed by highlighting the challenges and potential future research directions of applying GAs for stock market prediction.</p></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"53 ","pages":"Article 100652"},"PeriodicalIF":13.3,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141542853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}