Advancements in AI for cardiac arrhythmia detection: A comprehensive overview
Pub Date: 2025-01-03 | DOI: 10.1016/j.cosrev.2024.100719
Jagdeep Rahul, Lakhan Dev Sharma
Cardiovascular diseases (CVDs) are a global health concern, demanding advanced healthcare solutions. Accurate identification of CVDs via electrocardiogram (ECG) analysis is complex. Artificial Intelligence (AI) offers the potential to improve diagnostic accuracy and to uncover new associations between ECG patterns and heart health risks. This paper reviews the historical evolution of AI in CVD diagnosis, focusing on recent advancements in ECG analysis and discussing societal implications and future research directions. AI has transformed medical decision-making, progressing from rule-based systems to modern machine learning (ML) and deep learning (DL) methods. By utilizing extensive datasets and advanced neural networks, AI models excel at detecting and categorizing cardiac arrhythmias. However, AI's effectiveness depends on access to large labeled datasets and collaboration within the biomedical community. AI-driven ECG analysis holds promise for revolutionizing cardiovascular care, enabling faster, more accurate diagnostics and personalized medicine. Key challenges in AI-based cardiac arrhythmia classification include data quality, class imbalance, and seamless integration with clinical workflows. Addressing these challenges is imperative for realizing the full potential of AI in cardiac care and ensuring accurate diagnosis.
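One challenge the abstract names, class imbalance, is concrete enough to illustrate. Below is a minimal sketch, assuming a PyTorch setup, synthetic beat windows, and illustrative class proportions (not the authors' model or data), of inverse-frequency class weighting in the training loss so that rare arrhythmic classes are not drowned out by normal beats.

```python
# Minimal sketch (not the paper's method): weighting the loss by inverse
# class frequency to counter class imbalance in arrhythmia classification.
import torch
import torch.nn as nn

# Synthetic stand-in for labeled ECG beat windows: 1000 samples, 180 points each.
# Class 0 (normal) dominates; classes 1-2 (arrhythmic) are rare -- illustrative.
X = torch.randn(1000, 180)
y = torch.cat([torch.zeros(900), torch.ones(60), torch.full((40,), 2.0)]).long()

# Inverse-frequency weights: rare classes contribute more per sample to the loss.
counts = torch.bincount(y, minlength=3).float()
weights = counts.sum() / (len(counts) * counts)

model = nn.Sequential(nn.Linear(180, 64), nn.ReLU(), nn.Linear(64, 3))
loss_fn = nn.CrossEntropyLoss(weight=weights)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```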
{"title":"Advancements in AI for cardiac arrhythmia detection: A comprehensive overview","authors":"Jagdeep Rahul, Lakhan Dev Sharma","doi":"10.1016/j.cosrev.2024.100719","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100719","url":null,"abstract":"Cardiovascular diseases (CVDs) are a global health concern, demanding advanced healthcare solutions. Accurate identification of CVDs via electrocardiogram (ECG) analysis is complex. Artificial Intelligence (AI) offers potential in improving diagnostic accuracy and uncovering new associations between ECG patterns and heart health risks. This paper reviews AI's historical evolution in CVD diagnosis, focusing on recent ECG analysis advancements and discussing societal implications and future research directions. AI has transformed medical decision-making, progressing from rule-based systems to modern machine learning (ML) and deep learning (DL) methods. By utilizing extensive datasets and advanced neural networks, AI models excel in detecting and categorizing cardiac arrhythmias. However, AI's effectiveness depends on access to large labeled datasets and collaboration within the biomedical community. AI-driven ECG analysis holds promise for revolutionizing cardiovascular care, enabling faster, more accurate diagnostics, and personalized medicine. Key challenges in cardiac arrhythmia classification with AI encompass data quality, class imbalance, and seamless integration with clinical workflows. Addressing these challenges is imperative for realizing the full potential of AI in cardiac care and ensuring accurate diagnosis.","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"72 1","pages":""},"PeriodicalIF":12.9,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comprehensive survey of Federated Intrusion Detection Systems: Techniques, challenges and solutions
Pub Date: 2024-12-20 | DOI: 10.1016/j.cosrev.2024.100717
Ioannis Makris, Aikaterini Karampasi, Panagiotis Radoglou-Grammatikis, Nikolaos Episkopos, Eider Iturbe, Erkuden Rios, Nikos Piperigkos, Aris Lalos, Christos Xenakis, Thomas Lagkas, Vasileios Argyriou, Panagiotis Sarigiannidis
Cyberattacks have increased radically in recent years, while the exploitation of Artificial Intelligence (AI) enables ever smarter attacks, which in turn require solutions able to confront them efficiently. This need is addressed by Federated Intrusion Detection Systems (FIDS), which have been widely employed in scenarios involving communication in cyber-physical systems. These include, but are not limited to, Internet of Things (IoT) devices, the Industrial IoT (IIoT), healthcare systems (Internet of Medical Things/IoMT), the Internet of Vehicles (IoV), Smart Manufacturing (SM), Supervisory Control and Data Acquisition (SCADA) systems, and Multi-access Edge Computing (MEC) devices. Tackling cyberthreats in all these scenarios is of utmost importance for assuring the safety and continuous functionality of operations, which is crucial for maintaining proper procedures in all Critical Infrastructures (CIs). For this purpose, sound knowledge of the current state-of-the-art (SOTA) federated intrusion detection methods is mandatory, so that they can be adopted and further evolved to detect and mitigate cyberattack incidents in a timely manner. In this study, we address this challenge and provide readers with an overview of Federated Learning (FL) implementations for intrusion detection in several CIs. Additionally, the distinct communication protocols, attack types, and datasets utilized are thoroughly discussed. Finally, the latest Machine Learning (ML) and Deep Learning (DL) frameworks and libraries for implementing such methods are also presented.
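The core mechanism FIDS share, federated averaging of locally trained detectors, can be sketched briefly. This is a minimal illustration under stated assumptions (logistic-regression clients, synthetic flow features, size-weighted FedAvg), not a framework from the survey: each site trains on its own traffic, and only model weights cross the network.

```python
# Minimal FedAvg sketch for federated intrusion detection: raw flows stay
# local; only per-client model weights are aggregated. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def local_step(w, X, y, lr=0.1):
    """One logistic-regression gradient step on a client's private flows."""
    p = 1.0 / (1.0 + np.exp(-X @ w))           # predicted attack probability
    return w - lr * X.T @ (p - y) / len(y)     # log-loss gradient step

# Three sites (e.g., IoT, SCADA, MEC) with private labeled flow features.
clients = [(rng.normal(size=(200, 8)), rng.integers(0, 2, 200)) for _ in range(3)]
w_global = np.zeros(8)

for rnd in range(10):                           # federated rounds
    local_ws = [local_step(w_global.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # FedAvg: aggregate client weights, weighted by local dataset size.
    w_global = np.average(local_ws, axis=0, weights=sizes)
```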
{"title":"A comprehensive survey of Federated Intrusion Detection Systems: Techniques, challenges and solutions","authors":"Ioannis Makris, Aikaterini Karampasi, Panagiotis Radoglou-Grammatikis, Nikolaos Episkopos, Eider Iturbe, Erkuden Rios, Nikos Piperigkos, Aris Lalos, Christos Xenakis, Thomas Lagkas, Vasileios Argyriou, Panagiotis Sarigiannidis","doi":"10.1016/j.cosrev.2024.100717","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100717","url":null,"abstract":"Cyberattacks have increased radically over the last years, while the exploitation of Artificial Intelligence (AI) leads to the implementation of even smarter attacks which subsequently require solutions that will efficiently confront them. This need is indulged by incorporating Federated Intrusion Detection Systems (FIDS), which have been widely employed in multiple scenarios involving communication in cyber–physical systems. These include, but are not limited to, the Internet of Things (IoT) devices, Industrial IoT (IIoT), healthcare systems (Internet of Medical Things/IoMT), Internet of Vehicles (IoV), Smart Manufacturing (SM), Supervisory Control and Data Acquisition (SCADA) systems, Multi-access Edge Computing (MEC) devices, among others. Tackling the challenge of cyberthreats in all the aforementioned scenarios is of utmost importance for assuring the safety and continuous functionality of the operations, crucial for maintaining proper procedures in all Critical Infrastructures (CIs). For this purpose, pertinent knowledge of the current status in state-of-the-art (SOTA) federated intrusion detection methods is mandatory, towards encompassing while simultaneously evolving them in order to timely detect and mitigate cyberattack incidents. In this study, we address this challenge and provide the readers with an overview of FL implementations regarding Intrusion Detection in several CIs. Additionally, the distinct communication protocols, attack types and datasets utilized are thoroughly discussed. Finally, the latest Machine Learning (ML) and Deep Learning (DL) frameworks and libraries to implement such methods are also provided.","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"24 1","pages":""},"PeriodicalIF":12.9,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142867649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Knowledge graph representation learning: A comprehensive and experimental overview
Pub Date: 2024-12-19 | DOI: 10.1016/j.cosrev.2024.100716
Dorsaf Sellami, Wissem Inoubli, Imed Riadh Farah, Sabeur Aridhi
Knowledge graph embedding (KGE) is a hot topic in the field of knowledge graphs (KGs). It aims to transform KG entities and relations into vector representations, facilitating their manipulation in various application tasks and real-world scenarios. Numerous models have been developed to perform KG embedding; however, several challenges must be addressed when designing effective KGE models. The most discussed challenges in the literature include scalability (KGs contain millions of entities and relations), incompleteness (missing links), the complexity of relations (symmetry, inversion, composition, etc.), and the sparsity of some entities and relations. The purpose of this paper is to provide a comprehensive overview of KGE models. We begin with a theoretical analysis and comparison of the existing methods proposed so far for generating KGEs, which we classify into four categories. We then conduct experiments on four benchmark datasets to compare the efficacy, efficiency, inductiveness, electricity consumption, and CO2 emissions of five state-of-the-art methods on the link prediction task, providing a comprehensive analysis of the most commonly used benchmarks in the literature.
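To make "transforming entities and relations into vectors" concrete, here is a minimal sketch of TransE, one classic translational KGE model (not necessarily among the five the paper benchmarks): a triple (h, r, t) is scored by how close h + r lands to t. The entities, dimension, and random initialization are illustrative.

```python
# Minimal TransE-style sketch: score(h, r, t) = -||h + r - t||.
# Link prediction ranks candidate tails by this score. Random (untrained)
# embeddings are used purely for illustration.
import numpy as np

rng = np.random.default_rng(42)
dim = 16
entities = {e: rng.normal(size=dim) for e in ["paris", "france", "tokyo", "japan"]}
relations = {"capital_of": rng.normal(size=dim)}

def score(h, r, t):
    """Higher (less negative) = more plausible triple under TransE."""
    return -np.linalg.norm(entities[h] + relations[r] - entities[t])

# Link prediction: rank candidate tails for (paris, capital_of, ?).
cands = sorted(entities, key=lambda t: -score("paris", "capital_of", t))
print(cands)  # after training, "france" should rank first; these are random inits
```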
{"title":"Knowledge graph representation learning: A comprehensive and experimental overview","authors":"Dorsaf Sellami, Wissem Inoubli, Imed Riadh Farah, Sabeur Aridhi","doi":"10.1016/j.cosrev.2024.100716","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100716","url":null,"abstract":"Knowledge graph embedding (KGE) is a hot topic in the field of Knowledge graphs (KG). It aims to transform KG entities and relations into vector representations, facilitating their manipulation in various application tasks and real-world scenarios. So far, numerous models have been developed in KGE to perform KG embedding. However, several challenges must be addressed when designing effective KGE models. The most discussed challenges in the literature include scalability (KGs contain millions of entities and relations), incompleteness (missing links), the complexity of relations (symmetries, inversion, composition, etc.), and the sparsity of some entities and relations. The purpose of this paper is to provide a comprehensive overview of KGE models. We begin with a theoretical analysis and comparison of the existing methods proposed so far for generating KGE, which we have classified into four categories. We then conducted experiments using four benchmark datasets to compare the efficacy, efficiency, inductiveness, the electricity and the CO<mml:math altimg=\"si1.svg\" display=\"inline\"><mml:msub><mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math> emission of five state-of-the-art methods in the link prediction task, providing a comprehensive analysis of the most commonly used benchmarks in the literature.","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"81 1","pages":""},"PeriodicalIF":12.9,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142867650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comprehensive review of usage control frameworks
Pub Date: 2024-12-09 | DOI: 10.1016/j.cosrev.2024.100698
Ines Akaichi, Sabrina Kirrane
The sharing of data and digital assets in a decentralized setting is entangled with various legislative challenges, including, but not limited to, the need to adhere to legal requirements with respect to privacy and copyright. In order to give data and digital asset owners more control, usage control can be used to ensure that consumers handle data in accordance with privacy, licensing, and regulatory requirements, among others. However, considering that many existing usage control frameworks were designed to cater to different use cases (e.g., networking, operating systems, and Industry 4.0), there is a need to better understand the existing proposals and how they compare to one another. In this paper, we provide a holistic overview of existing usage control frameworks and their support for a broad set of requirements. We systematically collect requirements that are routinely used to guide the development of usage control solutions, classified according to three broad dimensions: specification, enforcement, and system. We use these requirements to conduct a qualitative comparison of the most prominent usage control frameworks found in the literature. Finally, we identify existing gaps, challenges, and opportunities in the field of usage control in general, and in decentralized environments in particular.
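The specification/enforcement split the review uses as its first two dimensions can be sketched briefly. This is a minimal illustration under assumed, hypothetical policy fields and actions (purpose restriction, redistribution flag, usage count), not a particular framework from the survey: a policy is specified as data, and an enforcement point checks each requested action against it.

```python
# Minimal usage-control sketch: a declarative policy (specification) plus an
# enforcement point that permits an action only if every constraint holds.
from dataclasses import dataclass

@dataclass(frozen=True)
class UsagePolicy:
    allowed_purposes: frozenset   # e.g., {"research"} -- illustrative field
    allow_redistribution: bool
    max_uses: int

def enforce(policy: UsagePolicy, purpose: str, action: str, uses_so_far: int) -> bool:
    """Enforcement point: deny on the first violated constraint."""
    if purpose not in policy.allowed_purposes:
        return False
    if action == "redistribute" and not policy.allow_redistribution:
        return False
    return uses_so_far < policy.max_uses

policy = UsagePolicy(frozenset({"research"}), allow_redistribution=False, max_uses=3)
print(enforce(policy, "research", "read", uses_so_far=1))          # True
print(enforce(policy, "research", "redistribute", uses_so_far=1))  # False
```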
{"title":"A comprehensive review of usage control frameworks","authors":"Ines Akaichi, Sabrina Kirrane","doi":"10.1016/j.cosrev.2024.100698","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100698","url":null,"abstract":"The sharing of data and digital assets in a decentralized settling is entangled with various legislative challenges, including, but not limited to, the need to adhere to legal requirements with respect to privacy and copyright. In order to provide more control to data and digital asset owners, usage control could be used to make sure that consumers handle data according to privacy, licenses, regulatory requirements, among others. However, considering that many of the existing usage control frameworks were designed to cater for different use cases (e.g., networking, operating systems, and industry 4.0), there is a need to better understand the existing proposals and how they compare to one another. In this paper, we provide a holistic overview of existing usage control frameworks and their support for a broad set of requirements. We systematically collect requirements that are routinely used to guide the development of usage control solutions, which are classified according to three broad dimensions: <ce:italic>specification</ce:italic>, <ce:italic>enforcement</ce:italic>, and <ce:italic>system</ce:italic>. We use these requirements to conduct a qualitative comparison of the most prominent usage control frameworks found in the literature. Finally, we identify existing gaps, challenges, and opportunities in the field of usage control in general, and in decentralized environments in particular.","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"106 1","pages":""},"PeriodicalIF":12.9,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142816523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud continuum testbeds and next-generation ICTs: Trends, challenges, and perspectives
Pub Date: 2024-12-06 | DOI: 10.1016/j.cosrev.2024.100696
Fran Casino, Peio Lopez-Iturri, Constantinos Patsakis
As society’s dependence on Information and Communication Technologies (ICTs) grows, providing efficient and resourceful services entails many complexities that require, among other things, scalable systems equipped with intelligent, automated management. In parallel, the different components of cloud computing are continuously evolving to enhance their capabilities towards leveraging the next generation of ICTs. Given the substantial investment in resources required to provide efficient services, suitable research and experimentation platforms for testing and validating cloud technologies before releasing them into operational versions are crucial to delivering sound systems with sustainable cost/benefit ratios. In this article, we review the current state of the art by analysing cloud testbeds devoted to studying the capabilities of the cloud continuum. Rather than a component-wise or architectural discussion of these systems, this article explores the full spectrum of cloud continuum testbeds and their features, providing a taxonomy that can be used in practice as an entry point to identify each testbed’s scope. Moreover, we extract the challenges found in the literature to deliver an in-depth discussion correlating the analysed testbeds and their features. Our findings highlight the main gaps and potential roadmaps towards effective testbeds for the next generation of ICTs.
{"title":"Cloud continuum testbeds and next-generation ICTs: Trends, challenges, and perspectives","authors":"Fran Casino, Peio Lopez-Iturri, Constantinos Patsakis","doi":"10.1016/j.cosrev.2024.100696","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100696","url":null,"abstract":"As society’s dependence on Information and Communication Technologies (ICTs) grows, providing efficient and resourceful services entails many complexities that require, among others, scalable systems that are provided with intelligent and automated management. In parallel, the different components of cloud computing are continuously evolving to enhance their capabilities towards leveraging the next generation of ICTs. Due to the substantial investment in resources required to provide efficient services, suitable research and experimentation platforms to test and validate cloud technologies before releasing them into operational versions are crucial to delivering sound systems with sustainable cost/benefit ratios. In this article, we review the current state of the art by analysing cloud testbeds devoted to studying the capabilities of the cloud continuum. Instead of recalling a component-wise or architectural discussion of these systems, this article explores the full spectrum of the cloud continuum testbeds and their features, providing a taxonomy that can be practically used as an entry point to identify each testbed’s scope. Moreover, we extract the challenges found in the literature to deliver a profound discussion, correlating the analysed testbeds and their features. Our findings highlight the main gaps and potential roadmaps to provide effective testbeds considering the next generation of ICTs.","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"10 1","pages":""},"PeriodicalIF":12.9,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142788849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating Explainable AI with Federated Learning for Next-Generation IoT: A comprehensive review and prospective insights
Pub Date: 2024-12-06 | DOI: 10.1016/j.cosrev.2024.100697
Praveer Dubey, Mohit Kumar
The emergence of the Internet of Things (IoT) signifies a transformative wave of innovation, establishing a network of devices designed to enrich everyday experiences. Developing intelligent and secure IoT applications without compromising user privacy or the transparency of model decisions poses a significant challenge. Federated Learning (FL) serves as an innovative solution, enabling collaborative learning across a wide range of devices while protecting user data and building trust in the process. However, challenges remain, including data variability, potential security vulnerabilities within FL, and the need for transparency in decentralized models. Moreover, the opacity of traditional AI models raises issues of transparency, trust, and fairness in IoT applications. This survey examines the integration of Explainable AI (XAI) and FL within the Next-Generation IoT framework. It provides a thorough analysis of how XAI techniques can elucidate the mechanisms of FL models, addressing challenges such as communication overhead, data heterogeneity, and privacy-preserving explanation methods. The survey highlights the benefits of FL, including secure data sharing, effective modeling of heterogeneous data, and improved communication and interoperability. Additionally, it presents mathematical formulations of the challenges in FL and discusses potential solutions aimed at enhancing the resilience and scalability of IoT implementations. Ultimately, the convergence of XAI and FL enhances interpretability and promotes the development of trustworthy, transparent AI systems, establishing a strong foundation for impactful applications in the ever-evolving Next-Generation IoT landscape.
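To make the XAI-plus-FL pairing concrete, here is a minimal sketch under purely illustrative assumptions (a hand-set weight vector standing in for a FedAvg result, synthetic test data) of one model-agnostic explanation technique, permutation importance, applied to a federated global model; it is an example of the general idea, not a method prescribed by the survey.

```python
# Minimal sketch: explain a (pretend) federated global model by measuring the
# accuracy drop when each input feature is shuffled -- permutation importance.
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in for weights returned by FedAvg over several devices.
w_global = np.array([2.0, 0.0, -1.5, 0.1])

def predict(X, w=w_global):
    return (X @ w > 0).astype(int)

X_test = rng.normal(size=(500, 4))
y_test = predict(X_test)  # model's own labels, so the unshuffled accuracy is 1.0

def permutation_importance(X, y, feature):
    """Accuracy drop when one feature is shuffled: bigger drop = more important."""
    Xp = X.copy()
    Xp[:, feature] = rng.permutation(Xp[:, feature])
    return 1.0 - (predict(Xp) == y).mean()

for f in range(4):
    print(f"feature {f}: importance {permutation_importance(X_test, y_test, f):.3f}")
# Feature 1 (zero weight) should score ~0; features 0 and 2 should dominate.
```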
{"title":"Integrating Explainable AI with Federated Learning for Next-Generation IoT: A comprehensive review and prospective insights","authors":"Praveer Dubey, Mohit Kumar","doi":"10.1016/j.cosrev.2024.100697","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100697","url":null,"abstract":"The emergence of the Internet of Things (IoT) signifies a transformative wave of innovation, establishing a network of devices designed to enrich everyday experiences. Developing intelligent and secure IoT applications without compromising user privacy and the transparency of model decisions causes a significant challenge. Federated Learning (FL) serves as a innovative solution, encouraging collaborative learning across a wide range of devices and ensures the protection of user data and builds trust in the process. However, challenges remain, including data variability, potential security vulnerabilities within FL, and the necessity for transparency in decentralized models. Moreover, the lack of clarity associated with traditional AI models raises issues regarding transparency, trust and fairness in IoT applications. The survey examines the integration of Explainable AI (XAI) and FL within the Next Generation IoT framework. It provides a thorough analysis of how XAI techniques can elucidate the mechanisms of FL models, addressing challenges such as communication overhead, data heterogeneity and privacy-preserving explanation methods. The survey brings attention to the benefits of FL, including secure data sharing, effective modeling of heterogeneous data and improved communication and interoperability. Additionally, it presents mathematical formulations of the challenges in FL and discusses potential solutions aimed at enhancing the resilience and scalability of IoT implementations. Eventually, convergence of XAI and FL enhances interpretability and promotes the development of trustworthy and transparent AI systems, establishing a strong foundation for impactful applications in the ever evolving Next-Generation IoT landscape.","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"221 1","pages":""},"PeriodicalIF":12.9,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142788848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ontology learning towards expressiveness: A survey
Pub Date: 2024-12-05 | DOI: 10.1016/j.cosrev.2024.100693
Pauline Armary, Cheikh Brahim El-Vaigh, Ouassila Labbani Narsis, Christophe Nicolle
Ontology learning, particularly axiom learning, is a challenging task that focuses on building expressive and decidable ontologies. The literature contains several research efforts aimed at resolving the complexities inherent in axiom and rule learning, which seeks to automatically infer logical constructs from diverse data sources. The goal of this paper is to conduct a comprehensive review of existing work in this domain. It critically analyzes the contributions and limitations of current approaches, providing a clear understanding of the state of the art and identifying areas where further research is needed.
{"title":"Ontology learning towards expressiveness: A survey","authors":"Pauline Armary, Cheikh Brahim El-Vaigh, Ouassila Labbani Narsis, Christophe Nicolle","doi":"10.1016/j.cosrev.2024.100693","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100693","url":null,"abstract":"<ce:italic>Ontology learning</ce:italic>, particularly axiom learning, is a challenging task that focuses on building expressive and decidable ontologies. The literature proposes several research efforts aimed to resolve the complexities inherent in axiom and rule learning, which seeks to automatically infer logical constructs from diverse data sources. The goal of this paper is to conduct a comprehensive review of existing work in this domain. It aims to critically analyze the contributions and limitations of current approaches, providing a clear understanding of the state-of-the-art and identifying areas where further research is needed.","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"20 1","pages":""},"PeriodicalIF":12.9,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142788850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-square grids: A new trend in imaging and modeling?
Pub Date: 2024-12-04 | DOI: 10.1016/j.cosrev.2024.100695
Paola Magillo
The raster format of images and data is commonly treated as synonymous with a square grid. Yet the square is not the only shape that can tessellate the plane. Other grids are well known, and they have recently moved out of the fields of art and mathematics and begun to attract interest for technological applications. After introducing the main types of non-square grids, this paper presents experiences with practical uses of non-square grids, especially the hexagonal one, in various fields, including digital imaging, geographic systems, and their applications in sciences such as medicine and environmental monitoring. We conclude with considerations on the state of the art and perspectives for the future. In our opinion, the research is mature enough to prefigure a broader diffusion of some non-square grids, especially the hexagonal one.
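A brief sketch shows what computing on the hexagonal grid the paper highlights looks like in practice: axial coordinates (q, r), their conversion to pixel centers using the standard pointy-top formulas, and the uniform six-cell adjacency that square grids lack. The grid size and layout choice are illustrative.

```python
# Minimal hexagonal-grid sketch: axial coordinates, pixel conversion, and
# neighborhood -- the standard formulas for a pointy-top layout.
import math

def axial_to_pixel(q, r, size=1.0):
    """Center of hex cell (q, r) in a pointy-top hexagonal tiling."""
    x = size * math.sqrt(3) * (q + r / 2)
    y = size * 1.5 * r
    return x, y

def hex_neighbors(q, r):
    """The six adjacent cells: uniform adjacency, unlike the 4- vs 8-neighbor
    ambiguity of square grids."""
    return [(q + dq, r + dr)
            for dq, dr in [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]]

print(axial_to_pixel(2, 1))   # (~4.33, 1.5)
print(hex_neighbors(0, 0))    # exactly six equidistant neighbors
```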
{"title":"Non-square grids: A new trend in imaging and modeling?","authors":"Paola Magillo","doi":"10.1016/j.cosrev.2024.100695","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100695","url":null,"abstract":"The raster format of images and data is commonly intended as a synonymous of a square grid. Indeed, the square is not the only shape that can tessellate the plane. Other grids are well-known, and recently they have moved out of the fields of art and mathematics, and have started being of interest for technological applications. After introducing the main types of non-square grids, this paper presents experiences of practical uses of non-square grids, especially the hexagonal one, in various fields, including digital imaging, geographic systems, and their applications in sciences like medicine, environmental monitoring, etc. We conclude with considerations on the state of the art and perspectives for the future. In our opinion, the research is mature enough to prefigure a broader diffusion of some non-square grids, especially the hexagonal one.","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"18 1","pages":""},"PeriodicalIF":12.9,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142788851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comprehensive review on current issues and advancements of Internet of Things in precision agriculture
Pub Date: 2024-11-28 | DOI: 10.1016/j.cosrev.2024.100694
S. Dhanasekar
The Internet of Things (IoT) is the basis of smart agriculture technology, since it connects all aspects of intelligent systems in other industries and agricultural applications. Current farming methods are insufficient to supply adequate food in the future, given the fast-expanding global population. Smart farming aims to increase farm output and efficiency by leveraging state-of-the-art information technologies. This review paper discusses the present state of IoT in agriculture by examining significant literature, new techniques, protocols, challenges, issues, and potential paths for IoT-based smart farming. The soil-free techniques of hydroponics and aeroponics, known collectively as soilless cultivation, are alternative technologies that can adapt well to such circumstances. The aeroponics method offers additional advantages in faster plant development, increased productivity, and better nutrient absorption. Moreover, several methods and measures used in IoT-based aeroponics systems are discussed.
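The IoT-driven aeroponics loop the review describes reduces to a simple monitor-and-actuate cycle, sketched below under purely illustrative assumptions: the sensor read is a random stand-in for a real device, and the humidity setpoint and misting duration are invented, not values from the paper.

```python
# Minimal aeroponics control-loop sketch: poll a root-zone humidity sensor and
# trigger a misting cycle when it falls below a setpoint. All values synthetic.
import random
import time

HUMIDITY_SETPOINT = 70.0  # percent; illustrative threshold

def read_humidity_sensor():
    """Stand-in for a real IoT sensor read (e.g., over MQTT or a GPIO driver)."""
    return random.uniform(55.0, 85.0)

def misting_cycle(seconds=2):
    print(f"misting for {seconds}s to restore root-zone humidity")

for _ in range(5):                    # one monitoring tick per iteration
    humidity = read_humidity_sensor()
    print(f"humidity: {humidity:.1f}%")
    if humidity < HUMIDITY_SETPOINT:
        misting_cycle()
    time.sleep(0.1)                   # polling interval, shortened for the sketch
```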
{"title":"A comprehensive review on current issues and advancements of Internet of Things in precision agriculture","authors":"S. Dhanasekar","doi":"10.1016/j.cosrev.2024.100694","DOIUrl":"10.1016/j.cosrev.2024.100694","url":null,"abstract":"<div><div>The Internet of Things (IoT) is the basis of smart agriculture technology since it connects all aspects of intelligent systems in other industries and agricultural applications. The current farming methods are sufficient to supply adequate food in the future due to the fast-expanding global population. Smart farming aims to increase farm output and efficiency by leveraging state-of-the-art information technologies. The present study of IoT in agriculture was discussed in this review paper by studying significant literature, new techniques, protocols, challenges, issues, and potential paths for IoT-based smart farming. The soil-free technique connected to the hydroponic and aeroponic methods, known as soilless cultivation, is an alternative technology that can adapt well to such circumstances. The aeroponics method offers more advantages regarding faster plant development, increased productivity, and better nutrient absorption. Moreover, several methods and their measures used in aeroponics system using IoT devices were discussed.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"55 ","pages":"Article 100694"},"PeriodicalIF":13.3,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142744071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comprehensive review on Software-Defined Networking (SDN) and DDoS attacks: Ecosystem, taxonomy, traffic engineering, challenges and research directions
Pub Date: 2024-11-23 | DOI: 10.1016/j.cosrev.2024.100692
Amandeep Kaur, C. Rama Krishna, Nilesh Vishwasrao Patil
Software-Defined Networking (SDN) represents a sophisticated networking approach that separates the control logic from the data plane. This separation results in a loosely coupled architecture between the control and data planes, enhancing flexibility in managing and transforming network configurations. Additionally, SDN provides a centralized management model through the SDN controller, simplifying network administration. Despite these advantages, SDN has its security challenges. Issues such as topology spoofing, bandwidth exhaustion, flow-table updates, and Distributed Denial of Service (DDoS) attacks are prevalent. Among these, DDoS attacks pose a significant threat to the SDN infrastructure, and the central data controller becomes their primary target. Understanding SDN's comprehensive ecosystem and functionality is crucial for mitigating the SDN vulnerabilities that may attract DDoS attacks. In this article, we present: (i) a comprehensive SDN environment ecosystem with an analysis of each class; (ii) a taxonomy of DDoS attacks on the SDN environment with a characterization of each class; (iii) a critical analysis of existing statistical, machine learning, and deep learning-based DDoS attack detection approaches for the SDN environment; (iv) a systematic characterization and comparison of existing open-source Distributed Processing Frameworks (DPFs) for traffic engineering in the SDN environment; (v) security challenges associated with the SDN environment; (vi) a summary of publicly available DDoS attack datasets; and (vii) open issues and future research directions for protecting the SDN environment from DDoS attacks.
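The ML-based detection approaches the review analyzes typically classify per-flow statistics as benign or attack traffic. Below is a minimal sketch under stated assumptions: the four flow features and the synthetic benign/DDoS distributions are invented for illustration, not drawn from any dataset the survey covers.

```python
# Minimal sketch of ML-based DDoS detection on SDN flow records: train a
# classifier on simple per-flow features. All traffic here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2000

# Per-flow features: packets/s, bytes/packet, flow duration (s), distinct dst ports.
benign = np.column_stack([rng.normal(50, 15, n), rng.normal(800, 200, n),
                          rng.normal(30, 10, n), rng.poisson(3, n)])
ddos   = np.column_stack([rng.normal(900, 100, n), rng.normal(120, 40, n),
                          rng.normal(2, 1, n), rng.poisson(40, n)])
X = np.vstack([benign, ddos])
y = np.r_[np.zeros(n), np.ones(n)]            # 0 = benign, 1 = DDoS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```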
{"title":"A comprehensive review on Software-Defined Networking (SDN) and DDoS attacks: Ecosystem, taxonomy, traffic engineering, challenges and research directions","authors":"Amandeep Kaur , C. Rama Krishna , Nilesh Vishwasrao Patil","doi":"10.1016/j.cosrev.2024.100692","DOIUrl":"10.1016/j.cosrev.2024.100692","url":null,"abstract":"<div><div>Software Defined network (SDN) represents a sophisticated networking approach that separates the control logic from the data plane. This separation results in a loosely coupled architecture between the control and data planes, enhancing flexibility in managing and transforming network configurations. Additionally, SDN provides a centralized management model through the SDN controller, simplifying network administration. Despite these advantages, SDN has its security challenges. Issues such as topology spoofing, bandwidth exhaustion, flow table updates, and Distributed Denial of Service (DDoS) attacks are prevalent. Among these, DDoS attacks pose a significant threat to the SDN infrastructure. Understanding SDN’s comprehensive ecosystem and functionality is crucial for mitigating SDN vulnerabilities that may attract DDoS attacks. Further, the central data controller of SDN becomes the primary target of DDoS attacks. In this article, we present: (i) A comprehensive SDN environment ecosystem with analysis of each class, (ii) A DDoS attacks taxonomy for the SDN environment with characterization of each class, (iii) Critically analyzed existing statistical, machine and deep learning-based DDoS attacks detection approaches for the SDN environment, (iv) Systematically characterize and compare existing open-source Distributed Processing Frameworks (DPF) for traffic engineering in the SDN environment, (v) Security challenges associated with the SDN environment, (vi) Summarize publically available DDoS attack datasets, (vii) Highlight open issues and future research directions for protecting the SDN environment from DDoS attacks.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"55 ","pages":"Article 100692"},"PeriodicalIF":13.3,"publicationDate":"2024-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142703289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}