Pub Date : 2024-07-03 | DOI: 10.1016/j.cosrev.2024.100655
Benjamin Antunes, David R.C. Hill
Reproducibility is widely acknowledged as a fundamental principle of scientific research. The scientific community currently grapples with numerous challenges associated with reproducibility, often referred to as the “reproducibility crisis”, a crisis that has permeated numerous scientific disciplines. In this study, we examine the factors in scientific practice that may contribute to this lack of reproducibility. Significant focus is placed on the pervasive use of computation in research, which can sometimes operate as a black box in published papers. Our study primarily targets high-performance computing (HPC), which presents unique reproducibility challenges. This paper provides a comprehensive review of these concerns and of potential solutions. Furthermore, we discuss the critical role of reproducible research in advancing science and identify persisting issues within the field of HPC.
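As a concrete illustration of one HPC-specific pitfall this line of work discusses (an illustrative sketch, not code from the paper): floating-point addition is not associative, so the same reduction can yield different results depending on summation order, which is exactly what happens when a parallel reduction is scheduled differently across runs.

```python
import random

# Floating-point addition is not associative: summing the same values in a
# different order (as a dynamically scheduled parallel reduction might) can
# change the result in the last bits.
random.seed(0)  # fixing the seed makes the *data* reproducible
values = [random.uniform(-1.0, 1.0) * 10.0 ** random.randint(-8, 8)
          for _ in range(100_000)]

forward = sum(values)             # one summation order
backward = sum(reversed(values))  # another order, same numbers

print(f"forward  = {forward!r}")
print(f"backward = {backward!r}")
print(f"bitwise identical: {forward == backward}")
# On most platforms the two sums differ slightly, illustrating why run-to-run
# bitwise reproducibility is hard to guarantee in parallel codes.
```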
{"title":"Reproducibility, Replicability and Repeatability: A survey of reproducible research with a focus on high performance computing","authors":"Benjamin Antunes, David R.C. Hill","doi":"10.1016/j.cosrev.2024.100655","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100655","url":null,"abstract":"<div><p>Reproducibility is widely acknowledged as a fundamental principle in scientific research. Currently, the scientific community grapples with numerous challenges associated with reproducibility, often referred to as the “reproducibility crisis”. This crisis permeated numerous scientific disciplines. In this study, we examined the factors in scientific practices that might contribute to this lack of reproducibility. Significant focus is placed on the prevalent integration of computation in research, which can sometimes operates as a black box in published papers. Our study primarily focuses on high-performance computing (HPC), which presents unique reproducibility challenges. This paper provides a comprehensive review of these concerns and potential solutions. Furthermore, we discuss the critical role of reproducible research in advancing science and identifying persisting issues within the field of HPC.</p></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"53 ","pages":"Article 100655"},"PeriodicalIF":13.3,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141542854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-29 | DOI: 10.1016/j.cosrev.2024.100656
Peng Peng , Weiwei Lin , Wentai Wu , Haotong Zhang , Shaoliang Peng , Qingbo Wu , Keqin Li
Driven by the demands of time-sensitive and data-intensive applications, edge computing has attracted wide attention as one of the cornerstones of modern service architectures. An edge-based system can facilitate flexible processing of tasks over heterogeneous resources; computation offloading is therefore the key technique for systematic service improvement. However, with the proliferation of devices, traditional approaches have clear limits in handling dynamic and heterogeneous systems at scale. Deep Reinforcement Learning (DRL) is a promising alternative: its powerful high-dimensional perception and decision-making capabilities can enable intelligent offloading, but the great complexity of DRL-based algorithm design remains an obstacle. In light of this, this survey provides a comprehensive view of DRL-based approaches to computation offloading in edge computing systems. We cover state-of-the-art advances by delving into the fundamental elements of DRL algorithm design, with a focus on the target environmental factors, Markov Decision Process (MDP) model construction, and refined learning strategies. Based on our investigation, several open challenges that deserve more attention in future research are highlighted, from the perspective of both algorithm design and realistic requirements.
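To make the MDP-construction step concrete, the sketch below encodes an offloading decision as a toy state/action/reward tuple: the state holds task size, free local CPU and uplink rate, the action is local execution vs. offloading, and the reward is the negative of an estimated latency. All constants and names here are hypothetical illustrations, not a model taken from any surveyed paper.

```python
from dataclasses import dataclass
import random

@dataclass
class OffloadState:
    task_mbits: float      # task input size
    local_cpu_ghz: float   # currently free local CPU capacity
    uplink_mbps: float     # wireless uplink rate to the edge server

EDGE_CPU_GHZ = 8.0         # assumed edge-server capacity (hypothetical)
CYCLES_PER_BIT = 1000      # assumed computational intensity (hypothetical)

def step(state: OffloadState, action: str) -> float:
    """Return the reward (negative latency in seconds) for 'local' or 'offload'."""
    cycles = state.task_mbits * 1e6 * CYCLES_PER_BIT
    if action == "local":
        latency = cycles / (state.local_cpu_ghz * 1e9)
    else:  # offload: transmission delay + remote execution delay
        latency = state.task_mbits / state.uplink_mbps + cycles / (EDGE_CPU_GHZ * 1e9)
    return -latency

random.seed(1)
s = OffloadState(task_mbits=4.0, local_cpu_ghz=1.5, uplink_mbps=20.0)
print({a: round(step(s, a), 3) for a in ("local", "offload")})
# A DRL agent would learn a policy mapping such states to actions so as to
# maximize expected cumulative reward under dynamic channels and loads.
```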
{"title":"A survey on computation offloading in edge systems: From the perspective of deep reinforcement learning approaches","authors":"Peng Peng , Weiwei Lin , Wentai Wu , Haotong Zhang , Shaoliang Peng , Qingbo Wu , Keqin Li","doi":"10.1016/j.cosrev.2024.100656","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100656","url":null,"abstract":"<div><p>Driven by the demand of time-sensitive and data-intensive applications, edge computing has attracted wide attention as one of the cornerstones of modern service architectures. An edge-based system can facilitate a flexible processing of tasks over heterogeneous resources. Hence, computation offloading is the key technique for systematic service improvement. However, with the proliferation of devices, traditional approaches have clear limits in handling dynamic and heterogeneous systems at scale. Deep Reinforcement Learning (DRL), as a promising alternative, has shown great potential with powerful high-dimensional perception and decision-making capability to enable intelligent offloading, but the great complexity in DRL-based algorithm design turns out to be an obstacle. In light of this, this survey provides a comprehensive view of DRL-based approaches to computation offloading in edge computing systems. We cover state-of-the-art advances by delving into the fundamental elements of DRL algorithm design with focuses on the target environmental factors, Markov Decision Process (MDP) model construction, and refined learning strategies. Based on our investigation, several open challenges are further highlighted from both the perspective of algorithm design and realistic requirements that deserve more attention in future research.</p></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"53 ","pages":"Article 100656"},"PeriodicalIF":13.3,"publicationDate":"2024-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141485434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-22 | DOI: 10.1016/j.cosrev.2024.100654
Arturo Montejo-Ráez , M. Dolores Molina-González , Salud María Jiménez-Zafra , Miguel Ángel García-Cumbreras , Luis Joaquín García-López
For years, the scientific community has researched monitoring approaches for the detection of certain mental disorders and risky behaviors, such as depression, eating disorders, gambling, and suicidal ideation, in order to activate prevention or mitigation strategies and, in severe cases, clinical treatment. Natural Language Processing is one of the most active disciplines dealing with the automatic detection of mental disorders. This paper offers a comprehensive and extensive review of research on Natural Language Processing applied to the identification of these disorders. To this end, our literature review identifies the main types of features used to represent the texts, the machine learning algorithms that are preferred, and the most targeted social media platforms, among other aspects. The paper also reports on scientific forums and projects focused on the automatic detection of these problems on the most popular social networks. This compilation thus provides a broad view of the matter, summarizing the main strategies and significant findings, but also recognizing some of the weaknesses in the research published so far, which serve as clues for future research.
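As a minimal illustration of the feature-representation and classifier choices such surveys catalog (a hedged sketch with made-up example posts, assuming scikit-learn is available; it is not the authors' pipeline), a TF-IDF bag-of-words representation feeding a linear classifier is a common baseline in this literature:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical corpus: 1 = post flagged as at-risk, 0 = neutral.
posts = [
    "I feel hopeless and can't get out of bed anymore",
    "great run this morning, feeling energized",
    "nothing matters, I just want it all to stop",
    "excited to see my friends at the concert tonight",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["I can't sleep and everything feels pointless"]))
# Real systems in the surveyed work use richer features (embeddings, user
# history, metadata) and require clinical validation and ethical safeguards.
```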
{"title":"A survey on detecting mental disorders with natural language processing: Literature review, trends and challenges","authors":"Arturo Montejo-Ráez , M. Dolores Molina-González , Salud María Jiménez-Zafra , Miguel Ángel García-Cumbreras , Luis Joaquín García-López","doi":"10.1016/j.cosrev.2024.100654","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100654","url":null,"abstract":"<div><p>For years, the scientific community has researched monitoring approaches for the detection of certain mental disorders and risky behaviors, like depression, eating disorders, gambling, and suicidal ideation among others, in order to activate prevention or mitigation strategies and, in severe cases, clinical treatment. Natural Language Processing is one of the most active disciplines dealing with the automatic detection of mental disorders. This paper offers a comprehensive and extensive review of research works on Natural Language Processing applied to the identification of some mental disorders. To this end, we have identified from a literature review, which are the main types of features used to represent the texts, the machine learning algorithms that are preferred or the most targeted social media platforms, among other aspects. Besides, the paper reports on scientific forums and projects focused on the automatic detection of these problems over the most popular social networks. Thus, this compilation provides a broad view of the matter, summarizing main strategies, and significant findings, but, also, recognizing some of the weaknesses in the research works published so far, serving as clues for future research.</p></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"53 ","pages":"Article 100654"},"PeriodicalIF":13.3,"publicationDate":"2024-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1574013724000388/pdfft?md5=1aa9d3d86e8e2a92377e4b8afd982458&pid=1-s2.0-S1574013724000388-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141444644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-18 | DOI: 10.1016/j.cosrev.2024.100649
Oscar Díaz , Marcela Genero , Jeremías P. Contell , Mario Piattini
Research in Software Engineering greatly benefits from Systematic Literature Reviews (SLRs), as reflected in the citations they receive. While there has been a focus on improving the quality of SLRs in terms of process, it remains unclear whether this emphasis on rigor has also led to an increase in relevance. This study introduces Citation Context Analysis for SLRs as a method that goes beyond simple citation counting by examining the reasons behind citations. To achieve this, we propose the Resonance Scheme, which characterizes how referring papers use SLRs based on the outputs that SLRs can provide, either backward-oriented (such as synthesis or aggregating evidence) or forward-oriented (such as theory building or identifying research gaps). A proof of concept demonstrates that most referring papers appreciate SLRs for their synthesis efforts, while only a small number refer to forward-oriented outputs. This approach is expected to be useful for three stakeholders. First, SLR producers can use the scheme to capture the contributions of their SLRs. Second, SLR consumers, such as Ph.D. students looking for research gaps, can easily identify the contributions of interest. Third, SLR reviewers can use the scheme as a tool to assess the contributions that merit SLR publication.
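To give a feel for the kind of labeling Citation Context Analysis involves, the sketch below tags the sentence surrounding a citation as backward-oriented or forward-oriented using a naive keyword heuristic. The cue lists and examples are invented for illustration; the authors' Resonance Scheme relies on manual coding, not this heuristic.

```python
# Hypothetical cue phrases; a real scheme would rely on human coding.
BACKWARD_CUES = ("evidence", "synthes", "summariz", "aggregat", "reported that")
FORWARD_CUES = ("research gap", "future work", "open question", "theory", "agenda")

def tag_citation_context(sentence: str) -> str:
    """Classify a citing sentence by the SLR output it appears to draw on."""
    s = sentence.lower()
    if any(cue in s for cue in FORWARD_CUES):
        return "forward-oriented"
    if any(cue in s for cue in BACKWARD_CUES):
        return "backward-oriented"
    return "unclassified"

examples = [
    "The SLR by Smith et al. aggregates evidence on test-case prioritization [12].",
    "Following the research gap identified in [12], we study flaky-test detection.",
]
for e in examples:
    print(tag_citation_context(e), "<-", e)
```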
{"title":"Adding relevance to rigor: Assessing the contributions of SLRs in Software Engineering through Citation Context Analysis","authors":"Oscar Díaz , Marcela Genero , Jeremías P. Contell , Mario Piattini","doi":"10.1016/j.cosrev.2024.100649","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100649","url":null,"abstract":"<div><p>Research in Software Engineering greatly benefits from Systematic Literature Reviews (SLRs), in view of the citations they receive. While there has been a focus on improving the quality of SLRs in terms of the process, it remains unclear if this emphasis on rigor has also led to an increase in relevance. This study introduces Citation Context Analysis for SLRs as a method to go beyond simple citation counting by examining the reasons behind citations. To achieve this, we propose the Resonance Scheme, which characterizes how referring papers use SLRs based on the outputs that SLRs can provide, either backward-oriented (such as synthesis or aggregating evidence) or forward-oriented (such as theory building or identifying research gaps). A proof-of-concept demonstrates that most referring papers appreciate SLRs for their synthesis efforts, while only a small number refer to forward-oriented outputs. This approach is expected to be useful for three stakeholders. First, SLR producers can use the scheme to capture the contributions of their SLRs. Second, SLR consumers, such as Ph.D. students looking for research gaps, can easily identify the contributions of interest. Third, SLR reviewers can use the scheme as a tool to assess the contributions that merit SLR publication.</p></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"53 ","pages":"Article 100649"},"PeriodicalIF":12.9,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141423843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-14 | DOI: 10.1016/j.cosrev.2024.100648
Ramkumar Thirunavukarasu , Evans Kotei
Natural language processing has been the main application area of the Transformer network, which has lately gained traction and shows strong potential in computer vision. This cutting-edge architecture has a significant impact on image analysis, a crucial area of computer vision, and its outstanding performance in vision computing positions it as an alternative to the convolutional neural network for vision tasks. Despite the strong performance of transformer networks on natural images, their adoption in medical image analysis is only gradually taking root. This study therefore focuses on transformer applications in both natural and medical image analysis. The first part of the study provides an overview of the core concepts of the attention mechanism built into transformers for long-range feature extraction. The study then highlights the various transformer architectures proposed for natural and medical image tasks such as segmentation, classification, image registration and diagnosis. Finally, the paper presents limitations identified in the proposed transformer networks for natural and medical image processing and highlights prospective opportunities for further research to advance the computer vision domain, especially medical image analysis. This study offers knowledge to scholars and researchers studying computer vision applications as they focus on creating innovative transformer-based solutions.
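The attention mechanism at the heart of these architectures can be written compactly. The sketch below implements plain scaled dot-product attention, softmax(QK^T/sqrt(d))V, in NumPy; it is a generic illustration of the operation, not code from any surveyed model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: arrays of shape (num_tokens, d). Returns attention-weighted values."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                     # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # mix value vectors

rng = np.random.default_rng(0)
tokens, d = 5, 8                                      # e.g. 5 image patches, 8-dim embeddings
Q = rng.normal(size=(tokens, d))
K = rng.normal(size=(tokens, d))
V = rng.normal(size=(tokens, d))
print(scaled_dot_product_attention(Q, K, V).shape)    # (5, 8)
# Vision Transformers apply this operation to sequences of patch embeddings,
# which is what gives them the long-range receptive field noted above.
```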
{"title":"A comprehensive review on transformer network for natural and medical image analysis","authors":"Ramkumar Thirunavukarasu , Evans Kotei","doi":"10.1016/j.cosrev.2024.100648","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100648","url":null,"abstract":"<div><p>The Transformer network is the main application area for natural language processing. It has gained traction lately and exhibits potential in the field of computer vision. This cutting-edge method has proven to offer a significant impact on image analysis, a crucial area of computer vision. The transformer's outstanding performance in vision computing places it as an alternative to the convolutional neural network for vision tasks. Transformers have taken center stage in the field of natural language processing. Despite the outstanding performance of transformer networks in natural image processing, their implementation in medical image analysis is gradually gaining roots. This study focuses on the transformer application in natural and medical image analysis. The first part of the study provides an overview of the core concepts of the attention mechanism built into transformers for long-range feature extraction. The study again highlights the various transformer architectures proposed for natural and medical image tasks such as segmentation, classification, image registration and diagnosis. Finally, the paper presents limitations identified in proposed transformer networks for natural and medical image processing. It also highlights prospective study opportunities for further research to better the computer vision domain, especially medical image analysis. This study offers knowledge to scholars and researchers studying computer vision applications as they focus on creating innovative transformer network-based solutions.</p></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"53 ","pages":"Article 100648"},"PeriodicalIF":12.9,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141326092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The auto-scaling feature is fundamental to serverless computing: it automatically allows applications to scale as needed, adapting to current traffic and demand and acquiring resources as necessary without the need to manage servers directly. Auto-scaling is an important principle in developing serverless applications and is increasingly recognized by academia and industry. Despite this strong interest in the scientific and industrial communities, no clear, comprehensive, and systematic investigation of auto-scaling in serverless computing has been conducted. As part of this study, key strategies and approaches to automatic scaling are investigated across the lifecycle of cloud applications. The research examines three key approaches to automatically scaling serverless applications in the taxonomy presented: machine learning (ML)-based, framework-based, and model-based approaches. Additionally, we provide an overview of the key performance metrics essential to the auto-scaling process of cloud applications and discuss the associated requirements. The paper also discusses key concepts and limitations of serverless computing approaches, challenges, future directions, and research opportunities.
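For orientation, here is a minimal sketch of one of the simplest scaling policies such taxonomies compare against: a reactive threshold rule on observed concurrency. The target concurrency, instance limits, and function names are hypothetical, not parameters of any specific platform.

```python
import math

TARGET_CONCURRENCY = 10          # desired in-flight requests per instance (assumed)
MIN_INSTANCES, MAX_INSTANCES = 0, 50

def desired_instances(in_flight_requests: int) -> int:
    """Reactive rule: enough instances so each handles ~TARGET_CONCURRENCY requests."""
    if in_flight_requests == 0:
        return MIN_INSTANCES     # scale to zero, the hallmark of serverless
    wanted = math.ceil(in_flight_requests / TARGET_CONCURRENCY)
    return max(MIN_INSTANCES, min(MAX_INSTANCES, wanted))

for load in (0, 7, 42, 900):
    print(f"{load:4d} in-flight requests -> {desired_instances(load)} instances")
# ML-based approaches replace this static rule with a learned predictor of future
# load; model-based approaches derive the rule from, e.g., queueing models.
```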
{"title":"Auto-scaling mechanisms in serverless computing: A comprehensive review","authors":"Mohammad Tari , Mostafa Ghobaei-Arani , Jafar Pouramini , Mohsen Ghorbian","doi":"10.1016/j.cosrev.2024.100650","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100650","url":null,"abstract":"<div><p>The auto-scaling feature is fundamental to serverless computing, and it automatically allows applications to scale as needed. Hence, this allows applications to be configured to adapt to current traffic and demands and acquire resources as necessary without the need to manage servers directly. Auto-scaling is an important principle in developing serverless applications that is considered and increasingly recognized by academia and industry. Despite the strong interest in auto-scaling in serverless computing in the scientific and industrial community, no clear, comprehensive, and systematic investigation has been conducted. As part of the study of automatic scaling in serverless computing, key strategies and</p><p>approaches are investigated during the lifecycle of cloud applications. This research examines three key approaches to automatically scaling serverless computing applications in the taxonomy presented. These approaches include machine learning (ML)-based, frameworks-based, and models-based. Additionally, we provide an overview of key performance metrics essential to the auto-scaling process of cloud applications and discuss the requirements. It discusses key concepts and limitations of serverless computing approaches, challenges, future directions, and research opportunities.</p></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"53 ","pages":"Article 100650"},"PeriodicalIF":12.9,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141326089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artificial Intelligence (AI) is nowadays the most widely used technique for understanding the real world from various types of data, and finding patterns within the analyzed data is its main task. This is performed in a feature-extraction step, traditionally carried out with statistical algorithms or hand-crafted filters; selecting useful features from large-scale data, however, remained a crucial challenge. With the development of convolutional neural networks (CNNs), feature extraction has become more automatic and easier: CNNs can work on large-scale data and cover different scenarios for a specific task. For computer vision tasks, convolutional networks are used to extract features and also serve as other parts of a deep learning (DL) model. The selection of a suitable network for feature extraction, or for the other parts of a DL model, is not arbitrary; the choice depends on the target task as well as on computational complexity. Many networks have been proposed and have become standard components of DL models across AI tasks. These networks, exploited for feature extraction or placed at the beginning of a DL model, are named backbones; a backbone is a previously trained network of demonstrated effectiveness. In this paper, an overview of the existing backbones, e.g. VGGs, ResNets, DenseNet, etc., is given with a detailed description. A number of computer vision tasks are also discussed by reviewing, for each task, the backbones used. In addition, a comparison in terms of performance is provided, based on the backbone used for each task.
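As a hedged sketch of how a backbone is reused in practice (assuming PyTorch/torchvision are installed; the weight identifier follows recent torchvision releases, may differ by version, and triggers a download on first use), a pretrained ResNet-50 can be truncated before its classification head and used purely as a feature extractor:

```python
import torch
import torchvision.models as models

# Load a ResNet-50 backbone with ImageNet weights and drop the final FC layer,
# keeping everything up to the global-average-pooled 2048-d feature vector.
backbone = models.resnet50(weights="IMAGENET1K_V1")
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1])
feature_extractor.eval()

with torch.no_grad():
    batch = torch.randn(2, 3, 224, 224)              # two dummy RGB images
    features = feature_extractor(batch).flatten(1)
print(features.shape)                                # torch.Size([2, 2048])
# A detection, segmentation, or RL model would attach its own task-specific
# head on top of these features instead of the original ImageNet classifier.
```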
{"title":"Backbones-review: Feature extractor networks for deep learning and deep reinforcement learning approaches in computer vision","authors":"Omar Elharrouss , Younes Akbari , Noor Almadeed , Somaya Al-Maadeed","doi":"10.1016/j.cosrev.2024.100645","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100645","url":null,"abstract":"<div><p>To understand the real world using various types of data, Artificial Intelligence (AI) is the most used technique nowadays. While finding the pattern within the analyzed data represents the main task. This is performed by extracting representative features step, which is proceeded using the statistical algorithms or using some specific filters. However, the selection of useful features from large-scale data represented a crucial challenge. Now, with the development of convolution neural networks (CNNs), feature extraction operation has become more automatic and easier. CNNs allow to work on large-scale size of data, as well as cover different scenarios for a specific task. For computer vision tasks, convolutional networks are used to extract features and also for the other parts of a deep learning model. The selection of a suitable network for feature extraction or the other parts of a DL model is not random work. So, the implementation of such a model can be related to the target task as well as its computational complexity. Many networks have been proposed and become famous networks used for any DL models in any AI task. These networks are exploited for feature extraction or at the beginning of any DL model which is named backbones. A backbone is a known network trained and demonstrates its effectiveness. In this paper, an overview of the existing backbones, e.g. VGGs, ResNets, DenseNet, etc, is given with a detailed description. Also, a couple of computer vision tasks are discussed by providing a review of each task regarding the backbones used. In addition, a comparison in terms of performance is also provided, based on the backbone used for each task.</p></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"53 ","pages":"Article 100645"},"PeriodicalIF":12.9,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141291570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-07 | DOI: 10.1016/j.cosrev.2024.100647
Raja Oueslati , Ghaith Manita , Amit Chhabra , Ouajdi Korbaa
The Chaos Game Optimization Algorithm (CGO) is a novel advance in metaheuristic optimization inspired by chaos theory. It addresses complex optimization problems in dynamical systems, exhibiting unique behaviours such as fractals and self-organized patterns, and its design exemplifies adaptability and robustness, making it a significant tool for tackling intricate optimization scenarios. This study presents a comprehensive and updated overview of CGO, exploring the variants and adaptations published since its introduction in 2020: 4% in book chapters, 7% in international conference proceedings, and 89% in prestigious international journals. The CGO variants covered in this paper comprise binary variants (4%), multi-objective and modified variants (22%), and hybrid variants (52%). Moreover, the applications of CGO demonstrate its efficacy and flexibility across domains, with 32% in energy, 28% in engineering, 11% in IoT and machine learning, 6% in truss structures, 4% in big data, and 2% in medical imaging, security, electronics, and microarray technology. Furthermore, we discuss future directions for CGO, hypothesizing its potential advancements and broader implications in optimization theory and practice. The primary objectives of this survey paper are to provide a comprehensive overview of CGO, highlighting its innovative approach, to discuss its variants and their usage in different sectors, and to reflect the burgeoning interest it has sparked in metaheuristic algorithms. As a result, this manuscript is expected to offer valuable insights for engineers, professionals across different sectors, and academic researchers.
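For readers unfamiliar with the metaphor behind CGO, the sketch below reproduces the classic chaos game that inspired it: repeatedly jumping halfway toward a randomly chosen triangle vertex traces out a Sierpinski-triangle fractal. It illustrates the self-organizing behaviour the abstract refers to; it is not the CGO update rule itself.

```python
import random

random.seed(42)
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]    # an equilateral triangle
x, y = 0.25, 0.25                                     # arbitrary starting point

points = []
for _ in range(10_000):
    vx, vy = random.choice(vertices)                  # pick a random vertex...
    x, y = (x + vx) / 2.0, (y + vy) / 2.0             # ...and jump halfway toward it
    points.append((x, y))

print(points[:3])
# Plotting `points` (e.g. with matplotlib's scatter) reveals the Sierpinski
# triangle: order emerging from a random rule, the behaviour CGO borrows as
# a search metaphor over candidate solutions.
```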
{"title":"Chaos Game Optimization: A comprehensive study of its variants, applications, and future directions","authors":"Raja Oueslati , Ghaith Manita , Amit Chhabra , Ouajdi Korbaa","doi":"10.1016/j.cosrev.2024.100647","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100647","url":null,"abstract":"<div><p>Chaos Game Optimization Algorithm (CGO) is a novel advancement in metaheuristic optimization inspired by chaos theory. It addresses complex optimization problems in dynamical systems, exhibiting unique behaviours such as fractals and self-organized patterns. CGO’s design exemplifies adaptability and robustness, making it a significant tool for tackling intricate optimization scenarios. This study presents a comprehensive and updated overview of CGO, exploring the various variants and adaptations that have been published in numerous research studies since its introduction in 2020, with 4% in book chapters, 7% in international conference proceedings, and 89% in prestigious international journals. CGO variants covered in this paper include 4% binary, 22% for multi-objective and modification and 52% for hybridization variants. Moreover, the applications of CGO, demonstrate its efficacy and flexibility across different domains with 32% in energy, 28% in engineering, 11% in IoT and machine learning, 6% in truss structures, 4% in big data, 2% in medical imaging, in security, in electronic, and in microarray technology. Furthermore, we discuss the future directions of CGO, hypothesizing its potential advancements and broader implications in optimization theory and practice. The primary objectives of this survey paper are to provide a comprehensive overview of CGO, highlighting its innovative approach, discussing its variants and their usage in different sectors, and the burgeoning interest it has sparked in metaheuristic algorithms. As a result, this manuscript is expected to offer valuable insights for engineers, professionals across different sectors, and academic researchers.</p></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"53 ","pages":"Article 100647"},"PeriodicalIF":12.9,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141286531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software-Defined Networking (SDN) is a transformative approach for addressing the limitations of legacy networks by decoupling the control plane from the data plane. It offers increased programmability and flexibility for designing cloud-based data centers, and SDN-enabled cloud data centers help manage huge traffic volumes effectively and efficiently. However, the security of SDN-enabled cloud data centers against different attacks is a key concern for cloud security professionals. Distributed Denial of Service (DDoS) attacks have emerged as one of the most devastating attack classes and constantly worry the cloud security research community. It is therefore pertinent to focus on the classification of these attacks and of their defense strategies, which is the basis of this paper. The aim of this paper is to formulate and conceptualize taxonomies of DDoS attacks and their defense mechanisms. The improved taxonomy of DDoS attacks highlights the various points of vulnerability in an SDN-enabled cloud architecture, while the taxonomy of defense mechanisms offers an extensive survey of recent techniques for detecting and mitigating DDoS attacks in the SDN-enabled cloud environment. Finally, we discuss the open research issues and challenges for the cloud security research community to address in future research and investigation.
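One detection idea that recurs in this literature is monitoring the entropy of packet-header fields at the SDN controller: when traffic converges on a single victim, the entropy of destination addresses drops sharply. The sketch below is a generic illustration of that idea with synthetic traffic and a hypothetical threshold, not a mechanism from any specific surveyed paper.

```python
import math
import random
from collections import Counter

def entropy(items):
    """Shannon entropy (bits) of the empirical distribution of `items`."""
    counts = Counter(items)
    total = len(items)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

random.seed(7)
hosts = [f"10.0.0.{i}" for i in range(1, 51)]

normal_window = [random.choice(hosts) for _ in range(2000)]      # spread-out traffic
attack_window = (["10.0.0.9"] * 1800                             # flood on one victim
                 + [random.choice(hosts) for _ in range(200)])

THRESHOLD = 3.0  # hypothetical cut-off; in practice tuned from baseline traffic
for name, window in (("normal", normal_window), ("attack", attack_window)):
    h = entropy(window)
    print(f"{name:7s} dst-IP entropy = {h:.2f} -> {'ALERT' if h < THRESHOLD else 'ok'}")
```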
{"title":"DDoS attacks & defense mechanisms in SDN-enabled cloud: Taxonomy, review and research challenges","authors":"Jasmeen Kaur Chahal , Abhinav Bhandari , Sunny Behal","doi":"10.1016/j.cosrev.2024.100644","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100644","url":null,"abstract":"<div><p>Software-defined Networking (SDN) is a transformative approach for addressing the limitations of legacy networks due to decoupling of control planes from data planes. It offers increased programmability and flexibility for designing of cloud-based data centers. SDN-Enabled cloud data centers help in managing the huge traffic very effectively and efficiently. However, the security of SDN-Enabled Cloud data centers against different attacks is a key concern for cloud security professionals. Distributed Denial of Service Attacks have emerged as one of the most devastating attacks that constantly worried the entire cloud security research community. To prelude this, it is pertinent to fundamentally focus on classification of these attacks and their defence strategies in an effective way which has been the basis of this research paper. The aim of this paper is to formulate and conceptualize the taxonomies of DDoS attacks and its Défense mechanisms. Improved taxonomy of DDoS attacks highlights the various vulnerable points of vulnerability in SDN-enabled cloud architecture. Additionally, a taxonomy of defence mechanisms offers an extensive survey of recent techniques for detecting and mitigating DDoS attacks in the SDN-enabled cloud environment. Finally, we discuss the open research issues and challenges for the cloud security research community for carrying out future research and investigation.</p></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"53 ","pages":"Article 100644"},"PeriodicalIF":12.9,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141243866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The enormous variety of information on the web encourages the field of recommender systems (RS) to flourish. In recent times, deep learning techniques have significantly impacted information retrieval tasks, including RS, and the probabilistic and non-linear views of neural networks have given rise to generative models for recommendation tasks. At present, there is no extensive survey of deep generative models for RS. This article therefore aims at providing a coherent and comprehensive survey of recent efforts on deep generative models for RS. In particular, we devise a taxonomy of deep generative models for RS, along with a summary of state-of-the-art methods. Lastly, we highlight potential future prospects based on recent trends and new research avenues in this interesting and developing field. Public code links, papers, and popular datasets covered in this survey are accessible at: https://github.com/creyesp/Awesome-recsys?tab=readme-ov-file#papers.
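To ground what a deep generative model for RS looks like in its simplest form, here is a hedged sketch of a tiny variational autoencoder over a user's binary interaction vector, trained on random synthetic data and assuming PyTorch is available; Mult-VAE-style models in the surveyed literature are substantially more elaborate.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Encode a user's item-interaction vector, then reconstruct (generate) item scores."""
    def __init__(self, n_items=100, latent=8):
        super().__init__()
        self.enc = nn.Linear(n_items, 2 * latent)   # outputs mean and log-variance
        self.dec = nn.Linear(latent, n_items)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

torch.manual_seed(0)
model = TinyVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
interactions = (torch.rand(64, 100) < 0.05).float()  # synthetic implicit feedback

for _ in range(50):
    logits, mu, logvar = model(interactions)
    recon = nn.functional.binary_cross_entropy_with_logits(logits, interactions)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = recon + 0.1 * kl                          # 0.1 is a hypothetical KL weight
    opt.zero_grad()
    loss.backward()
    opt.step()

scores = model(interactions)[0]                      # reconstructed preference scores
print(scores.topk(5, dim=-1).indices[0])             # top-5 item ids for user 0
```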
{"title":"Deep learning with the generative models for recommender systems: A survey","authors":"Ravi Nahta , Ganpat Singh Chauhan , Yogesh Kumar Meena , Dinesh Gopalani","doi":"10.1016/j.cosrev.2024.100646","DOIUrl":"https://doi.org/10.1016/j.cosrev.2024.100646","url":null,"abstract":"<div><p>The variety of enormous information on the web encourages the field of recommender systems (RS) to flourish. In recent times, deep learning techniques have significantly impacted information retrieval tasks, including RS. The probabilistic and non-linear views of neural networks emerge to generative models for recommendation tasks. At present, there is an absence of extensive survey on deep generative models for RS. Therefore, this article aims at providing a coherent and comprehensive survey on recent efforts on deep generative models for RS. In particular, we provide an in-depth research effort in devising the taxonomy of deep generative models for RS, along with the summary of state-of-art methods. Lastly, we highlight the potential future prospects based on recent trends and new research avenues in this interesting and developing field. Public code links, papers, and popular datasets covered in this survey are accessible at: <span>https://github.com/creyesp/Awesome-recsys?tab=readme-ov-file#papers</span><svg><path></path></svg>.</p></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"53 ","pages":"Article 100646"},"PeriodicalIF":12.9,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141243890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}