
Latest Publications in Computer Science Review

Reproducibility, Replicability and Repeatability: A survey of reproducible research with a focus on high performance computing
IF 13.3 · Tier 1, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-07-03 · DOI: 10.1016/j.cosrev.2024.100655
Benjamin Antunes, David R.C. Hill

Reproducibility is widely acknowledged as a fundamental principle of scientific research. Currently, the scientific community grapples with numerous challenges associated with reproducibility, often referred to as the "reproducibility crisis". This crisis has permeated numerous scientific disciplines. In this study, we examine the factors in scientific practice that may contribute to this lack of reproducibility. Significant focus is placed on the prevalent integration of computation in research, which can sometimes operate as a black box in published papers. Our study primarily focuses on high-performance computing (HPC), which presents unique reproducibility challenges. This paper provides a comprehensive review of these concerns and potential solutions. Furthermore, we discuss the critical role of reproducible research in advancing science and identify persisting issues within the field of HPC.
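As a concrete illustration of one recurring source of irreproducibility in computational work, here is a sketch (not from the paper) showing how an unseeded pseudo-random number generator makes a stochastic computation unrepeatable, and how recording a seed restores bit-for-bit repeatability on the same platform:

```python
import random

def simulate(n, seed=None):
    """Toy stochastic experiment: mean of n pseudo-random draws."""
    rng = random.Random(seed)  # dedicated generator; global state untouched
    return sum(rng.random() for _ in range(n)) / n

# Without a fixed seed, two runs of the same code generally disagree.
a, b = simulate(1000), simulate(1000)

# With a fixed, recorded seed, the run is repeatable bit-for-bit on the
# same interpreter and platform.
c, d = simulate(1000, seed=42), simulate(1000, seed=42)
assert c == d
```

Even with fixed seeds, HPC adds further hazards of the kind such surveys discuss, for example non-deterministic reduction orders in parallel floating-point arithmetic.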

Citations: 0
A survey on computation offloading in edge systems: From the perspective of deep reinforcement learning approaches
IF 13.3 · Tier 1, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-29 · DOI: 10.1016/j.cosrev.2024.100656
Peng Peng , Weiwei Lin , Wentai Wu , Haotong Zhang , Shaoliang Peng , Qingbo Wu , Keqin Li

Driven by the demand for time-sensitive and data-intensive applications, edge computing has attracted wide attention as one of the cornerstones of modern service architectures. An edge-based system can facilitate flexible processing of tasks over heterogeneous resources; hence, computation offloading is a key technique for systematic service improvement. However, with the proliferation of devices, traditional approaches have clear limits in handling dynamic and heterogeneous systems at scale. Deep Reinforcement Learning (DRL), a promising alternative, has shown great potential, with powerful high-dimensional perception and decision-making capabilities that enable intelligent offloading, but the great complexity of DRL-based algorithm design has turned out to be an obstacle. In light of this, this survey provides a comprehensive view of DRL-based approaches to computation offloading in edge computing systems. We cover state-of-the-art advances by delving into the fundamental elements of DRL algorithm design, with a focus on target environmental factors, Markov Decision Process (MDP) model construction, and refined learning strategies. Based on our investigation, several open challenges that deserve more attention in future research are further highlighted, from the perspectives of both algorithm design and realistic requirements.
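To make the MDP framing concrete, here is a toy sketch (illustrative states and costs, not drawn from any surveyed system): binary offloading modeled as a two-state MDP and solved with tabular Q-learning, where the learned policy keeps cheap tasks local and offloads expensive ones:

```python
import random

# Toy MDP for binary offloading: state = task size class (0=small, 1=large);
# actions: 0 = execute locally, 1 = offload to the edge server.
# Rewards are illustrative negative latencies, not measurements.
def step(state, action, rng):
    local_cost = [1.0, 5.0][state]      # large tasks are slow locally
    offload_cost = 2.0 + rng.random()   # transmission + server time (noisy)
    cost = local_cost if action == 0 else offload_cost
    next_state = rng.randint(0, 1)      # next task class arrives at random
    return next_state, -cost

def q_learning(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0], [0.0, 0.0]]
    s = 0
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.randint(0, 1)
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        s2, r = step(s, a, rng)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
    return Q

Q = q_learning()
# Learned policy: keep small tasks local, offload large ones.
policy = [0 if Q[s][0] >= Q[s][1] else 1 for s in (0, 1)]
```

Real DRL offloading work replaces the lookup table with a neural network precisely because realistic state spaces (queues, channel conditions, battery levels) are too large to enumerate.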

Citations: 0
A survey on detecting mental disorders with natural language processing: Literature review, trends and challenges
IF 13.3 · Tier 1, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-22 · DOI: 10.1016/j.cosrev.2024.100654
Arturo Montejo-Ráez , M. Dolores Molina-González , Salud María Jiménez-Zafra , Miguel Ángel García-Cumbreras , Luis Joaquín García-López

For years, the scientific community has researched monitoring approaches for the detection of certain mental disorders and risky behaviors, such as depression, eating disorders, gambling, and suicidal ideation, in order to activate prevention or mitigation strategies and, in severe cases, clinical treatment. Natural Language Processing is one of the most active disciplines dealing with the automatic detection of mental disorders. This paper offers a comprehensive and extensive review of research on Natural Language Processing applied to the identification of some mental disorders. To this end, we have identified from a literature review the main types of features used to represent the texts, the preferred machine learning algorithms, and the most targeted social media platforms, among other aspects. Besides, the paper reports on scientific forums and projects focused on the automatic detection of these problems on the most popular social networks. Thus, this compilation provides a broad view of the matter, summarizing the main strategies and significant findings, but also recognizing some of the weaknesses in the research published so far, serving as clues for future research.
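For illustration, here is a minimal sketch of one of the oldest feature types such surveys report, lexicon-weighted bag-of-words scoring; the lexicon and posts below are invented placeholders, not a clinical instrument:

```python
from collections import Counter

# Invented placeholder lexicon mapping cue words to illustrative weights.
RISK_LEXICON = {"hopeless": 2.0, "alone": 1.0, "tired": 0.5, "worthless": 2.0}

def bag_of_words(text):
    """Represent a text as token counts (the simplest surveyed feature type)."""
    return Counter(text.lower().split())

def risk_score(text):
    """Linear score: sum of lexicon weights times token counts."""
    bow = bag_of_words(text)
    return sum(RISK_LEXICON.get(w, 0.0) * c for w, c in bow.items())

posts = ["feeling hopeless and alone tonight", "great hike with friends today"]
scores = [risk_score(p) for p in posts]   # -> [3.0, 0.0]
```

Modern systems in this literature replace the hand-built lexicon with learned representations, but the pipeline shape (text, features, score) is the same.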

Citations: 0
Adding relevance to rigor: Assessing the contributions of SLRs in Software Engineering through Citation Context Analysis
IF 12.9 · Tier 1, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-18 · DOI: 10.1016/j.cosrev.2024.100649
Oscar Díaz , Marcela Genero , Jeremías P. Contell , Mario Piattini

Research in Software Engineering greatly benefits from Systematic Literature Reviews (SLRs), in view of the citations they receive. While there has been a focus on improving the quality of SLRs in terms of the process, it remains unclear if this emphasis on rigor has also led to an increase in relevance. This study introduces Citation Context Analysis for SLRs as a method to go beyond simple citation counting by examining the reasons behind citations. To achieve this, we propose the Resonance Scheme, which characterizes how referring papers use SLRs based on the outputs that SLRs can provide, either backward-oriented (such as synthesis or aggregating evidence) or forward-oriented (such as theory building or identifying research gaps). A proof-of-concept demonstrates that most referring papers appreciate SLRs for their synthesis efforts, while only a small number refer to forward-oriented outputs. This approach is expected to be useful for three stakeholders. First, SLR producers can use the scheme to capture the contributions of their SLRs. Second, SLR consumers, such as Ph.D. students looking for research gaps, can easily identify the contributions of interest. Third, SLR reviewers can use the scheme as a tool to assess the contributions that merit SLR publication.
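A minimal sketch of how citation-context classification along these lines could be automated with surface cues; the cue lists below are invented placeholders, not the paper's actual Resonance Scheme coding:

```python
# Hypothetical cue lists; a real coding scheme would be derived from the
# paper's backward-/forward-oriented output categories.
BACKWARD_CUES = ("synthesis", "summarizes", "evidence", "reported")
FORWARD_CUES = ("research gap", "future work", "theory", "open question")

def classify_context(sentence):
    """Label a citing sentence by the kind of SLR output it appeals to."""
    s = sentence.lower()
    if any(cue in s for cue in FORWARD_CUES):
        return "forward-oriented"
    if any(cue in s for cue in BACKWARD_CUES):
        return "backward-oriented"
    return "unclassified"

assert classify_context("The SLR summarizes evidence on test smells.") == "backward-oriented"
assert classify_context("We address a research gap identified in [12].") == "forward-oriented"
```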

Citations: 0
A comprehensive review on transformer network for natural and medical image analysis
IF 12.9 · Tier 1, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-14 · DOI: 10.1016/j.cosrev.2024.100648
Ramkumar Thirunavukarasu , Evans Kotei

Natural language processing has been the main application area of the Transformer network. The network has gained traction lately and exhibits potential in the field of computer vision. This cutting-edge method has proven to have a significant impact on image analysis, a crucial area of computer vision. The transformer's outstanding performance in vision computing positions it as an alternative to the convolutional neural network for vision tasks. Despite the outstanding performance of transformer networks in natural image processing, their adoption in medical image analysis is gradually taking root. This study focuses on transformer applications in natural and medical image analysis. The first part of the study provides an overview of the core concepts of the attention mechanism built into transformers for long-range feature extraction. The study then highlights the various transformer architectures proposed for natural and medical image tasks such as segmentation, classification, image registration, and diagnosis. Finally, the paper presents limitations identified in the proposed transformer networks for natural and medical image processing. It also highlights prospective opportunities for further research to advance the computer vision domain, especially medical image analysis. This study offers knowledge to scholars and researchers studying computer vision applications as they focus on creating innovative transformer network-based solutions.
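The attention mechanism that such reviews cover in their first part can be sketched, dependency-free, as single-head scaled dot-product attention (tiny vectors, no learned projections):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: out_i = sum_j softmax(q_i.k_j/sqrt(d)) v_j."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

# One query attending over two key/value pairs; the query matches key 0,
# so the output is pulled toward value 0.
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
ctx = attention(q, k, v)
```

The "long-range" property follows directly: every query scores every key in one step, regardless of how far apart the corresponding image patches or tokens are.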

Citations: 0
Auto-scaling mechanisms in serverless computing: A comprehensive review
IF 12.9 · Tier 1, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-13 · DOI: 10.1016/j.cosrev.2024.100650
Mohammad Tari , Mostafa Ghobaei-Arani , Jafar Pouramini , Mohsen Ghorbian

The auto-scaling feature is fundamental to serverless computing: it automatically allows applications to scale as needed. Applications can thus be configured to adapt to current traffic and demand and to acquire resources as necessary, without the need to manage servers directly. Auto-scaling is an important principle in developing serverless applications that is increasingly recognized by academia and industry. Despite the strong interest in auto-scaling for serverless computing in the scientific and industrial communities, no clear, comprehensive, and systematic investigation has been conducted. As part of this study of automatic scaling in serverless computing, key strategies and approaches across the lifecycle of cloud applications are investigated. This research examines three key approaches to automatically scaling serverless computing applications in the taxonomy presented: machine learning (ML)-based, framework-based, and model-based. Additionally, we provide an overview of key performance metrics essential to the auto-scaling process of cloud applications and discuss their requirements. The paper also discusses key concepts and limitations of serverless computing approaches, challenges, future directions, and research opportunities.
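Alongside the ML-, framework-, and model-based approaches in such a taxonomy, the simplest rule-based baseline can be sketched as a threshold autoscaler; the capacity figures and workload trace below are illustrative values, not drawn from the paper:

```python
import math

def desired_instances(inflight_requests, per_instance_capacity=10,
                      min_instances=0, max_instances=100):
    """Scale instance count so per-instance load stays near capacity.
    min_instances=0 models serverless scale-to-zero when idle."""
    if inflight_requests <= 0:
        needed = min_instances
    else:
        needed = math.ceil(inflight_requests / per_instance_capacity)
    return max(min_instances, min(max_instances, needed))

trace = [0, 4, 25, 120, 60, 0]                  # in-flight requests over time
plan = [desired_instances(r) for r in trace]    # -> [0, 1, 3, 12, 6, 0]
```

The scale-to-zero step at the ends of the trace is what distinguishes serverless autoscaling from classic VM autoscaling, and it is also the source of the cold-start latency that much of this literature tries to mitigate.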

Citations: 0
Backbones-review: Feature extractor networks for deep learning and deep reinforcement learning approaches in computer vision
IF 12.9 · Tier 1, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-07 · DOI: 10.1016/j.cosrev.2024.100645
Omar Elharrouss , Younes Akbari , Noor Almadeed , Somaya Al-Maadeed

Artificial Intelligence (AI) is nowadays the most widely used technique for understanding the real world through various types of data, and finding patterns within the analyzed data is its main task. This is performed by a representative-feature extraction step, carried out using statistical algorithms or specific filters. However, the selection of useful features from large-scale data represents a crucial challenge. Now, with the development of convolutional neural networks (CNNs), feature extraction has become more automatic and easier. CNNs make it possible to work on large-scale data and to cover different scenarios for a specific task. For computer vision tasks, convolutional networks are used to extract features and also to form the other parts of a deep learning (DL) model. The selection of a suitable network for feature extraction, or for the other parts of a DL model, is not arbitrary: the choice depends on the target task as well as its computational complexity. Many networks have been proposed and have become well-known networks used in DL models across AI tasks. These networks, exploited for feature extraction or placed at the beginning of a DL model, are named backbones. A backbone is a known, trained network that has demonstrated its effectiveness. In this paper, an overview of the existing backbones, e.g. VGGs, ResNets, DenseNet, etc., is given with a detailed description. Also, several computer vision tasks are discussed by providing a review of each task with regard to the backbones used. In addition, a comparison in terms of performance is also provided, based on the backbone used for each task.
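The backbone/head split described above can be sketched framework-free; the "stages" below are toy callables standing in for the convolutional blocks of a real backbone such as a VGG or ResNet:

```python
class Backbone:
    """A frozen feature extractor: a fixed sequence of processing stages."""
    def __init__(self, stages):
        self.stages = stages          # stand-ins for conv blocks

    def extract(self, x):
        for stage in self.stages:
            x = stage(x)
        return x                      # the feature representation

class Model:
    """Backbone + task-specific head, the composition the review surveys."""
    def __init__(self, backbone, head):
        self.backbone, self.head = backbone, head

    def __call__(self, x):
        return self.head(self.backbone.extract(x))

# Toy stages: each stage halves the values; heads are simple reductions.
backbone = Backbone([lambda v: [x / 2 for x in v]] * 2)
classifier = Model(backbone, head=sum)   # one task head...
detector = Model(backbone, head=max)     # ...another task reusing the backbone

features = backbone.extract([4.0, 8.0])  # -> [1.0, 2.0]
```

The point of the pattern is visible even in this toy: one backbone's features feed multiple task heads, which is why pretrained backbones transfer across detection, segmentation, and classification.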

Citations: 0
Chaos Game Optimization: A comprehensive study of its variants, applications, and future directions
IF 12.9 · Tier 1, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-07 · DOI: 10.1016/j.cosrev.2024.100647
Raja Oueslati , Ghaith Manita , Amit Chhabra , Ouajdi Korbaa

The Chaos Game Optimization Algorithm (CGO) is a novel advancement in metaheuristic optimization inspired by chaos theory. It addresses complex optimization problems in dynamical systems, exhibiting unique behaviours such as fractals and self-organized patterns. CGO’s design exemplifies adaptability and robustness, making it a significant tool for tackling intricate optimization scenarios. This study presents a comprehensive and updated overview of CGO, exploring the various variants and adaptations published since its introduction in 2020, with 4% appearing in book chapters, 7% in international conference proceedings, and 89% in prestigious international journals. The CGO variants covered in this paper include binary (4%), multi-objective and modification (22%), and hybridization (52%) variants. Moreover, the applications of CGO demonstrate its efficacy and flexibility across different domains: 32% in energy, 28% in engineering, 11% in IoT and machine learning, 6% in truss structures, 4% in big data, and 2% each in medical imaging, security, electronics, and microarray technology. Furthermore, we discuss the future directions of CGO, hypothesizing its potential advancements and broader implications for optimization theory and practice. The primary objectives of this survey are to provide a comprehensive overview of CGO, highlighting its innovative approach, discussing its variants and their usage in different sectors, and the burgeoning interest it has sparked in metaheuristic algorithms. As a result, this manuscript is expected to offer valuable insights for engineers, professionals across different sectors, and academic researchers.
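For context, the classic "chaos game" from chaos theory that gives CGO its name can be sketched in a few lines; this illustrates the fractal inspiration only, not CGO's actual update rules:

```python
import random

def chaos_game(iterations=10000, seed=1):
    """Repeatedly jump halfway toward a random vertex of a triangle.
    The visited points trace the Sierpinski fractal."""
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    rng = random.Random(seed)
    x, y = rng.random(), rng.random()   # arbitrary start in the unit square
    points = []
    for _ in range(iterations):
        vx, vy = rng.choice(vertices)
        x, y = (x + vx) / 2, (y + vy) / 2
        points.append((x, y))
    return points

pts = chaos_game()
# Every midpoint step keeps the point inside the vertices' bounding box.
assert all(0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 for x, y in pts)
```

CGO transplants this idea into search: candidate solutions play the role of vertices and new solutions are generated by chaotic, fractal-like jumps among them.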

Computer Science Review, vol. 53, Article 100647.
Citations: 0
DDoS attacks & defense mechanisms in SDN-enabled cloud: Taxonomy, review and research challenges
IF 12.9 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-04 · DOI: 10.1016/j.cosrev.2024.100644
Jasmeen Kaur Chahal , Abhinav Bhandari , Sunny Behal

Software-defined Networking (SDN) is a transformative approach for addressing the limitations of legacy networks, as it decouples the control plane from the data plane. It offers increased programmability and flexibility for designing cloud-based data centers. SDN-enabled cloud data centers help manage huge traffic volumes effectively and efficiently. However, the security of SDN-enabled cloud data centers against different attacks is a key concern for cloud security professionals. Distributed Denial of Service (DDoS) attacks have emerged as one of the most devastating attacks and constantly worry the entire cloud security research community. As a prelude, it is pertinent to focus on an effective classification of these attacks and their defense strategies, which forms the basis of this paper. The aim of this paper is to formulate and conceptualize taxonomies of DDoS attacks and their defense mechanisms. The improved taxonomy of DDoS attacks highlights the various points of vulnerability in SDN-enabled cloud architecture. Additionally, a taxonomy of defense mechanisms offers an extensive survey of recent techniques for detecting and mitigating DDoS attacks in the SDN-enabled cloud environment. Finally, we discuss the open research issues and challenges for the cloud security research community for carrying out future research and investigation.
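One detection idea widely studied in such surveys, flagging traffic windows whose source-IP distribution collapses toward a few senders, can be sketched with a Shannon-entropy check. The threshold, window contents, and function names below are illustrative assumptions, not values from this paper:

```python
import math
from collections import Counter

def source_ip_entropy(packets):
    """Shannon entropy (bits) of the source-IP distribution in a window."""
    counts = Counter(packets)
    total = len(packets)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_flood(packets, threshold=1.0):
    """Naive heuristic: very low entropy means traffic is dominated
    by a handful of sources, one possible DDoS signature."""
    return source_ip_entropy(packets) < threshold

# Synthetic windows: many diverse sources vs. one dominant source.
normal = ["10.0.0.%d" % (i % 50) for i in range(200)]
attack = ["10.0.0.1"] * 190 + ["10.0.0.2"] * 10
```

In an SDN setting this check would typically run at the controller over flow statistics; real deployments also have to handle spoofed floods, where entropy rises abnormally instead of falling.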

Computer Science Review, vol. 53, Article 100644.
Citations: 0
Deep learning with the generative models for recommender systems: A survey
IF 12.9 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-06-04 · DOI: 10.1016/j.cosrev.2024.100646
Ravi Nahta , Ganpat Singh Chauhan , Yogesh Kumar Meena , Dinesh Gopalani

The variety of enormous information on the web encourages the field of recommender systems (RS) to flourish. In recent times, deep learning techniques have significantly impacted information retrieval tasks, including RS. The probabilistic and non-linear views of neural networks give rise to generative models for recommendation tasks. At present, there is no extensive survey on deep generative models for RS. Therefore, this article aims at providing a coherent and comprehensive survey of recent efforts on deep generative models for RS. In particular, we provide an in-depth research effort in devising a taxonomy of deep generative models for RS, along with a summary of state-of-the-art methods. Lastly, we highlight potential future prospects based on recent trends and new research avenues in this interesting and developing field. Public code links, papers, and popular datasets covered in this survey are accessible at: https://github.com/creyesp/Awesome-recsys?tab=readme-ov-file#papers.
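The generative view, modeling the distribution over items a user might interact with and then sampling recommendations from it, can be illustrated with a toy sketch. This is a deliberately simplified stand-in for the deep generative models (e.g., VAEs, GANs) such surveys cover; the smoothing weight and all names here are illustrative assumptions:

```python
import random
from collections import Counter

def fit_item_distribution(history, catalog, alpha=0.5):
    """Mix a user's own item counts with uniform smoothing over the
    catalog, yielding a probability for every item (a crude generative
    model of the user's interactions)."""
    counts = Counter(history)
    total = len(history) + alpha * len(catalog)
    return {item: (counts.get(item, 0) + alpha) / total for item in catalog}

def sample_recommendations(dist, k=3, seed=0):
    """Generate recommendations by sampling k items from the model."""
    rng = random.Random(seed)
    items = list(dist)
    weights = [dist[i] for i in items]
    return rng.choices(items, weights=weights, k=k)

catalog = ["A", "B", "C", "D"]
dist = fit_item_distribution(["A", "A", "B"], catalog)
recs = sample_recommendations(dist)
```

Deep generative recommenders replace the count-based distribution with one parameterized by a neural network, which is what gives them the probabilistic, non-linear capacity the abstract describes.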

Computer Science Review, vol. 53, Article 100646.
Citations: 0