
Latest Publications in Computer Science Review

Recent advances in anomaly detection in Internet of Things: Status, challenges, and perspectives
IF 13.3 | Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-22 | DOI: 10.1016/j.cosrev.2024.100665
Deepak Adhikari, Wei Jiang, Jinyu Zhan, Danda B. Rawat, Asmita Bhattarai

This paper provides a comprehensive survey of anomaly detection for the Internet of Things (IoT). Anomaly detection poses numerous challenges in IoT and has broad applications, including intrusion detection, fraud monitoring, cybersecurity, and industrial automation. Anomaly detection in networks has received intensive attention from network security analysts and researchers, as it is crucial to network security, and detecting network anomalies in a timely manner is of critical importance. Due to various issues and the resource-constrained nature of IoT devices, conventional anomaly detection strategies cannot be directly implemented in the IoT. Hence, this paper highlights various recent techniques for detecting anomalies in IoT and its applications. We also present anomalies at multiple layers of the IoT architecture. In addition, we discuss multiple computing platforms and highlight various challenges of anomaly detection. Finally, potential future directions of these methods are suggested, leading to various open research issues to be analyzed afterward. With this survey, we hope that readers can gain a better understanding of anomaly detection, as well as research trends in this domain.
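
To make the discussion concrete, the sketch below shows the kind of lightweight, unsupervised detector that resource-constrained IoT deployments often fall back on: an Isolation Forest fitted on normal sensor readings and used to flag outliers. The feature layout and thresholds are illustrative assumptions, not a method from the surveyed literature.

```python
# Minimal sketch: unsupervised anomaly detection on IoT sensor readings.
# The features (temperature, humidity, packet rate) are hypothetical examples.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[22.0, 45.0, 120.0], scale=[1.5, 5.0, 10.0], size=(500, 3))
spikes = rng.normal(loc=[60.0, 5.0, 900.0], scale=[2.0, 2.0, 50.0], size=(5, 3))
readings = np.vstack([normal, spikes])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
labels = detector.predict(readings)            # +1 = normal, -1 = anomalous
scores = detector.decision_function(readings)  # lower = more anomalous

for i in np.where(labels == -1)[0]:
    print(f"reading {i} flagged as anomalous, score={scores[i]:.3f}")
```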

Citations: 0
A systematic survey on fault-tolerant solutions for distributed data analytics: Taxonomy, comparison, and future directions
IF 13.3 | Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-01 | DOI: 10.1016/j.cosrev.2024.100660
Sucharitha Isukapalli, Satish Narayana Srirama

Fault tolerance is becoming increasingly important for upcoming exascale systems, supporting distributed data processing, due to the expected decrease in the Mean Time Between Failures (MTBF). To ensure the availability, reliability, dependability, and performance of the system, addressing the fault tolerance challenge is crucial. It aims to keep the distributed system running at a reduced capacity while avoiding complete data loss, even in the presence of faults, with minimal impact on system performance. This comprehensive survey aims to provide a detailed understanding of the importance of fault tolerance in distributed systems, including a classification of faults, errors, failures, and fault-tolerant techniques (reactive, proactive, and predictive). We collected a corpus of 490 papers published from 2014 to 2023 by searching in Scopus, IEEE Xplore, Springer, and ACM digital library databases. After a systematic review, 17 reactive models, 17 proactive models, and 14 predictive models were shortlisted and compared. A taxonomy of ideas behind the proposed models was also created for each of these categories of fault-tolerant solutions. Additionally, the survey examines how fault tolerance capability is incorporated into popular big data processing tools such as Apache Hadoop, Spark, and Flink. Finally, promising future research directions in this domain are discussed.
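
As a toy illustration of the reactive category, the following pure-Python sketch implements a checkpoint-and-restart loop: the job periodically persists its state so that, after a crash, rerunning it resumes from the last checkpoint instead of recomputing from scratch. The file name and checkpoint interval are assumptions made for the example; real frameworks such as Hadoop, Spark, and Flink implement far more elaborate mechanisms.

```python
# Minimal sketch of reactive fault tolerance via checkpoint/restart (illustrative only).
import os
import pickle

CHECKPOINT = "job_state.pkl"  # hypothetical checkpoint file

def load_state():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"next_item": 0, "partial_sum": 0}

def save_state(state):
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)  # atomic rename: a crash never leaves a corrupt checkpoint

def run_job(data, checkpoint_every=100):
    state = load_state()  # resume from the last checkpoint, if any
    for i in range(state["next_item"], len(data)):
        state["partial_sum"] += data[i]
        state["next_item"] = i + 1
        if state["next_item"] % checkpoint_every == 0:
            save_state(state)
    save_state(state)
    return state["partial_sum"]

print(run_job(list(range(10_000))))  # rerunning after a simulated crash picks up where it left off
```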

Citations: 0
Deep learning for hyperspectral image classification: A survey
IF 13.3 | Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-01 | DOI: 10.1016/j.cosrev.2024.100658
Vinod Kumar, Ravi Shankar Singh, Medara Rambabu, Yaman Dua

Hyperspectral image (HSI) classification is a significant topic of discussion in real-world applications. The prevalence of these applications stems from the precise spectral information offered by each pixel's data in hyperspectral imaging (HS). Classical machine learning (ML) methods face challenges in precise object classification due to the complexity of HSI data. The intrinsic non-linear relationship between spectral information and materials complicates the task. Deep learning (DL) has proven to be a robust feature extractor in computer vision, effectively addressing nonlinear challenges. This validation drives its integration into HSI classification, where it proves to be highly effective. This review compares DL approaches to HSI classification, highlighting their superiority over classical ML algorithms. Subsequently, a framework is constructed to analyze current advances in DL-based HSI classification, categorizing studies by whether a network uses only spectral features, only spatial features, or both spectral–spatial features. Moreover, we explain a few recent advanced DL models. Additionally, the study acknowledges that DL demands a substantial number of labeled training instances; however, obtaining such a large dataset for the HSI classification framework proves to be time- and cost-intensive. We therefore also explain DL methodologies that work well with limited training data. Consequently, the survey introduces techniques aimed at enhancing the generalization performance of DL procedures, offering guidance for the future.
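
The spectral-spatial category can be illustrated with a small 3D-convolutional network that consumes a patch of neighboring pixels across all spectral bands; the patch size, band count, class count, and layer widths below are arbitrary assumptions for the sketch, not a model from the surveyed work.

```python
# Minimal spectral-spatial 3D-CNN sketch for HSI patch classification (illustrative sizes).
import torch
import torch.nn as nn

class SpectralSpatialCNN(nn.Module):
    def __init__(self, num_classes=16):
        super().__init__()
        # 3D convolutions mix spectral (depth) and spatial (height/width) information.
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):  # x: (batch, 1, bands, height, width)
        return self.classifier(self.features(x).flatten(1))

model = SpectralSpatialCNN()
patches = torch.randn(4, 1, 100, 9, 9)  # four 9x9 patches with 100 spectral bands
print(model(patches).shape)             # torch.Size([4, 16])
```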

Citations: 0
Unsupervised affinity learning based on manifold analysis for image retrieval: A survey
IF 13.3 | Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-01 | DOI: 10.1016/j.cosrev.2024.100657
V.H. Pereira-Ferrero, T.G. Lewis, L.P. Valem, L.G.P. Ferrero, D.C.G. Pedronette, L.J. Latecki

Despite the advances in machine learning techniques, similarity assessment among multimedia data remains a challenging task of broad interest in computer science. Substantial progress has been achieved in acquiring meaningful data representations, but how to compare them plays a pivotal role in machine learning and retrieval tasks. Traditional pairwise measures are widely used, yet unsupervised affinity learning approaches have emerged as a valuable solution for enhancing retrieval effectiveness. These methods leverage the dataset manifold to encode contextual information, refining initial similarity/dissimilarity measures through post-processing. In other words, measuring the similarity between data objects within the context of other data objects is often more effective. This survey provides a comprehensive discussion of unsupervised post-processing methods, addressing the historical development and proposing an organization of the area, with a specific emphasis on image retrieval. A systematic review was conducted, contributing to a formal understanding of the field. Additionally, an experimental study is presented to evaluate the potential of such methods in improving retrieval results, focusing on recent features extracted from Convolutional Neural Networks (CNNs) and Transformer models, across 8 distinct datasets and over 329,877 analyzed images. In a state-of-the-art comparison on the Flowers, Corel5k, and ALOI datasets, the Rank Flow Embedding method outperformed all other state-of-the-art approaches, achieving 99.65%, 96.79%, and 97.73%, respectively.
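
A minimal numpy sketch of the contextual idea described above: instead of ranking by raw pairwise distances, similarities are diffused over a kNN graph of the collection so that the manifold structure re-ranks the results for a query. The Gaussian kernel, neighborhood size, and number of diffusion steps are illustrative choices, and this is not the Rank Flow Embedding algorithm itself.

```python
# Minimal sketch: affinity diffusion over a kNN graph for contextual re-ranking (illustrative).
import numpy as np

def knn_affinity(features, k=5, sigma=1.0):
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(w, 0.0)
    drop = np.argsort(-w, axis=1)[:, k:]      # keep only the k strongest edges per node
    np.put_along_axis(w, drop, 0.0, axis=1)
    return np.maximum(w, w.T)                 # symmetrize

def diffuse(w, steps=20, alpha=0.9):
    s = w / np.maximum(w.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic transition matrix
    a = np.eye(len(w))
    for _ in range(steps):
        a = alpha * s @ a + (1 - alpha) * np.eye(len(w))     # spread similarity along the manifold
    return a

features = np.random.default_rng(0).normal(size=(50, 8))     # stand-in for CNN/Transformer features
affinity = diffuse(knn_affinity(features))
query = 0
print(np.argsort(-affinity[query])[:10])                     # contextual ranking for the query
```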

Citations: 0
Digital image watermarking using deep learning: A survey
IF 13.3 | Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-01 | DOI: 10.1016/j.cosrev.2024.100662
Khalid M. Hosny, Amal Magdi, Osama ElKomy, Hanaa M. Hamza

Lately, a lot of attention has been paid to securing the ownership rights of digital images. The expanding usage of the Internet causes several problems, including data piracy and data tampering. Image watermarking is a typical method of protecting an image's copyright. Robust watermarking for digital images is the process of embedding watermarks in the cover image and extracting them correctly under different attacks. The embedded watermark might be either visible or invisible. Deep learning extracts image features using neural networks and is highly effective at feature extraction, so watermarking techniques that utilize deep learning have gained a lot of interest due to this remarkable ability. This article offers an overview of digital image watermarking and deep learning, and discusses several research articles on digital image watermarking in deep-learning environments.
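
For reference, the embed-and-extract process described above can be shown with a classical least-significant-bit watermark; this is a deliberately simple baseline written for illustration, not one of the deep-learning methods the survey covers, and it is not robust to the attacks discussed there.

```python
# Minimal LSB watermarking sketch (classical baseline, not a deep-learning method).
import numpy as np

def embed(cover, bits):
    """Hide one bit per pixel in the least significant bit of an 8-bit cover image."""
    flat = cover.flatten().copy()
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract(stego, n_bits):
    """Recover the first n_bits hidden bits from the stego image."""
    return stego.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
watermark = rng.integers(0, 2, size=128, dtype=np.uint8)

stego = embed(cover, watermark)
print("exact recovery:", np.array_equal(extract(stego, len(watermark)), watermark))
```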

Citations: 0
A comprehensive review of vulnerabilities and AI-enabled defense against DDoS attacks for securing cloud services
IF 13.3 | Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-01 | DOI: 10.1016/j.cosrev.2024.100661
Surendra Kumar, Mridula Dwivedi, Mohit Kumar, Sukhpal Singh Gill

The advent of cloud computing has made a global impact by providing on-demand services, elasticity, scalability, and flexibility, hence delivering cost-effective resources to end users in a pay-as-you-go manner. However, securing cloud services against vulnerabilities, threats, and modern attacks remains a major concern. Application layer attacks are particularly problematic because they can cause significant damage and are often difficult to detect, as malicious traffic can be indistinguishable from normal traffic flows. Moreover, preventing Distributed Denial of Service (DDoS) attacks is challenging due to their high impact on physical computer resources and network bandwidth. This study examines new variations of DDoS attacks within the broader context of cyber threats and utilizes Artificial Intelligence (AI)-based approaches to detect and prevent such modern attacks. The conducted investigation determines that current detection methods predominantly employ collective, hybrid, and single Machine Learning (ML)/Deep Learning (DL) techniques. Further, the analysis of diverse DDoS attacks and their related defensive strategies is vital in safeguarding cloud infrastructure against the detrimental consequences of DDoS attacks. This article offers a comprehensive classification of the various types of cloud DDoS attacks, along with an in-depth analysis of the characterization, detection, prevention, and mitigation strategies employed. The article presents an in-depth analysis of crucial performance measures used to assess different defence systems and their effectiveness in a cloud computing environment. This article aims to encourage cloud security researchers to devise efficient defence strategies against diverse DDoS attacks. The survey identifies and elucidates the research gaps and obstacles, while also providing an overview of potential future research areas.
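
To ground the single-model ML category mentioned above, here is a minimal supervised sketch that classifies flows as benign or DDoS from a few flow-level features; the feature set and the synthetic data are assumptions made for illustration, not a detector taken from the surveyed literature.

```python
# Minimal sketch: flow-level DDoS classification with a single ML model (illustrative data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical per-flow features: packets per second, mean packet size, distinct source IPs.
benign = rng.normal(loc=[200, 800, 5], scale=[50, 150, 2], size=(1000, 3))
attack = rng.normal(loc=[5000, 120, 400], scale=[800, 30, 80], size=(1000, 3))

X = np.vstack([benign, attack])
y = np.array([0] * len(benign) + [1] * len(attack))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["benign", "ddos"]))
```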

Citations: 0
A survey on the parameterized complexity of reconfiguration problems
IF 13.3 | Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-01 | DOI: 10.1016/j.cosrev.2024.100663
Nicolas Bousquet, Amer E. Mouawad, Naomi Nishimura, Sebastian Siebertz

A graph vertex-subset problem defines which subsets of the vertices of an input graph are feasible solutions. We view a feasible solution as a set of tokens placed on the vertices of the graph. A reconfiguration variant of a vertex-subset problem asks, given two feasible solutions of size k, whether it is possible to transform one into the other by a sequence of token slides (along edges of the graph) or token jumps (between arbitrary vertices of the graph) such that each intermediate set remains a feasible solution of size k. Many algorithmic questions present themselves in the form of reconfiguration problems: Given the description of an initial system state and the description of a target state, is it possible to transform the system from its initial state into the target one while preserving certain properties of the system in the process? Such questions have received a substantial amount of attention under the so-called combinatorial reconfiguration framework. We consider reconfiguration variants of three fundamental underlying graph vertex-subset problems, namely Independent Set, Dominating Set, and Connected Dominating Set. We survey both older and more recent work on the parameterized complexity of all three problems when parameterized by the number of tokens k. The emphasis will be on positive results and the most common techniques for the design of fixed-parameter tractable algorithms.
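
The token-jump variant for Independent Set can be made concrete with a brute-force breadth-first search over all independent sets of size k. This is only feasible for tiny graphs, but it mirrors the definition given above; the example path graph is an arbitrary choice.

```python
# Brute-force sketch of Independent Set Reconfiguration under token jumps (tiny graphs only).
from itertools import combinations
from collections import deque

def is_independent(vertices, edges):
    return all(not (u in vertices and v in vertices) for u, v in edges)

def reconfigurable(n, edges, start, target, k):
    """Can `start` reach `target` by moving one token per step, with every
    intermediate set remaining an independent set of size k?"""
    feasible = {frozenset(s) for s in combinations(range(n), k) if is_independent(s, edges)}
    start, target = frozenset(start), frozenset(target)
    if start not in feasible or target not in feasible:
        return False
    queue, seen = deque([start]), {start}
    while queue:
        cur = queue.popleft()
        if cur == target:
            return True
        for out in cur:                        # token to pick up
            for dest in set(range(n)) - cur:   # token jump: place it on any other vertex
                nxt = (cur - {out}) | {dest}
                if nxt in feasible and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]       # path graph 0-1-2-3-4
print(reconfigurable(5, edges, start={0, 2}, target={2, 4}, k=2))  # True: jump token 0 -> 4
```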

Citations: 0
The interaction design of 3D virtual humans: A survey
IF 13.3 | Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-17 | DOI: 10.1016/j.cosrev.2024.100653
Xueyang Wang, Nan Cao, Qing Chen, Shixiong Cao

Virtual humans have become a hot research topic in recent years due to the development of AI technology and computer graphics. In this survey, we provide a comprehensive review of the interaction design of 3D virtual humans. We first categorize the interaction design of virtual humans into speech, eye, facial expressions, and posture interactions. Then we describe the combination of different modalities of virtual humans in the multimodal interaction design section. We also summarize the applications of intelligent virtual humans in the fields of education, healthcare, and work assistance. The final part of the paper discusses the remaining challenges and opportunities in virtual human interaction design, along with future directions in this field. This paper hopes to help researchers quickly understand the characteristics of various modal interactions in the process of designing intelligent virtual humans and provide design guidance and suggestions.

Citations: 0
Mobile robot localization: Current challenges and future prospective
IF 13.3 | Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-05 | DOI: 10.1016/j.cosrev.2024.100651
Inam Ullah, Deepak Adhikari, Habib Khan, M. Shahid Anwar, Shabir Ahmad, Xiaoshan Bai

Mobile Robots (MRs) and their applications are undergoing massive development, requiring a diversity of autonomous or self-directed robots to fulfill numerous objectives and responsibilities. Integrating MRs with the Intelligent Internet of Things (IIoT) not only makes robots innovative, trackable, and powerful but also generates numerous threats and challenges in multiple applications. The IIoT combines intelligent techniques, including artificial intelligence and machine learning, with the Internet of Things (IoT). The location information (localization) of MRs underpins innumerable application domains. To fully realize the potential of localization, Mobile Robot Localization (MRL) algorithms need to be integrated with complementary technologies, such as MR classification, indoor localization mapping solutions, three-dimensional localization, etc. Thus, this paper endeavors to comprehensively review different methodologies and technologies for MRL, emphasizing intelligent architecture, indoor and outdoor methodologies, concepts, and security-related issues. Additionally, we highlight the diverse MRL applications where localization information is challenging to obtain and present the various computing platforms. Finally, we discuss several challenges regarding navigation path planning, localization, obstacle avoidance, security, localization problem categories, etc., and highlight potential future perspectives on MRL techniques and applications.
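
As a minimal illustration of the probabilistic side of MRL, the sketch below runs a one-dimensional particle filter: particles are propagated with a noisy odometry command and re-weighted against a range measurement to a known landmark, then resampled. The motion and sensor noise levels, the landmark position, and the map are assumptions chosen only to show the predict-update-resample cycle.

```python
# Minimal 1-D particle filter sketch for mobile robot localization (illustrative noise levels).
import numpy as np

rng = np.random.default_rng(0)
N = 1000
LANDMARK = 10.0                                  # known landmark position on the assumed map

particles = rng.uniform(0.0, 20.0, size=N)       # unknown start: spread particles over the map
true_pos = 2.0

for _ in range(15):
    # Predict: apply the odometry command (+1.0 per step) with motion noise.
    true_pos += 1.0
    particles += 1.0 + rng.normal(0.0, 0.1, size=N)

    # Update: weight particles by how well they explain the noisy range measurement.
    z = abs(LANDMARK - true_pos) + rng.normal(0.0, 0.2)
    expected = np.abs(LANDMARK - particles)
    weights = np.exp(-0.5 * ((z - expected) / 0.2) ** 2)
    weights /= weights.sum()

    # Resample: draw a new particle set in proportion to the weights.
    particles = particles[rng.choice(N, size=N, p=weights)]

print(f"true position: {true_pos:.2f}  estimate: {particles.mean():.2f}")
```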

Citations: 0
Applicability of genetic algorithms for stock market prediction: A systematic survey of the last decade
IF 13.3 | Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-03 | DOI: 10.1016/j.cosrev.2024.100652
Ankit Thakkar, Kinjal Chaudhari

The stock market is an attractive domain for researchers as well as academicians. It represents highly complex, non-linear, fluctuating market behaviours where traders, investors, and organizers look forward to reliable future predictions of the market indices. Such prediction problems can be computationally addressed using various machine learning, deep learning, sentiment analysis, and mining approaches. However, the internal parameter configuration can play an important role in prediction performance; feature selection is also a crucial task. Therefore, to optimize such approaches, evolutionary computation-based algorithms can be integrated in several ways. In this article, we systematically conduct a focused survey on the genetic algorithm (GA) and its applications for stock market prediction; GAs are known for their parallel search mechanism for solving complex real-world problems, and various genetic perspectives are also integrated with machine learning and deep learning methods to address financial forecasting. Thus, we aim to analyse the potential extensibility and adaptability of GAs for stock market prediction. We review stock price prediction, stock trend prediction, and portfolio optimization approaches over recent years (2013–2022) to signify the state of the art of GA-based optimization in financial markets. We broaden our discussion by briefly reviewing other genetic perspectives and their applications for stock market forecasting. We balance our survey with a consideration of the competitiveness and complementarity of GAs, followed by highlighting the challenges and potential future research directions of applying GAs for stock market prediction.
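
To show one of the ways a GA is typically integrated, the sketch below uses it as a feature-selection wrapper: binary masks over a set of candidate indicators are evolved, and each mask is scored by the cross-validated accuracy of a simple trend classifier. The indicator count, GA parameters, and synthetic data are assumptions for demonstration, not a recipe from the surveyed papers.

```python
# Minimal GA sketch for feature selection in a stock-trend classifier (illustrative setup).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))                    # 10 hypothetical technical indicators
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 300) > 0).astype(int)   # up/down label

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(LogisticRegression(max_iter=1000),
                           X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, 10))           # population of binary feature masks
for _ in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(-scores)[:10]]       # selection: keep the best half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, 10)
        child = np.concatenate([a[:cut], b[cut:]])                  # one-point crossover
        child = np.where(rng.random(10) < 0.1, 1 - child, child)    # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents] + children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected indicator indices:", np.where(best == 1)[0])
```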

Citations: 0