
Concurrency and Computation-Practice & Experience: Latest Publications

Improved deep network-based load predictor and optimal load balancing in cloud-fog services
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-09-04 | DOI: 10.1002/cpe.8275
Shubham Singh, Amit Kumar Mishra, Siddhartha Kumar Arjaria, Chinmay Bhatt, Daya Shankar Pandey, Ritesh Kumar Yadav

Cloud computing is commonly utilized in remote contexts to handle user demands for resources and services. Each assignment has unique processing needs that are determined by the time it takes to complete. However, if load balancing is not properly managed, the effectiveness of resources may suffer dramatically. Consequently, cloud service providers have to emphasize rapid and precise load balancing as well as proper resource provisioning. This paper proposes a novel enhanced deep network-based load predictor and load balancing scheme for cloud-fog services. First, the workload is predicted using a deep network called the Multiple Layers Assisted in LSTM (MLA-LSTM) model, which takes the capacity of the virtual machine (VM) and the task as input and predicts the target label as underload, overload, or equally balanced. According to this prediction, optimal load balancing is performed through a hybrid optimization named the Osprey Assisted Pelican Optimization Algorithm (OAPOA), taking into account several parameters such as makespan, execution cost, resource consumption, and server load. Additionally, a process known as load migration is carried out, in which tasks from overloaded machines are reassigned to underloaded machines. This migration is applied optimally via the OAPOA strategy under constraints including migration cost and migration efficiency.
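
The migration step lends itself to a compact illustration. The Python sketch below shows only a greedy reassignment of tasks from overloaded to underloaded machines with a per-unit migration cost; the VM and task structures, the load thresholds, and the cost rule are assumptions for illustration, and the greedy loop stands in for the OAPOA metaheuristic rather than reproducing it.

```python
# Hypothetical sketch of the load-migration step: tasks on VMs labeled "overload"
# are reassigned to VMs labeled "underload". The greedy rule is a placeholder,
# not the OAPOA metaheuristic described in the paper.
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    capacity: float                                  # total capacity units
    tasks: list = field(default_factory=list)        # list of (task_id, load)

    @property
    def load(self) -> float:
        return sum(l for _, l in self.tasks)

    def label(self) -> str:
        ratio = self.load / self.capacity
        if ratio > 0.8:
            return "overload"
        if ratio < 0.4:
            return "underload"
        return "balanced"

def migrate(vms: list[VM], cost_per_unit: float = 0.1) -> float:
    """Greedily move tasks from overloaded to underloaded VMs; return total migration cost."""
    total_cost = 0.0
    for src in [v for v in vms if v.label() == "overload"]:
        for task in sorted(src.tasks, key=lambda t: t[1], reverse=True):
            dst = min((v for v in vms if v.label() == "underload"),
                      key=lambda v: v.load, default=None)
            if dst is None or src.label() != "overload":
                break                                # no target left, or source already relieved
            src.tasks.remove(task)
            dst.tasks.append(task)
            total_cost += task[1] * cost_per_unit    # simple per-load-unit migration cost
    return total_cost

vms = [VM("vm1", 10, [("t1", 6), ("t2", 4)]), VM("vm2", 10, [("t3", 1)])]
print(migrate(vms), [(v.name, v.label()) for v in vms])   # prints total cost and new labels
```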

Large language model evaluation for high-performance computing software development
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-09-04 | DOI: 10.1002/cpe.8269
William F. Godoy, Pedro Valero-Lara, Keita Teranishi, Prasanna Balaprakash, Jeffrey S. Vetter

We apply AI-assisted large language model (LLM) capabilities of GPT-3 targeting high-performance computing (HPC) kernels for (i) code generation, and (ii) auto-parallelization of serial code in C++, Fortran, Python and Julia. Our scope includes the following fundamental numerical kernels: AXPY, GEMV, GEMM, SpMV, Jacobi Stencil, and CG, and language/programming models: (1) C++ (e.g., OpenMP [including offload], OpenACC, Kokkos, SyCL, CUDA, and HIP), (2) Fortran (e.g., OpenMP [including offload] and OpenACC), (3) Python (e.g., numpy, Numba, cuPy, and pyCUDA), and (4) Julia (e.g., Threads, CUDA.jl, AMDGPU.jl, and KernelAbstractions.jl). Kernel implementations are generated using GitHub Copilot capabilities powered by the GPT-based OpenAI Codex available in Visual Studio Code, given simple <kernel> + <programming model> + <optional hints> prompt variants. To quantify and compare the generated results, we propose a proficiency metric around the initial 10 suggestions given for each prompt. For auto-parallelization, we use ChatGPT interactively, giving simple prompts as in a dialogue with another human, including simple "prompt engineering" follow-ups. Results suggest that correct outputs for C++ correlate with the adoption and maturity of programming models. For example, OpenMP and CUDA score very high, whereas HIP is still lacking. We found that prompts from either a targeted language such as Fortran or the more general-purpose Python can benefit from adding language keywords, while Julia prompts perform acceptably well for its Threads and CUDA.jl programming models. We expect to provide an initial quantifiable point of reference for code generation in each programming model using a state-of-the-art LLM. Overall, understanding the convergence of LLMs, AI, and HPC is crucial due to its rapidly evolving nature and how it is redefining human-computer interactions.
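
The proficiency metric itself is not spelled out in the abstract, so the Python sketch below shows one plausible way to score the first 10 suggestions per prompt: a rank-weighted fraction of correct suggestions. The weighting scheme and the example judgements are assumptions for illustration, not the authors' exact metric.

```python
# A hedged sketch of a rank-weighted proficiency score over the first 10 Copilot
# suggestions for a prompt. The weighting is an assumption; the paper's exact metric may differ.
def proficiency(judgements: list[bool], k: int = 10) -> float:
    """judgements[i] is True if suggestion i (0-based rank) compiles and is correct.
    Earlier correct suggestions count more; the score is normalized to [0, 1]."""
    weights = [1.0 / (rank + 1) for rank in range(k)]             # 1, 1/2, ..., 1/10
    scored = sum(w for w, ok in zip(weights, judgements[:k]) if ok)
    return scored / sum(weights)

# Example: a hypothetical OpenMP AXPY prompt where suggestions 1, 2, and 5 were correct.
print(round(proficiency([True, True, False, False, True] + [False] * 5), 3))   # ≈ 0.58
```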

Detection and mitigation of few control plane attacks in software defined network environments using deep learning algorithm
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-09-03 | DOI: 10.1002/cpe.8256
M. Anand Kumar, Edeh Michael Onyema, B. Sundaravadivazhagan, Manish Gupta, Achyut Shankar, Venkataramaiah Gude, Nagendar Yamsani

Software-defined networking (SDN) is an architecture that abstracts the distinct layers of a network in order to make it more adaptable and flexible. By enabling businesses and service providers to react swiftly to shifting business requirements, SDN aims to improve network control. SDN has become an important framework for the Internet of Things (IoT) and 5G. Despite recent research endeavors focused on pinpointing constraints within SDN design components, various security attacks persist, including man-in-the-middle attacks, host hijacking, ARP poisoning, and saturation attacks. Overcoming these limitations poses a challenge, necessitating robust security techniques to detect and counteract such attacks in SDN environments. This study is dedicated to developing a method for detecting and mitigating control plane attacks within software-defined network environments utilizing deep learning algorithms. The study presents a deep-learning-based approach to identifying malicious hosts within SDN networks, thus thwarting unauthorized access to the controller. Experimental results demonstrate the effectiveness of the proposed model in host classification, exhibiting high accuracy and performance compared to alternative approaches.
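
As a rough illustration of the host-classification idea, the sketch below trains a small scikit-learn MLP on synthetic per-host flow features and labels hosts as benign or malicious. The feature set, the data, and the network size are assumptions; the paper's actual deep architecture and SDN dataset are not reproduced here.

```python
# Minimal sketch of classifying SDN hosts as benign/malicious from flow statistics.
# Features and data are synthetic placeholders, not the paper's experimental setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Hypothetical per-host features: [packet rate, ARP replies/min, new flows/min, bytes/flow]
benign = rng.normal([100, 2, 20, 800], [30, 1, 5, 200], size=(500, 4))
malicious = rng.normal([400, 30, 200, 120], [80, 10, 50, 60], size=(500, 4))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)            # 0 = benign, 1 = malicious

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("host classification accuracy:", round(clf.score(X_te, y_te), 3))
```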

Novel hawk swarm-optimized deep learning classification with K-nearest neighbor based decision making for autonomous vehicle movement controller
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-31 | DOI: 10.1002/cpe.8241
Zhang Qingmiao, Zhang Dinghua

Nowadays, intelligent transportation systems pay considerable attention to autonomous vehicles, as an autonomous vehicle is believed to improve mobility, comfort, safety, and energy efficiency. Decision making is essential for the development of autonomous vehicles, since these algorithms must be able to manage dynamic and complex urban crossings. This research proposes an optimal deep BiLSTM-GAN classifier to detect the movement of smart vehicles. Initially, a preprocessing stage is performed to decrease noise in the received data, after which the essential regions are extracted as the region of interest (ROI) to support the right decision. The extracted data are forwarded to the GAN for road segmentation as well as to the optimized deep BiLSTM classifier, which recognizes the traffic sign, simultaneously making it possible to perform a modified Hough line-based maneuver prediction using the segmented road information. Finally, the GAN determines the lane, and the BiLSTM predicts the traffic sign. A K-nearest neighbor (KNN)-based autonomous vehicle movement controller is used to make the decision based on the predicted traffic sign and information about the lane. The proposed HSO algorithm was developed as the outcome of the fusion of hawk and swarm optimization. Based on the lane detection results at a training percentage (TP) of 90, the accuracy is 91.75%, the peak signal-to-noise ratio (PSNR) is 64.84%, the mean square error (MSE) is 28.78, and the mean absolute error (MAE) is 20.20; similarly, based on the traffic sign prediction results at TP 90, the accuracy is 93.71%, the sensitivity is 95.15%, the specificity is 93.91%, and the MSE is 28.78%.
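
The final decision stage can be illustrated with a tiny scikit-learn K-nearest neighbor controller that maps a predicted traffic-sign class and simple lane information to a movement command. The feature encoding, the training examples, and the command labels are hypothetical; only the KNN-based decision idea follows the abstract.

```python
# A hedged sketch of the KNN-based movement controller. Feature encoding and the
# small training set are assumptions for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Features: [sign_id (0=stop, 1=speed limit, 2=turn left), lane_offset_m, obstacle_flag]
X = np.array([
    [0, 0.0, 0], [0, 0.3, 1],      # stop sign -> brake
    [1, 0.1, 0], [1, -0.2, 0],     # speed limit, roughly centered -> cruise
    [2, 0.0, 0], [2, 0.4, 0],      # turn sign -> steer left
])
y = np.array(["brake", "brake", "cruise", "cruise", "steer_left", "steer_left"])

controller = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(controller.predict([[1, 0.05, 0], [0, 0.2, 0]]))   # expected: ['cruise' 'brake']
```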

To secure an e-commerce system using epidemic mathematical modeling with neural network
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-29 | DOI: 10.1002/cpe.8270
Kumar Sachin Yadav, Ajit Kumar Keshri

Securing an e-commerce system using epidemic mathematical modeling with neural networks involves adapting epidemiological principles to combat the spread of misinformation. Just as epidemiologists track the spread of diseases through populations, we can track the dissemination of fake news through online platforms. By modeling how fake news spreads, we gain insights into its propagation patterns, enabling us to develop more effective countermeasures. Neural networks, with their ability to learn from data, play a crucial role in this process by analyzing vast amounts of information to identify and mitigate the impact of fake news. One potential disadvantage of using epidemic mathematical modeling with neural networks to secure e-commerce systems is the complexity of the approach. The epidemic-based recurrent long short-term memory (E-RLSTM) technique addresses the complexity and evolving nature of fake news propagation by leveraging the strengths of recurrent neural networks (RNNs), specifically long short-term memory (LSTM) units, within an epidemic modeling framework. One advantage of this approach is its proactive nature; another significant finding is its ability to uncover hidden connections and correlations within the data. E-RLSTM stands out by capturing temporal dynamics and integrating epidemic parameters into its LSTM architecture, ensuring robustness and adaptability in detecting and combating fake news within e-commerce systems, and it outperforms other techniques in accuracy and performance. The NSL-KDD dataset offers easy access to a valuable repository for benchmarking cyber security: it contains more than 120,000 authentic samples of cyber-attacks across 41 distinct categories, providing an excellent environment for testing intrusion detection systems.
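
A minimal sketch of the E-RLSTM idea, assuming PyTorch and a hypothetical data layout: an LSTM classifier over a news item's spread sequence, with an epidemic-style transmission-rate feature appended at each time step. The layer sizes and the beta feature are illustrative choices, not the paper's exact architecture.

```python
# Hedged sketch: LSTM over a spread sequence with an epidemic-style rate feature.
# Sizes, the beta feature, and the data format are assumptions for illustration.
import torch
import torch.nn as nn

class ERLSTM(nn.Module):
    def __init__(self, n_features: int = 4, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features + 1, hidden, batch_first=True)  # +1 for beta
        self.head = nn.Linear(hidden, 2)                               # fake vs. genuine

    def forward(self, x: torch.Tensor, beta: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features) engagement counts per step
        # beta: (batch, time, 1) estimated infection-style spread rate per step
        out, _ = self.lstm(torch.cat([x, beta], dim=-1))
        return self.head(out[:, -1])                                   # logits from last step

model = ERLSTM()
x = torch.randn(8, 24, 4)     # 8 items, 24 hourly steps, 4 engagement features
beta = torch.rand(8, 24, 1)   # hypothetical per-step spread rate
print(model(x, beta).shape)   # torch.Size([8, 2])
```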

Secure access technology for industrial internet of things
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-29 | DOI: 10.1002/cpe.8231
Bingquan Wang, Jin Peng, Meili Cui

When terminal devices attempt to access the industrial internet of things (IIoT), preventing illegal access from untrusted terminals becomes challenging. This difficulty arises because most devices adopt the commonly used traditional methods of accessing the internet of things. To address this challenge, we propose a perception-layer-based IIoT trusted connection architecture, derived from the trusted connection architecture (TCA), and name it TCA-IIoT. This architecture enables bidirectional identity and platform integrity authentication between access points and terminals, while also ensuring trusted authentication of IIoT terminal behavior. To validate the effectiveness of TCA-IIoT, the paper details a simulation experiment centered on evaluating the success rate of data transmission and measuring the average delay under various conditions, including scenarios with malicious nodes. The results indicate that TCA-IIoT markedly improves the security and reliability of IIoT networks, advancements that are vital for the sustainable development and broader application of these systems.
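
The bidirectional identity authentication between access point and terminal can be illustrated generically with a pre-shared-key challenge-response exchange, as in the Python sketch below. This is only a minimal illustration of mutual authentication; it is not the TCA or TCA-IIoT protocol specification, and the key-provisioning step is assumed.

```python
# Generic challenge-response sketch of mutual authentication between an access point
# and a terminal using a pre-shared key. Not the TCA/TCA-IIoT protocol itself.
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(32)   # assumed to be provisioned out of band to both parties

def respond(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Access point authenticates the terminal ...
ap_challenge = os.urandom(16)
terminal_proof = respond(SHARED_KEY, ap_challenge)
terminal_ok = hmac.compare_digest(terminal_proof, respond(SHARED_KEY, ap_challenge))

# ... and the terminal authenticates the access point with its own challenge.
term_challenge = os.urandom(16)
ap_proof = respond(SHARED_KEY, term_challenge)
ap_ok = hmac.compare_digest(ap_proof, respond(SHARED_KEY, term_challenge))

print("mutual authentication succeeded:", terminal_ok and ap_ok)
```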

A distributed memory parallel randomized Kaczmarz for sparse system of equations
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-27 | DOI: 10.1002/cpe.8274
Ercan Selçuk Bölükbaşı, Fahreddin Şükrü Torun, Murat Manguoğlu

The Kaczmarz algorithm is an iterative projection method for solving systems of linear equations that arise in science and engineering problems across various application domains. In addition to the classical Kaczmarz method, there are randomized and parallel variants. The main challenge of a parallel implementation is the dependency of each Kaczmarz iteration on its predecessor. Because of this dependency, frequent communication is required, which results in a substantial overhead. In this study, a new distributed parallel method that reduces the communication overhead is proposed. The proposed method partitions the problem so that the Kaczmarz iterations on different blocks are less dependent. A frequency parameter is introduced to study the effect of communication frequency on performance. The communication overhead is also decreased by allowing communication between processes only if they share non-zero columns. Experiments are performed using problems from various domains to compare the effects of different partitioning methods on the communication overhead and performance. Finally, parallel speedups of the proposed method on larger problems are presented.
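
For reference, a minimal single-process randomized Kaczmarz iteration for a consistent system Ax = b looks like the NumPy sketch below, with rows sampled proportionally to their squared norms. The distributed, block-partitioned variant proposed in the paper adds inter-process communication on shared non-zero columns, which is not shown here.

```python
# Single-process randomized Kaczmarz for Ax = b; the distributed variant is not shown.
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.einsum("ij,ij->i", A, A)        # squared row norms
    probs = row_norms / row_norms.sum()
    for _ in range(iters):
        i = rng.choice(m, p=probs)                 # sample a row proportional to its norm
        a_i = A[i]
        x += (b[i] - a_i @ x) / row_norms[i] * a_i # project x onto the hyperplane a_i.x = b[i]
    return x

A = np.random.default_rng(1).normal(size=(200, 50))
x_true = np.random.default_rng(2).normal(size=50)
b = A @ x_true                                     # consistent right-hand side
x = randomized_kaczmarz(A, b)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Sampling rows with probability proportional to their squared norms gives the expected linear convergence rate of Strohmer and Vershynin's randomized Kaczmarz method.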

Noise-robust neural networks for medical image segmentation by dual-strategy sample selection
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-26 | DOI: 10.1002/cpe.8271
Jialin Shi, Youquan Yang, Kailai Zhang

Deep neural networks for medical image segmentation often face the problem of insufficient clean labeled data. Although non-expert annotations are more readily accessible, these low-quality annotations lead to significant performance degradation of existing neural network methods. In this paper, we focus on robust learning of medical image segmentation with noisy annotations and propose a novel noise-tolerant framework based on dual-strategy sample selection, which selects informative samples to provide effective supervision. First, we propose a first round of sample selection by designing a novel joint loss, which includes a conventional supervised loss and a regularization loss. To further select information-rich samples, we propose confidence-based pseudo-label sample selection from a novel perspective as a complement. The dual strategies are used in a collaborative manner, and the network is optimized with the mined informative samples. We conducted extensive experiments on datasets with both simulated noisy labels and real-world noisy labels. For instance, on a simulated dataset with a 25% noise ratio, our method achieves a segmentation Dice value of 90.56% ± 0.03%. Furthermore, when the noise ratio increases to 95%, our method still maintains a high Dice value of 73.85% ± 0.28% compared to other baselines. Extensive results have demonstrated that our method can weaken the effects of noisy labels on medical image segmentation.
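
The two selection strategies can be sketched in a few lines of NumPy: keep the small-loss (likely clean-label) samples, and additionally keep predictions whose confidence exceeds a threshold as pseudo-labeled samples. The loss values, thresholds, and keep ratio below are placeholders, not the paper's joint loss or training schedule.

```python
# Hedged sketch of dual-strategy sample selection: small-loss samples plus
# high-confidence pseudo-labeled samples. Thresholds are illustrative placeholders.
import numpy as np

def select_samples(losses, probs, keep_ratio=0.7, conf_thresh=0.95):
    """losses: per-sample supervised loss; probs: per-sample max softmax probability."""
    n_keep = int(keep_ratio * len(losses))
    small_loss_idx = np.argsort(losses)[:n_keep]            # strategy 1: small-loss samples
    confident_idx = np.flatnonzero(probs >= conf_thresh)    # strategy 2: confident pseudo-labels
    return np.union1d(small_loss_idx, confident_idx)        # train only on the union

losses = np.array([0.1, 0.9, 0.2, 1.5, 0.3, 0.05])
probs = np.array([0.99, 0.97, 0.60, 0.40, 0.88, 0.92])
print(select_samples(losses, probs))   # indices used to optimize the network
```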

Fair XIDS: Ensuring fairness and transparency in intrusion detection models
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-26 | DOI: 10.1002/cpe.8268
Chinu, Urvashi Bansal

An intrusion detection system (IDS) is valuable for detecting anomalies and unauthorized access to a system or network. Due to the black-box nature of these IDS models, network experts need greater trust in such systems to act on alerts, and transparency to understand the model's inner logic. Moreover, biased model decisions degrade performance and increase false positive rates, directly affecting the model's accuracy. Maintaining transparency and fairness simultaneously in IDS models is therefore essential for accurate decision-making. Existing methods face the challenge of a tradeoff between fairness and accuracy, which also affects the reliability and robustness of the model. Motivated by these research gaps, we developed the Fair-XIDS model. This model clarifies its internal logic with visual explanations and promotes fairness across its entire lifecycle. The Fair-XIDS model successfully integrates complex transparency and fairness algorithms to address issues such as imbalanced datasets, algorithmic bias, and postprocessing bias, with an average 85% reduction in false positive rate. To ensure reliability, the proposed model effectively mitigates the tradeoff between accuracy and fairness, with an average of 90% accuracy and more than 85% fairness. The assessment results of the proposed model over diverse datasets and classifiers mark its model-agnostic nature. Overall, the model achieves more than 85% consistency among diverse classifiers.
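
One generic postprocessing idea in this line of work is to pick a separate decision threshold per group so that false positive rates are approximately equalized. The NumPy sketch below illustrates that idea only; it is not the Fair-XIDS pipeline, and the scores, groups, and target rate are synthetic assumptions.

```python
# Hedged illustration of a generic postprocessing fairness fix: per-group thresholds
# that roughly equalize the false positive rate. Not the Fair-XIDS method itself.
import numpy as np

def fpr(scores, labels, thresh):
    negatives = labels == 0
    return np.mean(scores[negatives] >= thresh) if negatives.any() else 0.0

def equalize_fpr(scores, labels, groups, target_fpr=0.05):
    """Return a per-group threshold whose FPR on that group is closest to target_fpr."""
    thresholds = {}
    for g in np.unique(groups):
        s, y = scores[groups == g], labels[groups == g]
        candidates = np.linspace(0, 1, 101)
        thresholds[g] = min(candidates, key=lambda t: abs(fpr(s, y, t) - target_fpr))
    return thresholds

rng = np.random.default_rng(0)
scores = rng.random(1000)                                   # anomaly scores from any detector
labels = (scores + rng.normal(0, 0.2, 1000) > 0.8).astype(int)
groups = rng.integers(0, 2, 1000)                           # e.g., two traffic categories
print(equalize_fpr(scores, labels, groups))
```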

Building an intrusion detection system on UNSW-NB15: Reducing the margin of error to deal with data overlap and imbalance
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-22 | DOI: 10.1002/cpe.8242
Zeinab Zoghi, Gursel Serpen

This study addresses the challenge of data imbalance and class overlap in machine learning for intrusion detection, proposing that targeted algorithmic adjustments can significantly enhance model performance. Our hypothesis contends that an ensemble framework, adeptly integrating novel threshold-adjustment algorithms, can improve classification sensitivity and specificity. To test this, we developed an ensemble model comprising Balanced Bagging (BB), eXtreme Gradient Boosting (XGBoost), and Random Forest (RF), fine-tuned using grid search for BB and XGBoost, and augmented with the Hellinger metric for RF to tackle data imbalance. The innovation lies in our algorithms, which adeptly adjust the discrimination threshold to rectify the class overlap problem, enhancing the model's ability to discern between negative and positive classes. Utilizing the UNSW-NB15 dataset, we conducted a comparative analysis for binary and multi-category classification. Our ensemble model achieved a binary classification accuracy of 97.80%, with a sensitivity rate of 98.26% for detecting attacks, and a multi-category classification accuracy and sensitivity that reached up to 99.73% and 97.24% for certain attack types. These results substantially surpass those of existing models on the same dataset, affirming our model's superiority in dealing with complex data distributions prevalent in network security domains.
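
A hedged sketch of the ensemble-plus-threshold idea follows, using imbalanced-learn, XGBoost, and scikit-learn: average the positive-class probabilities of Balanced Bagging, XGBoost, and Random Forest, then classify with a tuned discrimination threshold instead of the default 0.5. The Hellinger-split Random Forest used in the paper is not available in scikit-learn, so a standard RF stands in, and the simple F1-based threshold search is a placeholder for the paper's threshold-adjustment algorithms.

```python
# Hedged sketch: soft-vote BB + XGBoost + RF, then tune the discrimination threshold.
# Synthetic imbalanced data stands in for UNSW-NB15; the threshold search is a placeholder.
import numpy as np
from imblearn.ensemble import BalancedBaggingClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=4000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = [BalancedBaggingClassifier(random_state=0),
          XGBClassifier(n_estimators=200, eval_metric="logloss"),
          RandomForestClassifier(n_estimators=200, random_state=0)]   # stand-in for Hellinger RF
probas = np.mean([m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1] for m in models], axis=0)

# Adjust the discrimination threshold to maximize F1 on the evaluation split
# (a simple placeholder; a separate validation split would be used in practice).
thresholds = np.linspace(0.1, 0.9, 81)
best = max(thresholds, key=lambda t: f1_score(y_te, probas >= t))
print(f"best threshold={best:.2f}, F1={f1_score(y_te, probas >= best):.3f}")
```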
