
Concurrency and Computation-Practice & Experience: Latest Publications

Proactive self-healing techniques for cloud computing: A systematic review
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-19 | DOI: 10.1002/cpe.8246
Seyed Reza Rouholamini, Meghdad Mirabi, Razieh Farazkish, Amir Sahafi

Ensuring the seamless operation of cloud computing services is paramount for meeting user demands and maintaining business continuity. Fault-tolerant self-healing techniques play a crucial role in enhancing the reliability and availability of cloud platforms, minimizing downtime, and ensuring uninterrupted service delivery. This article systematically categorizes and analyzes existing research on fault-tolerant self-healing techniques published between 2005 and 2024. We provide a comprehensive technical taxonomy that organizes self-healing techniques by fault-tolerance process, encompassing considerations for both reliability and availability. Additionally, we evaluate applications of proactive self-healing techniques, highlighting their achievements and limitations in enhancing service continuity. Strategies to address identified weaknesses are discussed, alongside future research challenges and open issues in the domain of cloud resilience. Through this analysis, the article contributes to the understanding of self-healing techniques in cloud computing, offering insights into their effectiveness in ensuring service continuity. The findings aim to guide future research efforts in developing more robust and resilient cloud infrastructures, ultimately enhancing overall service reliability and availability. By emphasizing the importance of fault tolerance and self-healing techniques, this article lays the foundation for advancing the state of the art in cloud computing.
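The proactive flavor of self-healing surveyed here boils down to a monitor-predict-act loop: estimate each node's failure risk before it fails, then migrate work away preemptively. The stdlib Python sketch below is a toy illustration of that loop, not any system from the review; the risk model, its weights, and the `predict_failure_risk`/`proactive_heal` names are all assumptions for the example.

```python
def predict_failure_risk(metrics):
    # Hypothetical risk model: a weighted sum of CPU load and error rate,
    # standing in for whatever failure predictor a real system would train.
    return 0.6 * metrics["cpu_load"] + 0.4 * metrics["error_rate"]

def proactive_heal(nodes, threshold=0.7):
    """Plan migrations off nodes whose predicted risk exceeds the threshold,
    picking the least-loaded healthy node as the target for each."""
    healthy = [n for n in nodes if predict_failure_risk(n["metrics"]) < threshold]
    actions = []
    for node in nodes:
        if predict_failure_risk(node["metrics"]) >= threshold and healthy:
            target = min(healthy, key=lambda n: n["metrics"]["cpu_load"])
            actions.append((node["name"], target["name"]))
    return actions
```

A real controller would run this periodically and carry out the returned (source, target) migrations before the predicted failure materializes.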

Citations: 0
Intelligent botnet detection in IoT networks using parallel CNN-LSTM fusion
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-15 | DOI: 10.1002/cpe.8258
Rongrong Jiang, Zhengqiu Weng, Lili Shi, Erxuan Weng, Hongmei Li, Weiqiang Wang, Tiantian Zhu, Wuzhao Li

With the development of the Internet of Things (IoT), the number of terminal devices is growing rapidly, and at the same time their security faces serious challenges. Industrial control systems in particular struggle to detect and prevent botnets. Traditional detection methods focus on first capturing and reverse-analyzing botnet programs and then parsing features extracted from the malicious code or attacks. However, their accuracy is very low and their latency relatively high; they sometimes cannot even recognize unknown botnets. Machine-learning-based detection methods rely on manual feature engineering and generalize poorly. Deep-learning-based methods mostly rely on the system log and do not take into account multisource information such as traffic. To address these issues, this paper starts from the botnet's own features and proposes an intelligent detection method built on a parallel CNN-LSTM, integrating spatial and temporal features to identify botnets. Experiments demonstrate that the accuracy, recall, and F1-score of the proposed method all exceed 98%, and its precision of 97.8%, while not the highest, is reasonable. Compared with existing state-of-the-art methods, the proposed method thus outperforms in botnet detection. The methodology's strength lies in its ability to harness the multifaceted information present in IoT traffic, offering a more nuanced and comprehensive analysis. The parallel CNN-LSTM architecture ensures that spatial and temporal data are processed concurrently, preserving the integrity of the information and enabling a more robust detection mechanism. The result is a detection system that not only performs exceptionally well in a controlled environment but also holds promise for real-world application, where rapid and accurate identification of botnets is paramount.
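The parallel fusion idea can be sketched without a deep-learning framework: one branch summarizes a flow's spatial (byte-level) structure, another its temporal (inter-arrival) behavior, the two run concurrently, and their feature vectors are concatenated before a classifier head. The stdlib sketch below mirrors only that structure; the hand-written features, the linear head, and its weights are placeholders for the trained CNN and LSTM branches, not the authors' network.

```python
from concurrent.futures import ThreadPoolExecutor

def spatial_features(packet_bytes):
    # Stand-in for the CNN branch: a coarse byte-value histogram of one flow.
    hist = [0] * 4
    for b in packet_bytes:
        hist[b % 4] += 1
    total = len(packet_bytes) or 1
    return [h / total for h in hist]

def temporal_features(inter_arrival_times):
    # Stand-in for the LSTM branch: simple sequence statistics.
    n = len(inter_arrival_times) or 1
    mean = sum(inter_arrival_times) / n
    burstiness = sum(1 for t in inter_arrival_times if t < mean) / n
    return [mean, burstiness]

def fused_score(flow):
    # Run both branches in parallel, then fuse by concatenation.
    with ThreadPoolExecutor(max_workers=2) as pool:
        s = pool.submit(spatial_features, flow["bytes"])
        t = pool.submit(temporal_features, flow["times"])
        features = s.result() + t.result()
    # Hypothetical linear head standing in for the trained classifier.
    weights = [0.2, 0.1, 0.4, 0.3, 0.5, 0.9]
    return sum(w * f for w, f in zip(weights, features))
```

In the paper's setting, the two `submit` calls would dispatch the CNN and LSTM forward passes, and the fused vector would feed a softmax layer rather than a fixed dot product.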

Citations: 0
An improved federated transfer learning model for intrusion detection in edge computing empowered wireless sensor networks
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-15 | DOI: 10.1002/cpe.8259
L. Raja, G. Sakthi, S. Vimalnath, Gnanasaravanan Subramaniam

Intrusion Detection (ID) is a critical component in cybersecurity, tasked with identifying and thwarting unauthorized access or malicious activities within networked systems. The advent of Edge Computing (EC) has introduced a paradigm shift, empowering Wireless Sensor Networks (WSNs) with decentralized processing capabilities. However, this transition presents new challenges for ID due to the dynamic and resource-constrained nature of Edge environments. In response to these challenges, this study presents a pioneering approach: an Improved Federated Transfer Learning Model. This model integrates a pre-trained ResNet-18 for transfer learning with a meticulously designed Convolutional Neural Network (CNN), tailored to the intricacies of the NSL-KDD dataset. The collaborative synergy of these models culminates in an Intrusion Detection System (IDS) with an impressive accuracy of 96.54%. Implemented in Python, the proposed model not only demonstrates its technical prowess but also underscores its practical applicability in fortifying EC-empowered WSNs against evolving security threats. This research contributes to the ongoing discourse on enhancing cybersecurity measures within emerging computing paradigms.
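A federated setup like the one described typically keeps the pre-trained backbone (here, ResNet-18) frozen on each client, trains the task-specific head locally, and has the server aggregate the head parameters with weighted federated averaging. A minimal sketch of that aggregation step follows, assuming each client reports a flat parameter vector plus its local sample count; the function name and data layout are illustrative, not from the paper.

```python
def fed_avg(client_weights, client_sizes):
    """Weighted FedAvg: average per-client parameter vectors,
    weighting each client by its local training-set size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

Each communication round would broadcast the averaged vector back to the edge nodes, which resume local training from it.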

Citations: 0
Metaheuristic algorithms for capacitated controller placement in software defined networks considering failure resilience
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-13 | DOI: 10.1002/cpe.8254
Sagarika Mohanty, Bibhudatta Sahoo

Software-defined networking (SDN) has revolutionized network architectures by decoupling the control plane from the data plane. An intriguing challenge within this paradigm is the strategic placement of controllers and the allocation of switches to optimize network performance and resilience. In the event of a controller failure, the switches are disconnected from the controller until they are reassigned to other active controllers possessing sufficient spare capacity. The reassignment could lead to a significant rise in propagation latency. This correspondence presents a mathematical model for capacitated controller placement, strategically designed to anticipate failures and prevent a substantial increase in worst-case latency and disconnections. The aim is to minimize the worst-case latency between switches and their backup controllers and among the controllers. Four metaheuristic algorithms are proposed: an enhanced genetic algorithm (CCPCFR-EGA), particle swarm optimization (CCPCFR-PSO), a hybrid particle swarm optimization and simulated annealing algorithm (CCPCFR-HPSOSA), and a grey wolf optimization algorithm (CCPCFR-GWO). These algorithms are compared with a simulated annealing method and an optimal method. Evaluation on four network datasets demonstrates that the proposed metaheuristics are faster than the optimal method. The experimental outcomes indicate that CCPCFR-HPSOSA and CCPCFR-GWO outperform the other methods, consistently providing near-optimal solutions. However, CCPCFR-GWO is preferred over CCPCFR-HPSOSA due to its faster execution time. Specifically, CCPCFR-GWO achieves an average speed-up over the optimal method of 3.9 for smaller networks and 31.78 for larger networks, while still producing near-optimal solutions.
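Of the four metaheuristics, grey wolf optimization is the one the authors end up preferring. As a hedged illustration of the GWO mechanics only, not the paper's capacitated placement objective, the sketch below moves a pack toward its three best wolves (alpha, beta, delta) while the exploration coefficient decays; a real controller-placement use would swap in the worst-case-latency fitness and a discrete encoding of placements.

```python
import random

def gwo_minimize(objective, dim, bounds, wolves=8, iters=60, seed=1):
    """Core grey wolf optimizer over a continuous box-bounded search space."""
    rng = random.Random(seed)
    lo, hi = bounds
    pack = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for it in range(iters):
        pack.sort(key=objective)
        alpha, beta, delta = pack[0], pack[1], pack[2]
        a = 2.0 * (1.0 - it / iters)  # exploration coefficient, decays toward 0
        new_pack = []
        for w in pack:
            pos = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = 2.0 * a * r1 - a
                    C = 2.0 * r2
                    # Encircling step: move toward this leader with noise.
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                pos.append(min(hi, max(lo, x / 3.0)))  # average the three pulls, clamp
            new_pack.append(pos)
        pack = new_pack
    return min(pack, key=objective)
```

On a simple sphere objective the pack contracts around the optimum as `a` shrinks; for placement, the same loop would rank candidate controller assignments instead of points.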

Citations: 0
Adam-Ladybug Beetle Optimization enabled multi-objective service placement strategy in fog computing
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-13 | DOI: 10.1002/cpe.8239
Oshin Sharma, Deepak Sharma

The Internet of Things (IoT) has transformed every aspect of our lives and has become universal in multiple fields, from personal to government and military applications. However, IoT suffers from the inherent limitations of latency and high computational cost, which can be effectively overcome by using a fog computing framework. The key challenge in fog computing, though, is to address the problem of service placement among the nodes, thereby providing optimal utilization of resources and minimizing service time. This research work presents a novel service placement technique that treats the service placement issue as a multi-objective optimization problem. Here, a two-level fog computing network comprising a fog master node and fog cells is considered. The master node is responsible for the service placement of the fog nodes, and the placement is carried out using the Adam-Ladybug Beetle Optimization (ALBO) algorithm. Further, multiple objectives, such as resource utilization, makespan, response time, service time, cost, and energy consumption, are considered to enhance service placement. Moreover, the efficiency of ALBO for service placement (ALBO_SP) is examined in terms of service cost, energy consumption, and service time, attaining values of 19.009, 73.581 J, and 4.854 s, respectively.
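One common way to handle several competing objectives like those listed above is to scalarize them into a single fitness that a metaheuristic such as ALBO can minimize. The weighted-sum sketch below is an assumption for illustration, not the paper's actual fitness function; the weights, metric names, and `best_node` helper are placeholders.

```python
def placement_fitness(metrics, weights=None):
    """Scalarize the multi-objective placement problem as a weighted sum.
    Lower is better; utilization is subtracted because higher utilization
    of a candidate node is desirable."""
    weights = weights or {"makespan": 0.25, "cost": 0.25,
                          "energy": 0.25, "utilization": 0.25}
    return (weights["makespan"] * metrics["makespan"]
            + weights["cost"] * metrics["cost"]
            + weights["energy"] * metrics["energy"]
            - weights["utilization"] * metrics["utilization"])

def best_node(candidates):
    """Pick the candidate fog node with the lowest scalarized fitness."""
    return min(candidates, key=placement_fitness)
```

A search algorithm would evaluate `placement_fitness` for each candidate assignment per iteration; Pareto-based formulations are the usual alternative when a single weight vector is too restrictive.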

Citations: 0
A smart surveillance system utilizing modified federated machine learning: Gossip-verifiable and quantum-safe approach
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-13 | DOI: 10.1002/cpe.8238
Dharmaraj Dharani, Kumarasamy Anitha Kumari

Edge computing has the capability to process data closer to its point of origin, leading to the development of critical autonomous infrastructures with frequently communicating peers. The proposed work aims to evaluate the effectiveness of security and privacy mechanisms tailored for distributed systems, focusing particularly on scenarios where the nodes are closed-circuit television (CCTV) systems. Object tracking in surveillance systems is a vital responsibility for ensuring public safety. The workflow has been specifically crafted and simulated for weapon detection within public CCTV systems, utilizing sample edge devices. The system's primary objective is to detect any unauthorized use of weapons in public spaces while concurrently ensuring the integrity of video footage for use in criminal investigations. The outcomes of prior research on distributed machine learning (DML) techniques are compared with a modified federated machine learning (FML) technique specifically designed to be Gossip-verifiable and quantum-safe. The conventional federated averaging algorithm is modified by incorporating the secret-sharing principle, coupled with the code-based McEliece cryptosystem; this adaptation fortifies the system against quantum threats. The Gossip data-dissemination protocol, executed via a custom blockchain atop the distributed network, serves to authenticate and validate the learning model propagated among the peers in the network, providing an additional layer of integrity. Potential threats to the proposed model are analyzed, and the efficiency of the work is assessed using formal proofs. The outcomes demonstrate that trustworthiness and consistency are meticulously preserved for both the model and the data within the DML framework on the Edge computing platform.
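The secret-sharing principle folded into federated averaging can be illustrated with plain additive secret sharing: each client splits its (integer-encoded) update into random shares so that no single aggregator ever sees a raw value, yet the shares still sum to the true total. This is a generic textbook sketch, not the paper's McEliece-hardened protocol; in the actual system the shares would additionally travel under post-quantum encryption.

```python
import random

PRIME = 2**61 - 1  # modulus for share arithmetic

def share(secret, n, rng):
    """Split an integer secret into n additive shares modulo PRIME."""
    shares = [rng.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

def secure_sum(client_secrets, rng=None):
    """Each client shares its value across the others; each aggregator j
    sums only the j-th shares, so raw values are never exposed."""
    rng = rng or random.Random(0)
    n = len(client_secrets)
    all_shares = [share(s, n, rng) for s in client_secrets]
    partial = [sum(cs[j] for cs in all_shares) % PRIME for j in range(n)]
    return reconstruct(partial)
```

For federated averaging, `secure_sum` would run per (quantized) model coordinate, after which the server divides by the client count.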

Citations: 0
M-DFCPP: A runtime library for multi-machine dataflow computing
IF 1.5 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-07 | DOI: 10.1002/cpe.8248
Qiuming Luo, Senhong Liu, Jinke Huang, Jinrong Li

This article designs and implements DFCPP, a runtime library for general dataflow programming (Luo Q, Huang J, Li J, Du Z. Proceedings of the 52nd International Conference on Parallel Processing Workshops. ACM; 2023:145-152.), and builds upon it to design and implement a multi-machine C++ dataflow library, M-DFCPP. In comparison to existing dataflow programming environments, DFCPP features a user-friendly interface and richer expressive capabilities, enabling the representation of various types of dataflow actor tasks (static, dynamic, and conditional). Besides that, DFCPP addresses memory management and task scheduling for non-uniform memory access architectures, issues that other dataflow libraries neglect. M-DFCPP extends the capabilities of current dataflow runtime libraries (DFCPP, taskflow, openstream, etc.) to multi-machine computing while keeping its API compatible with DFCPP. M-DFCPP adopts the concepts of master and follower (Dean J, Ghemawat S. Commun ACM. 2008;51(1):107-113; Ghemawat S, Gobioff H, Leung ST. ACM SIGOPS Operating Systems Review. ACM; 2003:29-43.), which together form a work-sharing framework, as in many multi-machine systems. To shift to the M-DFCPP framework, a filtering layer is inserted into the original DFCPP, transforming it into followers that can cooperate with each other. The master comprises modules for scheduling, data processing, graph partitioning, state management, and so forth. In benchmark tests on workloads with directed-acyclic-graph topologies of binary trees and random graphs, DFCPP demonstrated performance improvements of 20% and 8%, respectively, over the second-fastest library. M-DFCPP consistently exhibits outstanding performance across varying levels of concurrency and task workloads, achieving a maximum speedup of more than 20 over DFCPP when task parallelism exceeds 5000 on 32 nodes. Moreover, M-DFCPP, as a runtime library supporting multi-node dataflow computation, is compared with MPI, a runtime library supporting multi-node control-flow computation.
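The actor-task model that such dataflow runtimes expose boils down to one firing rule: a task runs as soon as all of its upstream results are available. The sketch below is a generic single-process illustration of that rule over a DAG, not DFCPP's C++ API; the task names and the `run_dataflow` helper are invented for the example.

```python
from collections import deque

def run_dataflow(tasks, deps):
    """Execute a DAG of dataflow tasks: `tasks` maps name -> callable,
    `deps` maps name -> list of upstream task names whose results it consumes.
    A task fires once all of its inputs are ready (Kahn-style scheduling)."""
    indegree = {t: 0 for t in tasks}
    children = {t: [] for t in tasks}
    for downstream, upstreams in deps.items():
        indegree[downstream] = len(upstreams)
        for u in upstreams:
            children[u].append(downstream)
    ready = deque(t for t, d in indegree.items() if d == 0)
    results, order = {}, []
    while ready:
        t = ready.popleft()
        inputs = [results[u] for u in deps.get(t, [])]
        results[t] = tasks[t](*inputs)   # fire the actor with its inputs
        order.append(t)
        for c in children[t]:            # release downstream tasks
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    return results, order
```

A multi-machine runtime layers scheduling on top of this rule: the master partitions the graph and assigns ready tasks to followers instead of running them in one loop.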

摘要本文设计并实现了通用数据流编程的运行时库 DFCPP(Luo Q, Huang J, Li J, Du Z. 第 52 届并行处理国际研讨会论文集。ACM;2023:145-152),并在此基础上设计和实现了多机 C++ 数据流库 M-DFCPP。与现有的数据流编程环境相比,DFCPP 具有友好的用户界面和更丰富的表达能力(Luo Q, Huang J, Li J, Du Z. Proceedings of the 52nd International Conference on Parallel Processing Workshops.ACM; 2023:145-152.),能够表示各种类型的数据流行为任务(静态、动态和条件任务)。除此之外,DFCPP 还解决了非统一内存访问架构下的内存管理和任务调度问题,而其他数据流库则对这些问题缺乏关注。M-DFCPP 扩展了当前数据流运行库(DFCPP、taskflow、openstream 等)的功能,能够支持多机计算,同时保留了与 DFCPP 兼容的 API。M-DFCPP 采用主从概念(Dean J, Ghemawat S. Commun ACM.2008; 51(1):107-113; Ghemawat S, Gobioff H, Leung ST.ACM SIGOPS 操作系统评论》。ACM;2003:29-43。),形成了一个多机系统的工作共享框架。为了转向 M-DFCPP 框架,在原有的 DFCPP 中插入了一个过滤层,将其转化为可以相互合作的跟随者。主控层由调度、数据处理、图分割、状态管理等模块组成。在二叉树有向无环图拓扑和随机图的基准测试中,DFCPP 的性能分别比第二快的库提高了 20% 和 8%。M-DFCPP 在不同并发水平和任务工作量下始终表现出卓越的性能,当 32 个节点上的任务并行度超过 5000 时,M-DFCPP 比 DFCPP 的最大速度提高了 20 多倍。此外,作为支持多节点数据流计算的运行库,M-DFCPP 还与支持多节点控制流计算的运行库 MPI 进行了比较。
M-DFCPP: A runtime library for multi-machine dataflow computing. Qiuming Luo, Senhong Liu, Jinke Huang, Jinrong Li. Concurrency and Computation: Practice & Experience. 2024;36(24). DOI: 10.1002/cpe.8248.
Citations: 0
Ab-HIDS: An anomaly-based host intrusion detection system using frequency of N-gram system call features and ensemble learning for containerized environment
IF 1.5 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2024-08-06 DOI: 10.1002/cpe.8249
Nidhi Joraviya, Bhavesh N. Gohil, Udai Pratap Rao

Cloud's operating-system-level virtualization has introduced a new phase of lightweight virtualization through containers. The architecture of cloud-native and microservices-based application development strongly advocates the use of containers due to their swift and convenient deployment capabilities. However, the security of applications within containers is critical, as malicious or vulnerable content could jeopardize both the container and the host system. This vulnerability also extends to neighboring containers and may compromise data integrity and confidentiality. This article focuses on developing an intrusion detection system tailored to containerized cloud environments by identifying system-call analysis techniques, and proposes an anomaly-based host intrusion detection system (Ab-HIDS). The system employs the frequencies of N-gram system calls as distinctive features. To enhance performance, two ensemble learning models, voting-based ensemble learning and XGBoost ensemble learning, are employed for training and testing. The proposed system is evaluated on the Leipzig Intrusion Detection Data Set (LID-DS), demonstrating substantial performance gains over existing state-of-the-art methods. Ab-HIDS is validated for class imbalance using the imbalance ratio and the synthetic minority over-sampling technique (SMOTE). Our system achieved significant improvements in detection accuracy, with a 4% increase for the voting-based ensemble model and a 6% increase for the XGBoost ensemble model. Additionally, we observed reductions in the false positive rate of 0.9% and 0.8% for these models, respectively, compared to existing state-of-the-art methods. These results illustrate the potential of our proposed approach for improving security measures within containerized environments.
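The N-gram feature construction the abstract describes can be sketched as follows (a simplified illustration, not the authors' code): slide a window of length N over a system-call trace and count how often each N-gram occurs, yielding a frequency vector a classifier can consume.

```python
from collections import Counter

def ngram_frequencies(trace, n=2):
    """Count sliding-window N-grams over a system-call trace.

    Returns a Counter mapping each N-gram (as a tuple of call
    names) to the number of times it occurs in the trace.
    """
    return Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))

# A toy trace of five system calls yields four overlapping bigrams.
trace = ["open", "read", "read", "write", "close"]
freq = ngram_frequencies(trace, n=2)
print(freq[("read", "read")])  # the bigram (read, read) occurs once
```

In a full pipeline, each process's Counter would be mapped onto a fixed vocabulary of N-grams to produce equal-length feature vectors for the ensemble models.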

Ab-HIDS: An anomaly-based host intrusion detection system using frequency of N-gram system call features and ensemble learning for containerized environment. Nidhi Joraviya, Bhavesh N. Gohil, Udai Pratap Rao. Concurrency and Computation: Practice & Experience. 2024;36(23). DOI: 10.1002/cpe.8249.
Citations: 0
Runtime performance of a GAMESS quantum chemistry application offloaded to GPUs
IF 1.5 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2024-08-06 DOI: 10.1002/cpe.8244
Masha Sosonkina, Gabriel Mateescu, Peng Xu, Tosaporn Sattasathuchana, Buu Pham, Mark S. Gordon, Sarom S. Leang

Computational chemistry is at the forefront of solving urgent societal problems, such as polymer upcycling and carbon capture. The complexity of modeling these processes at appropriate length and time scales is mainly manifested in the number and types of chemical species involved in the reactions and may require models of several thousand atoms and large basis sets to accurately capture the chemical complexity and heterogeneity in the physical and chemical processes. The quantum chemistry package General Atomic and Molecular Electronic Structure System (GAMESS) has a wide array of methods that can efficiently and accurately treat complex chemical systems. In this work, we have used the GAMESS Effective Fragment Molecule Orbital (EFMO) method for electronic structure calculation of a challenging mesoporous silica nanoparticle (MSN) model surrounded by about 4700 water molecules to investigate the strong scaling and GPU offloading on hybrid CPU-GPU nodes. Experiments were performed on the Perlmutter platform at the National Energy Research Scientific Computing Center. Good strong scaling and load balancing have been observed on up to 88 hybrid nodes for different settings of the execution parameters for the calculation considered here. When GPUs are oversubscribed by offloading work from multiple CPU processes, using the NVIDIA multi-process service (MPS) has consistently reduced time to solution and energy consumed. Additionally, for some configuration parameter settings, oversubscription with MPS improved performance by up to 5.8% over the case without oversubscription.
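Strong scaling of the kind reported above is conventionally quantified as speedup S(p) = T(base)/T(p) and parallel efficiency E(p) = S(p)/(p/base) for a fixed problem size. A quick sketch with made-up timings (these numbers are illustrative only, not the paper's measurements):

```python
def strong_scaling(times):
    """Given {node_count: wall time} for a fixed problem size,
    return {node_count: (speedup, efficiency)} relative to the
    smallest node count measured."""
    base_p = min(times)
    base_t = times[base_p]
    return {p: (base_t / t, (base_t / t) / (p / base_p))
            for p, t in sorted(times.items())}

# Hypothetical timings for illustration only.
metrics = strong_scaling({1: 1000.0, 2: 520.0, 4: 270.0, 8: 150.0})
for p, (s, e) in metrics.items():
    print(f"{p} nodes: speedup {s:.2f}, efficiency {e:.2f}")
```

Efficiency near 1.0 across node counts is what "good strong scaling" means in the abstract; load imbalance or communication overhead shows up as efficiency dropping well below 1.0 at higher node counts.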

Runtime performance of a GAMESS quantum chemistry application offloaded to GPUs. Masha Sosonkina, Gabriel Mateescu, Peng Xu, Tosaporn Sattasathuchana, Buu Pham, Mark S. Gordon, Sarom S. Leang. Concurrency and Computation: Practice & Experience. 2024;36(23). DOI: 10.1002/cpe.8244. Open access.
Citations: 0
A study of online academic risk prediction based on neural network multivariate time series features
IF 1.5 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2024-08-06 DOI: 10.1002/cpe.8251
Yang Wu, Mengping Yu, Huan Huang, Rui Hou

Neural networks are seeing increasingly wide use across many fields, especially for academic risk prediction. Academic risk prediction is a hot topic in the field of big data in education that aims to identify and help students who experience great academic difficulties. In recent years, the use of machine learning and deep learning algorithms for academic risk prediction has garnered increasing attention and development. However, most of these studies use non-time-series data as predictive features, which are somewhat lacking in timeliness. Therefore, this article focuses on time series features, which are more expressive of changes in students' learning status, and uses multivariate time series data as predictive features. This article proposes a method based on multivariate time series features and a neural network to predict academic risk. The method comprises three steps: first, multivariate time series features are extracted from the interaction records of students' online learning platforms; second, the multivariate time series transformation model ROCKET converts these features into new features; third, the new features are converted into a final prediction result by a linear classification model. Comparative tests show that the proposed method is highly effective.
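ROCKET's core idea — convolve each series with many random kernels and pool each convolution into summary features such as the maximum and the proportion of positive values (PPV), then feed those features to a linear classifier — can be sketched as below. This is a simplification of the actual ROCKET algorithm, which also randomizes kernel length, dilation, padding, and bias:

```python
import numpy as np

def rocket_like_features(series, num_kernels=100, kernel_len=9, seed=0):
    """Transform a 1-D time series into 2*num_kernels features:
    for each random kernel, the max and the proportion of positive
    values (PPV) of the convolution output."""
    rng = np.random.default_rng(seed)
    feats = []
    for _ in range(num_kernels):
        kernel = rng.normal(size=kernel_len)          # random weights
        conv = np.convolve(series, kernel, mode="valid")
        feats.append(conv.max())                      # max pooling
        feats.append((conv > 0).mean())               # PPV pooling
    return np.array(feats)

# One synthetic series of length 200 -> a fixed-length feature vector.
x = np.sin(np.linspace(0, 10, 200))
f = rocket_like_features(x)
print(f.shape)  # (200,)
```

Because the transform produces a fixed-length vector regardless of series length, a plain linear classifier can be trained on the resulting features, which is exactly the third step the abstract describes.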

A study of online academic risk prediction based on neural network multivariate time series features. Yang Wu, Mengping Yu, Huan Huang, Rui Hou. Concurrency and Computation: Practice & Experience. 2024;36(23). DOI: 10.1002/cpe.8251.
Citations: 0