
Journal of Parallel and Distributed Computing: Latest publications

Front Matter 1 - Full Title Page (regular issues)/Special Issue Title page (special issues)
IF 3.4 · CAS Zone 3 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-11-28 · DOI: 10.1016/S0743-7315(24)00182-5
{"title":"Front Matter 1 - Full Title Page (regular issues)/Special Issue Title page (special issues)","authors":"","doi":"10.1016/S0743-7315(24)00182-5","DOIUrl":"10.1016/S0743-7315(24)00182-5","url":null,"abstract":"","PeriodicalId":54775,"journal":{"name":"Journal of Parallel and Distributed Computing","volume":"196 ","pages":"Article 105018"},"PeriodicalIF":3.4,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142745923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Content delivery network solutions for the CMS experiment: The evolution towards HL-LHC
IF 3.4 · CAS Zone 3 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-11-22 · DOI: 10.1016/j.jpdc.2024.105014
Carlos Perez Dengra, Josep Flix, Anna Sikora, the CMS Collaboration
The Large Hadron Collider at CERN in Geneva is poised for a transformative upgrade, preparing to enhance both its accelerator and particle detectors. This strategic initiative is driven by the tenfold increase in proton-proton collisions anticipated for the forthcoming high-luminosity phase scheduled to start by 2029. The vital role played by the underlying computational infrastructure, the World-Wide LHC Computing Grid, in processing the data generated during these collisions underlines the need for its expansion and adaptation to meet the demands of the new accelerator phase. The provision of these computational resources by the worldwide community remains essential, all within a constant budgetary framework. While technological advancements offer some relief for the expected increase, numerous research and development projects are underway. Their aim is to bring future resources to manageable levels and provide cost-effective solutions to effectively handle the expanding volume of generated data. In the quest for optimized data access and resource utilization, the LHC community is actively investigating Content Delivery Network (CDN) techniques. These techniques serve as a mechanism for the cost-effective deployment of lightweight storage systems that support both traditional and opportunistic compute resources. Furthermore, they aim to enhance the performance of executing tasks by facilitating the efficient reading of input data via caching content near the end user. A comprehensive study is presented to assess the benefits of implementing data cache solutions for the Compact Muon Solenoid (CMS) experiment. This in-depth examination serves as a use-case study specifically conducted for the Spanish compute facilities, which play a crucial role in supporting CMS activities. Data access patterns and popularity studies suggest that user analysis tasks benefit the most from CDN techniques. Consequently, a data cache has been introduced in the region to acquire a deeper understanding of these effects. In this paper, the details of the implementation of a data cache system in the PIC Tier-1 compute facility are presented. It includes insights into the developed monitoring tools and discusses the positive impact on CPU usage for analysis tasks executed in the region. The study is augmented by simulations of data caches, with the objective of discerning the optimal requirements in both size and network connectivity for a data cache serving the Spanish region. Additionally, the study delves into the cost benefits associated with deploying such a solution in a production environment. Furthermore, it investigates the potential impact of incorporating this solution into other regions of the CMS computing infrastructure.
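The cache-sizing part of such a study can be illustrated with a very small simulation. The sketch below is an editorial illustration only: it assumes a plain LRU policy, synthetic file sizes, and a synthetic skewed access trace, whereas the actual CMS data-cache deployment and the paper's simulations are far more detailed.

```python
import random
from collections import OrderedDict

def lru_hit_rate(accesses, file_sizes, cache_size_tb):
    """Replay an access trace against an LRU cache of a given capacity (in TB).

    accesses   : iterable of file names in the order they are read
    file_sizes : dict mapping file name -> size in TB
    Returns the fraction of accesses served from the cache.
    """
    cache = OrderedDict()              # file -> size, most recently used last
    used, hits = 0.0, 0
    for f in accesses:
        size = file_sizes[f]
        if f in cache:
            hits += 1
            cache.move_to_end(f)       # refresh recency
        else:
            while used + size > cache_size_tb and cache:
                _, evicted_size = cache.popitem(last=False)   # evict LRU file
                used -= evicted_size
            cache[f] = size
            used += size
    return hits / len(accesses)

# Hypothetical toy trace: 5000 files of 2 GB each, with popularity skewed
# toward a small set of "hot" analysis files.
rng = random.Random(0)
sizes = {f"file{i}": 0.002 for i in range(5000)}
trace = [f"file{int(5000 * rng.random() ** 3)}" for _ in range(20000)]
for cap in (0.5, 1.0, 2.0, 4.0):
    print(f"cache {cap:.1f} TB -> hit rate {lru_hit_rate(trace, sizes, cap):.2%}")
```

Sweeping the capacity in this way mirrors, in miniature, the idea of searching for the smallest regional cache that still achieves a high hit rate.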
{"title":"Content delivery network solutions for the CMS experiment: The evolution towards HL-LHC","authors":"Carlos Perez Dengra ,&nbsp;Josep Flix ,&nbsp;Anna Sikora ,&nbsp;the CMS Collaboration","doi":"10.1016/j.jpdc.2024.105014","DOIUrl":"10.1016/j.jpdc.2024.105014","url":null,"abstract":"<div><div>The Large Hadron Collider at CERN in Geneva is poised for a transformative upgrade, preparing to enhance both its accelerator and particle detectors. This strategic initiative is driven by the tenfold increase in proton-proton collisions anticipated for the forthcoming high-luminosity phase scheduled to start by 2029. The vital role played by the underlying computational infrastructure, the World-Wide LHC Computing Grid, in processing the data generated during these collisions underlines the need for its expansion and adaptation to meet the demands of the new accelerator phase. The provision of these computational resources by the worldwide community remains essential, all within a constant budgetary framework. While technological advancements offer some relief for the expected increase, numerous research and development projects are underway. Their aim is to bring future resources to manageable levels and provide cost-effective solutions to effectively handle the expanding volume of generated data. In the quest for optimized data access and resource utilization, the LHC community is actively investigating Content Delivery Network (CDN) techniques. These techniques serve as a mechanism for the cost-effective deployment of lightweight storage systems that support both, traditional and opportunistic compute resources. Furthermore, they aim to enhance the performance of executing tasks by facilitating the efficient reading of input data via caching content near the end user. A comprehensive study is presented to assess the benefits of implementing data cache solutions for the Compact Muon Solenoid (CMS) experiment. This in-depth examination serves as a use-case study specifically conducted for the Spanish compute facilities, playing a crucial role in supporting CMS activities. Data access patterns and popularity studies suggest that user analysis tasks benefit the most from CDN techniques. Consequently, a data cache has been introduced in the region to acquire a deeper understanding of these effects. In this paper, the details of the implementation of a data cache system in the PIC Tier-1 compute facility are presented. It includes insights into the developed monitoring tools and discusses the positive impact on CPU usage for analysis tasks executed in the region. The study is augmented by simulations of data caches, with the objective of discerning the most optimal requirements in both size and network connectivity for a data cache serving the Spanish region. Additionally, the study delves into the cost benefits associated with deploying such a solution in a production environment. 
Furthermore, it investigates the potential impact of incorporating this solution into other regions of the CMS computing infrastructure.</div></div>","PeriodicalId":54775,"journal":{"name":"Journal of Parallel and Distributed Computing","volume":"197 ","pages":"Article 105014"},"PeriodicalIF":3.4,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142722613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A large-scale study of the impact of node behavior on loosely coupled data dissemination: The case of the distributed Arctic observatory
IF 3.4 · CAS Zone 3 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-11-22 · DOI: 10.1016/j.jpdc.2024.105013
Loïc Guégan, Issam Raïs, Otto Anshus
A Cyber-Physical System (CPS) deployed in remote and resource-constrained environments faces multiple challenges. It has no or limited network coverage, possibility of energy replenishment, and physical access by humans.
Cyber-physical nodes deployed to observe and interact with the Arctic tundra face these challenges. They are subject to environmental factors such as avalanches, low temperatures, snow, ice, water, and wild animals. With no energy supply infrastructure and no humans available, nodes must achieve a long operational lifetime from a single battery charge. They must be extremely energy-efficient. To reduce energy costs and increase their energy efficiency, cyber-physical nodes sleep most of the time and avoid communicating when they are unreachable.
However, a CPS needs to disseminate data between the nodes for multiple purposes, including reporting data to a back-end service, resilient operations, safe-keeping of observational data, and propagating node updates. Loosely coupled data dissemination policies offer this possibility [1], although their applicability to large-scale CPS still needs to be investigated.
In this paper, we evaluate and discuss the energy, time, and delivery efficiency of four data dissemination policies proposed in [1]. This evaluation is based on flow-level simulations. We study small and large-scale CPS, and evaluate the effects of the number of nodes and the size of the disseminated data on the nodes' energy consumption and the dissemination's delivery success. To mitigate the negative effects that arise with large-scale CPS and large disseminated data sizes, different strategies are proposed and evaluated. We show that energy-saving strategies do not always imply energy efficiency, and better data dissemination often comes at a cost. This last result highlights the importance of simulation prior to real CPS deployments in constrained environments.
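As a rough, hypothetical illustration of the kind of trade-off a flow-level simulation captures, the sketch below models a duty-cycled sender that can hand data to a neighbor only when both are awake. The duty cycle, radio throughput, and energy figures are invented for the example and are not taken from the paper.

```python
import random

# Assumed duty cycle: each node wakes for WAKE seconds out of every PERIOD
# seconds; a transfer succeeds only if the receiver happens to be awake and
# the data fits inside one wake window at the assumed radio throughput.
PERIOD, WAKE = 3600.0, 120.0
BANDWIDTH_MBPS = 0.25                 # MB/s, assumed low-power radio
P_AWAKE_RX = WAKE / PERIOD            # chance the receiver is awake on a try
E_TX_PER_S, E_IDLE_PER_S = 0.9, 0.3   # assumed joules/second while tx / awake

def try_disseminate(data_mb, attempts, rng):
    """Return (delivered, energy_joules, elapsed_seconds) for one node pair."""
    tx_time = data_mb / BANDWIDTH_MBPS
    energy, elapsed = 0.0, 0.0
    for _ in range(attempts):
        elapsed += PERIOD                     # one wake-up per period
        energy += E_IDLE_PER_S * WAKE         # cost of listening while awake
        if rng.random() < P_AWAKE_RX and tx_time <= WAKE:
            energy += E_TX_PER_S * tx_time    # cost of the actual transfer
            return True, energy, elapsed
    return False, energy, elapsed

rng = random.Random(42)
results = [try_disseminate(data_mb=5.0, attempts=24, rng=rng) for _ in range(1000)]
delivered = sum(r[0] for r in results)
avg_energy = sum(r[1] for r in results) / len(results)
print(f"delivery success: {100 * delivered / len(results):.1f}%  "
      f"avg energy: {avg_energy:.1f} J")
```

Even this toy model shows the tension the abstract describes: sleeping more saves idle energy per period but lowers the chance of delivery within a deadline.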
{"title":"A large-scale study of the impact of node behavior on loosely coupled data dissemination: The case of the distributed Arctic observatory","authors":"Loïc Guégan,&nbsp;Issam Raïs,&nbsp;Otto Anshus","doi":"10.1016/j.jpdc.2024.105013","DOIUrl":"10.1016/j.jpdc.2024.105013","url":null,"abstract":"<div><div>A Cyber-Physical System (CPS) deployed in remote and resource-constrained environments faces multiple challenges. It has, no or limited: network coverage, possibility of energy replenishment, physical access by humans.</div><div>Cyber-physical nodes deployed to observe and interact with the Arctic tundra face these challenges. They are subject to environmental factors such as avalanches, low temperatures, snow, ice, water and wild animals. Without energy supply infrastructures and humans available, nodes must achieve long operational lifetime from a single battery charge. They must be extremely energy-efficient. To reduce energy costs and increase their energy efficiency, cyber-physical nodes sleep most of the time, and avoid to communicate when they are unreachable.</div><div>But, a CPS needs to disseminate data between the nodes for multiple purposes including data reporting to a back-end service, resilient operations, safe-keeping of observational data, and propagating nodes updates. Loosely-coupled data dissemination policies offer this possibility <span><span>[1]</span></span>. Although, investigations should be made on their applicability to large-scale CPS.</div><div>In this paper, we evaluate and discuss the efficiency in energy, time and number of successful delivery of four data dissemination policies proposed in <span><span>[1]</span></span>. This evaluation is based on flow-level simulations. We study small and large-scale CPS, and evaluate the effects of the number of nodes and the size of the disseminated data on the nodes energy consumption and the dissemination's delivery success. To mitigate negative effects raised on large-scale CPS and large disseminated data sizes, different strategies are proposed and evaluated. We show that energy saving strategies do not always imply energy efficiency, and better data dissemination often comes at a cost. This last result highlights the importance of simulation prior to real CPS deployments in constrained environments.</div></div>","PeriodicalId":54775,"journal":{"name":"Journal of Parallel and Distributed Computing","volume":"197 ","pages":"Article 105013"},"PeriodicalIF":3.4,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142748597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GPU tabu search: A study on using GPU to solve massive instances of the maximum diversity problem
IF 3.4 · CAS Zone 3 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-11-20 · DOI: 10.1016/j.jpdc.2024.105012
Bruno Nogueira, William Rosendo, Eduardo Tavares, Ermeson Andrade
The maximum diversity problem (MDP), a widely studied combinatorial optimization problem due to its broad applications, has seen numerous heuristic methods proposed. However, none of these approaches have addressed the challenges posed by massive instances. Currently, state-of-the-art heuristics are not well-suited for massive instances due to two primary reasons. Firstly, they rely on a matrix-based representation, which proves highly inefficient for the sparse instances commonly encountered in massive scenarios. Secondly, as the problem size increases, their local search operators experience a slowdown. This work introduces a GPU-based tabu search algorithm designed to tackle such massive instances. To address the limitations of the state-of-the-art heuristics, our GPU tabu search employs more efficient data structures for sparse instances and leverages GPU parallel capabilities to expedite the local search process. We tested our approach on established small and medium instances, ranging from 2,000 to 5,000 vertices, as well as massive instances with up to 45,000 vertices. In these tests, our approach was compared with a state-of-the-art algorithm. Experimental results demonstrate an up to 30x speedup for our approach and its effectiveness on massive instances.
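For readers unfamiliar with the underlying heuristic, the sketch below is a plain single-threaded tabu search for the MDP with incremental move evaluation. It is a generic baseline written for this listing; the paper's contribution, not reproduced here, is evaluating the swap moves in parallel on the GPU with data structures suited to sparse instances.

```python
import random

def tabu_search_mdp(dist, m, iters=2000, tenure=15, seed=0):
    """Single-threaded tabu search for the maximum diversity problem.

    dist : symmetric matrix (list of lists) of pairwise distances, zero diagonal
    m    : number of elements to select
    """
    n = len(dist)
    rng = random.Random(seed)
    sol = set(rng.sample(range(n), m))
    # contrib[v] = sum of distances from v to the current solution
    contrib = [sum(dist[v][u] for u in sol) for v in range(n)]
    value = sum(contrib[v] for v in sol) / 2.0
    best_value, best_sol = value, set(sol)
    tabu = {}                                   # element -> iteration until tabu

    for it in range(iters):
        best_move, best_delta = None, float("-inf")
        for i in sol:                           # element leaving the solution
            for j in range(n):                  # element entering the solution
                if j in sol:
                    continue
                delta = contrib[j] - contrib[i] - dist[i][j]
                is_tabu = tabu.get(i, -1) > it or tabu.get(j, -1) > it
                aspiration = value + delta > best_value
                if (not is_tabu or aspiration) and delta > best_delta:
                    best_move, best_delta = (i, j), delta
        if best_move is None:
            break
        i, j = best_move
        sol.remove(i); sol.add(j)
        value += best_delta
        for v in range(n):                      # incremental contribution update
            contrib[v] += dist[v][j] - dist[v][i]
        tabu[i] = tabu[j] = it + tenure
        if value > best_value:
            best_value, best_sol = value, set(sol)
    return best_sol, best_value

# Hypothetical toy instance with random distances.
rng = random.Random(1)
n = 60
D = [[0.0] * n for _ in range(n)]
for a in range(n):
    for b in range(a + 1, n):
        D[a][b] = D[b][a] = rng.random()
print(tabu_search_mdp(D, m=10)[1])
```

The inner double loop over all (i, j) swaps is exactly the part that a GPU version can evaluate in parallel, one thread (or warp) per candidate move.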
{"title":"GPU tabu search: A study on using GPU to solve massive instances of the maximum diversity problem","authors":"Bruno Nogueira ,&nbsp;William Rosendo ,&nbsp;Eduardo Tavares ,&nbsp;Ermeson Andrade","doi":"10.1016/j.jpdc.2024.105012","DOIUrl":"10.1016/j.jpdc.2024.105012","url":null,"abstract":"<div><div>The maximum diversity problem (MDP), a widely studied combinatorial optimization problem due to its broad applications, has seen numerous heuristic methods proposed. However, none of these approaches have addressed the challenges posed by massive instances. Currently, state-of-the-art heuristics are not well-suited for massive instances due to two primary reasons. Firstly, they rely on a matrix-based representation, which proves highly inefficient for sparse instances commonly encountered in massive scenarios. Secondly, as the problem size increases, their local search operators experience a slowdown. This work introduces a GPU-based tabu search algorithm designed to tackle such massive instances. To address the limitations of the state-of-the-art heuristics, our GPU tabu search employs more efficient data structures for sparse instances and leverages GPU parallel capabilities to expedite the local search process. We tested our approach on established small and medium instances, ranging from 2,000 to 5,000 vertices, as well as massive instances with up to 45,000 vertices. In these tests, our approach was compared with a state-of-the-art algorithm. Experimental results demonstrate an up to 30x speedup with of our proposal and its effectiveness on massive instances.</div></div>","PeriodicalId":54775,"journal":{"name":"Journal of Parallel and Distributed Computing","volume":"197 ","pages":"Article 105012"},"PeriodicalIF":3.4,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142748595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An efficient conference key agreement protocol suited for resource constrained devices
IF 3.4 · CAS Zone 3 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-11-14 · DOI: 10.1016/j.jpdc.2024.105011
Manmohan Pundir, Abhimanyu Kumar
Conference key agreement (CKA) is essential for securing communication in group-oriented scenarios like multi-party messaging and collaborative environments. While elliptic curve cryptography (ECC) offers efficiency and strong security, ECC-based CKA protocols often rely on expensive pairings, making them computationally impractical for deployment on resource-limited devices. This paper introduces a novel CKA approach using ECC without requiring pairing computations, thus addressing scalability and efficiency challenges. The proposed protocol employs scalar point multiplications over a prime-field elliptic curve group, enabling secure and efficient CKA operations with reduced computational overhead. Compared to existing ECC-based key agreement protocols, it minimizes user-level computation and performs better in terms of computational efficiency, communication overhead, and security strength. It is particularly suitable for resource-constrained environments like IoT and edge computing, where computational resources are limited yet secure group communication is crucial.
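The primitive such a pairing-free protocol builds on is scalar point multiplication on a prime-field elliptic curve. The sketch below shows that primitive on a standard textbook toy curve (y^2 = x^3 + 2x + 2 over F_17) together with a two-party shared point; it is not the paper's conference key agreement, the curve is deliberately insecure, and the private scalars are illustrative only.

```python
# Toy prime-field curve y^2 = x^3 + 2x + 2 over F_17 with base point G = (5, 1),
# a classic textbook example (NOT a secure curve).
P_MOD, A, B = 17, 2, 2
G = (5, 1)
O = None                                  # point at infinity / group identity

def point_add(p1, p2):
    """Add two points on the curve (affine coordinates)."""
    if p1 is O: return p2
    if p2 is O: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O                          # P + (-P) = O
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    y3 = (s * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def scalar_mult(k, point):
    """Double-and-add: compute k * point."""
    result, addend = O, point
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result

# Two-party Diffie-Hellman over the curve: the basic operation that a
# pairing-free group key agreement repeats across all conference members.
d_alice, d_bob = 7, 3                     # hypothetical private scalars
Q_alice, Q_bob = scalar_mult(d_alice, G), scalar_mult(d_bob, G)
assert scalar_mult(d_alice, Q_bob) == scalar_mult(d_bob, Q_alice)
print("shared point:", scalar_mult(d_alice, Q_bob))    # (6, 3) on this toy curve
```

A production protocol would of course use a standardized curve and add authentication; the point of the sketch is only that every step reduces to scalar multiplications, with no pairing computation anywhere.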
{"title":"An efficient conference key agreement protocol suited for resource constrained devices","authors":"Manmohan Pundir ,&nbsp;Abhimanyu Kumar","doi":"10.1016/j.jpdc.2024.105011","DOIUrl":"10.1016/j.jpdc.2024.105011","url":null,"abstract":"<div><div>Conference key agreement (CKA) is essential for securing communication in group-oriented scenarios like multi-party messaging and collaborative environments. While elliptic curve cryptography (ECC) offers efficiency and strong security, ECC-based CKA protocols often rely on expensive pairings, making them computationally impractical for deployment over the resource limited devices. This paper introduces a novel CKA approach using ECC without requiring pairing computations, thus addressing scalability and efficiency challenges. The proposed protocol employs scalar point multiplications over a prime field elliptic curve group, enabling secure and efficient CKA operations with reduced computational overhead. Compared to existing ECC-based key agreement protocols, it minimizes user-level computation and enhances performance in computational efficiency, communication overhead, and security strength. Particularly suitable for resource-constrained environments like IoT and edge computing, where computational resources are limited yet secure group communication is crucial.</div></div>","PeriodicalId":54775,"journal":{"name":"Journal of Parallel and Distributed Computing","volume":"196 ","pages":"Article 105011"},"PeriodicalIF":3.4,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142703137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enabling semi-supervised learning in intrusion detection systems
IF 3.4 · CAS Zone 3 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-11-12 · DOI: 10.1016/j.jpdc.2024.105010
Panagis Sarantos, John Violos, Aris Leivadeas
Intrusion Detection Systems (IDS) are cybersecurity tools that analyze network traffic to identify suspicious activity and known threats and raise alerts. State-of-the-art IDS rely on supervised machine learning models which are trained to categorize network flows using a historical labeled dataset. Nonetheless, next-generation networks are characterized as heterogeneous and dynamic. The heterogeneity can make every network environment significantly different, and the dynamicity means that new threats are constantly emerging. These two factors raise the research question of whether a supervised machine-learning-based IDS can work efficiently in a network environment different from the one that generated its labeled training data. In this paper, we first answer this research question and then propose a semi-supervised learning approach that can generalize sufficiently to a different network environment using unlabeled data, taking into consideration that unlabeled data are much easier and cheaper to collect than labeled data. As a proof of concept, we conducted experiments with two publicly available labeled datasets, CIC-IDS2017 and CIC-IDS2018, and one unlabeled dataset, PS-Azure2023, which we constructed for this work and also make publicly available. The results confirm our assumption and the applicability of the semi-supervised learning paradigm for the design of IDS.
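One common way to realize such a semi-supervised setup is self-training with pseudo-labels. The sketch below is a generic example of that idea, not the specific method proposed in the paper; the variable names referring to datasets in the comments are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, rounds=5):
    """Iteratively pseudo-label the most confident unlabeled flows and retrain.

    X_lab, y_lab : labeled flow features and labels (e.g. from a public dataset)
    X_unlab      : unlabeled flows collected in the target network environment
    """
    X_lab, y_lab = np.asarray(X_lab), np.asarray(y_lab)
    X_unlab = np.asarray(X_unlab)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    for _ in range(rounds):
        clf.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= threshold     # keep high-confidence flows
        if not confident.any():
            break
        pseudo_y = clf.classes_[proba.argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, pseudo_y[confident]])
        X_unlab = X_unlab[~confident]                  # remaining unlabeled pool
    return clf

# Hypothetical usage: labeled flows from one environment (e.g. CIC-IDS2017
# features) plus unlabeled flows gathered in the deployment network.
# clf = self_train(X_source_labeled, y_source_labeled, X_target_unlabeled)
# predictions = clf.predict(X_target_test)
```

The confidence threshold is the main knob: set too low, label noise propagates; set too high, almost no unlabeled flows are absorbed and the model stays biased toward the source network.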
{"title":"Enabling semi-supervised learning in intrusion detection systems","authors":"Panagis Sarantos ,&nbsp;John Violos ,&nbsp;Aris Leivadeas","doi":"10.1016/j.jpdc.2024.105010","DOIUrl":"10.1016/j.jpdc.2024.105010","url":null,"abstract":"<div><div>Intrusion Detection systems (IDS) are alerting cybersecurity tools that analyze network traffic in order to identify suspicious activity and known threats. State of the art IDS rely on supervised machine learning models which are trained to categorize the network flow with a historical labeled dataset. Nonetheless, next-generation networks are characterized as heterogeneous and dynamic. The heterogeneity can make every network environment to be significantly different and the dynamicity means that new threats are constantly emerging. These two factors raise the research question if a supervised machine learning based IDS can work efficiently in a network environment different from the one that generated its labeled training data. In this paper, we first give an answer to this research question and next try to propose a semi-supervised learning approach that can be generalized sufficiently in a different network environment using unlabeled data, taking into consideration that unlabeled data are much easier and cheap to be collected compared to labeled ones. In order to have a proof of concept we made experiments with two labeled datasets CIC-IDS2017, CIC-IDS2018 which are publicly available and one unlabeled dataset PS-Azure2023 which we constructed for this work and make it also publicly available. The results confirm our assumption and the applicability of the semi-supervised learning paradigm for the design of IDS.</div></div>","PeriodicalId":54775,"journal":{"name":"Journal of Parallel and Distributed Computing","volume":"196 ","pages":"Article 105010"},"PeriodicalIF":3.4,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142652811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fault-tolerance in biswapped multiprocessor interconnection networks
IF 3.4 · CAS Zone 3 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-11-05 · DOI: 10.1016/j.jpdc.2024.105009
Basem Assiri, Muhammad Faisal Nadeem, Waqar Ali, Ali Ahmad
Interconnection networks play a vital role in connecting many sets of processor memories, known as processing vertices. Recently, multiprocessor interconnection networks have received much attention due to their cost-effectiveness and wide applications in parallel multi-processor systems connecting processors and memory modules. A locating set in a computer network is a selection of nodes whose positions determine the positions of all other nodes in the network. The locating number is the minimum size of a locating set needed to identify all network vertices. If any single node in a locating set fails, the set may no longer identify all nodes in the network; if the remaining nodes can still locate all other network nodes, the set is termed a fault-tolerant locating set. Fault tolerance is essential in multiprocessor networks, where every processor is subject to failure, to guarantee the system remains at full capacity if one or more components fail. In this study, we determine the fault-tolerant locating number of biswapped networks by considering different classes of networks as base clusters.
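These definitions translate directly into a brute-force check for small graphs. The sketch below computes a smallest fault-tolerant locating (resolving) set by exhaustive search; it is an editorial illustration run on a toy graph, not the biswapped-network constructions derived in the paper.

```python
import itertools
import networkx as nx

def is_locating(G, S, dist):
    """S locates G if every vertex gets a distinct vector of distances to S."""
    codes = {tuple(dist[v][s] for s in S) for v in G}
    return len(codes) == G.number_of_nodes()

def is_fault_tolerant(G, S, dist):
    """Fault tolerant: S minus any single node must still locate G."""
    return all(is_locating(G, [s for s in S if s != f], dist) for f in S)

def ft_locating_number(G):
    """Smallest fault-tolerant locating set by brute force (tiny graphs only)."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    nodes = list(G)
    for k in range(2, len(nodes) + 1):
        for S in itertools.combinations(nodes, k):
            if is_fault_tolerant(G, S, dist):
                return k, S
    return None

# Hypothetical sanity check on a 6-cycle (not a biswapped network).
G = nx.cycle_graph(6)
print(ft_locating_number(G))
```

The exponential search is only feasible for toy graphs, which is precisely why closed-form results for structured families such as biswapped networks are valuable.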
{"title":"Fault-tolerance in biswapped multiprocessor interconnection networks","authors":"Basem Assiri ,&nbsp;Muhammad Faisal Nadeem ,&nbsp;Waqar Ali ,&nbsp;Ali Ahmad","doi":"10.1016/j.jpdc.2024.105009","DOIUrl":"10.1016/j.jpdc.2024.105009","url":null,"abstract":"<div><div>Interconnection networks play a vital role in connecting many sets of processor memories, known as processing vertices. Recently, multiprocessor interconnection networks have obtained much attention due to their cost-effectiveness and wide applications in parallel multi-processor systems connecting processors and memory modules. A locating set in a computer network is to select certain nodes, called the locating set, whose positions determine the positions of all other nodes in the network. The locating number is defined as the minimum size of the locating set needed to identify all network vertices. Since, if any single node fails within the locating set, that set is no longer able to identify all the nodes within the network, if the remaining nodes in the locating set can still locate all other network nodes, then it is termed a fault-tolerant locating set. The fault tolerance becomes highly essential in multiprocessor networks, wherein every processor is subject to an absolute failure, to guarantee the system is at full capacity in case one or more components fail. In this study, we determine the fault-tolerant locating number of biswapped networks by considering different classes of networks as base clusters.</div></div>","PeriodicalId":54775,"journal":{"name":"Journal of Parallel and Distributed Computing","volume":"196 ","pages":"Article 105009"},"PeriodicalIF":3.4,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142592959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Front Matter 1 - Full Title Page (regular issues)/Special Issue Title page (special issues)
IF 3.4 · CAS Zone 3 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-11-02 · DOI: 10.1016/S0743-7315(24)00167-9
{"title":"Front Matter 1 - Full Title Page (regular issues)/Special Issue Title page (special issues)","authors":"","doi":"10.1016/S0743-7315(24)00167-9","DOIUrl":"10.1016/S0743-7315(24)00167-9","url":null,"abstract":"","PeriodicalId":54775,"journal":{"name":"Journal of Parallel and Distributed Computing","volume":"195 ","pages":"Article 105003"},"PeriodicalIF":3.4,"publicationDate":"2024-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142572923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Design and experimental evaluation of algorithms for optimizing the throughput of dispersed computing
IF 3.4 · CAS Zone 3 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-10-29 · DOI: 10.1016/j.jpdc.2024.104999
Xiangchen Zhao, Diyi Hu, Bhaskar Krishnamachari
We introduce three optimized scheduling algorithms for dispersed computing and present JupiterTP, a real-world system built on k8s and the prior Jupiter system, enabling end-to-end computation on distributed clusters. Distinguishing itself from traditional throughput optimization approaches that focus on theory and simulations, our work is the first implementation of such an end-to-end system capable of handling arbitrary DAGs across diverse computing networks, including public clouds, IoT systems, and edge networks. Beyond mere scheduling, JupiterTP integrates profilers, execution, and orchestration engines, offering unified interfaces for additional scheduling algorithm integrations. The system's performance is tested on real clusters and real applications, in contrast to prior work that relied on simulations alone. We make JupiterTP available to the community as open-source software at https://github.com/ANRGUSC/JupiterTP.
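To make the throughput objective concrete, the sketch below scores a task-to-machine placement by its bottleneck resource and builds a placement greedily. The numbers, machine names, and the greedy rule are invented for illustration; they are not the three algorithms or the JupiterTP scheduler described in the paper.

```python
import networkx as nx

def placement_throughput(dag, placement, exec_time, data_mb, bandwidth):
    """Steady-state pipeline throughput bound for one task placement.

    exec_time[(task, machine)] : seconds of compute per input item
    data_mb[(u, v)]            : MB shipped per item along DAG edge (u, v)
    bandwidth[(m1, m2)]        : MB/s between distinct machines
    Throughput is limited by the busiest machine or link.
    """
    load = {}                                          # seconds of work per item
    for t, m in placement.items():
        load[m] = load.get(m, 0.0) + exec_time[(t, m)]
    for u, v in dag.edges:
        mu, mv = placement[u], placement[v]
        if mu != mv:
            load[(mu, mv)] = load.get((mu, mv), 0.0) + data_mb[(u, v)] / bandwidth[(mu, mv)]
    return 1.0 / max(load.values())                    # items per second

def greedy_placement(dag, machines, exec_time, data_mb, bandwidth):
    """Place tasks in topological order, keeping the current bottleneck smallest."""
    placement = {}
    for t in nx.topological_sort(dag):
        best_m, best_tp = None, -1.0
        for m in machines:
            trial = dict(placement, **{t: m})
            sub = dag.subgraph(trial)                  # edges among placed tasks only
            tp = placement_throughput(sub, trial, exec_time, data_mb, bandwidth)
            if tp > best_tp:
                best_m, best_tp = m, tp
        placement[t] = best_m
    return placement

# Hypothetical three-task chain placed on two machines.
dag = nx.DiGraph([("src", "detect"), ("detect", "sink")])
machines = ["edge", "cloud"]
exec_time = {("src", "edge"): 0.1, ("src", "cloud"): 0.3,
             ("detect", "edge"): 2.0, ("detect", "cloud"): 0.4,
             ("sink", "edge"): 0.1, ("sink", "cloud"): 0.1}
data_mb = {("src", "detect"): 1.0, ("detect", "sink"): 0.1}
bandwidth = {("edge", "cloud"): 5.0, ("cloud", "edge"): 5.0}
place = greedy_placement(dag, machines, exec_time, data_mb, bandwidth)
print(place, placement_throughput(dag, place, exec_time, data_mb, bandwidth))
```

In this toy instance the heavy "detect" stage is pushed to the faster machine even though that adds a network hop, because the link time is smaller than the compute time it removes from the bottleneck.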
{"title":"Design and experimental evaluation of algorithms for optimizing the throughput of dispersed computing","authors":"Xiangchen Zhao ,&nbsp;Diyi Hu,&nbsp;Bhaskar Krishnamachari","doi":"10.1016/j.jpdc.2024.104999","DOIUrl":"10.1016/j.jpdc.2024.104999","url":null,"abstract":"<div><div>We introduce three optimized scheduling algorithms for dispersed computing and present JupiterTP, a real-world system built on k8s and the prior Jupiter system, enabling end-to-end computation on distributed clusters. Distinguishing itself from traditional throughput optimization approaches that focus on theory and simulations, our work is the first implementation of such an end-to-end system capable of handling arbitrary DAGs across diverse computing networks, including public clouds, IoT systems, and edge networks. Beyond mere scheduling, JupiterTP integrates profilers, execution, and orchestration engines, offering unified interfaces for additional scheduling algorithm integrations. The system's performance is tested on real clusters and real applications, compared to prior work that relied on simulations alone. We make JupiterTP available to the community as open-source software at <span><span>https://github.com/ANRGUSC/JupiterTP</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":54775,"journal":{"name":"Journal of Parallel and Distributed Computing","volume":"196 ","pages":"Article 104999"},"PeriodicalIF":3.4,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hands-on parallel & distributed computing with Raspberry Pi devices and clusters
IF 3.4 · CAS Zone 3 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-10-28 · DOI: 10.1016/j.jpdc.2024.104996
Elizabeth Shoop, Suzanne J. Matthews, Richard Brown, Joel C. Adams
Parallel and distributed computing (PDC) concepts are now required topics for accredited undergraduate computer science programs. However, introducing PDC into the CS curriculum is challenging for several reasons, including an instructor's lack of PDC knowledge and difficulties in accessing PDC hardware. This paper addresses both of these challenges by presenting free, interactive, web-based PDC teaching modules using inexpensive Raspberry Pi single board computers (SBCs). Our materials include a free disk image that makes it possible for instructors to build Raspberry Pi clusters in minutes and use our software in a variety of curricular contexts. Our multi-year assessment of these materials with students and faculty members indicates that: (i) our materials increased students' confidence regarding important PDC concepts and motivated them to study PDC further; and (ii) our materials increased faculty members' confidence and preparedness in teaching key PDC concepts at their own institutions.
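The flavor of hands-on exercises on such clusters can be conveyed with a short mpi4py program of the kind that runs across Raspberry Pi nodes. This particular Monte Carlo example and the hostfile name are illustrative only, not taken from the course modules themselves.

```python
# Each MPI process estimates part of pi by random sampling, and rank 0
# combines the partial counts with a reduction across the Pi cluster.
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

samples_per_rank = 1_000_000
rng = random.Random(rank)                 # a different random stream per node
hits = sum(1 for _ in range(samples_per_rank)
           if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

total_hits = comm.reduce(hits, op=MPI.SUM, root=0)
if rank == 0:
    pi_estimate = 4.0 * total_hits / (samples_per_rank * size)
    print(f"pi ~ {pi_estimate:.5f} using {size} processes")

# Run across the cluster (hypothetical hostfile listing the Pi nodes):
#   mpiexec -n 8 -hostfile cluster_hosts python3 pi_monte_carlo.py
```

Scaling the process count across one, two, and four boards gives students a tangible view of speedup and communication overhead on hardware they can hold in their hands.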
{"title":"Hands-on parallel & distributed computing with Raspberry Pi devices and clusters","authors":"Elizabeth Shoop ,&nbsp;Suzanne J. Matthews ,&nbsp;Richard Brown ,&nbsp;Joel C. Adams","doi":"10.1016/j.jpdc.2024.104996","DOIUrl":"10.1016/j.jpdc.2024.104996","url":null,"abstract":"<div><div>Parallel and distributed computing (PDC) concepts are now required topics for accredited undergraduate computer science programs. However, introducing PDC into the CS curriculum is challenging for several reasons, including an instructor's lack of PDC knowledge and difficulties in accessing PDC hardware. This paper addresses both of these challenges by presenting free, interactive, web-based PDC teaching modules using inexpensive Raspberry Pi single board computers (SBCs). Our materials include a free disk image that makes it possible for instructors to build Raspberry Pi clusters in minutes and use our software in a variety of curricular contexts. Our multi-year assessment of these materials on students and faculty members indicates that: (i) our materials increased students' confidence regarding important PDC concepts and motivated them to study PDC further; and (ii) our materials increased faculty members' confidence and preparedness in teaching key PDC concepts at their own institutions.</div></div>","PeriodicalId":54775,"journal":{"name":"Journal of Parallel and Distributed Computing","volume":"196 ","pages":"Article 104996"},"PeriodicalIF":3.4,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0