
Concurrency and Computation-Practice & Experience: Latest Publications

Awareness based gannet optimization for source location privacy preservation with multiple assets in wireless sensor networks
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-06-19 | DOI: 10.1002/cpe.8191
Mintu Singh, Maheshwari Prasad Singh

The wireless sensor network (WSN) has been assimilated into modern society and is utilized in many crucial application domains, including animal monitoring, border surveillance, asset monitoring, and so forth. These technologies help conceal the location of an event's occurrence from an adversary. Preserving source-location privacy is challenging given the sensor nodes' resource limitations and the need for efficient routing strategies. Hence, this research introduces a novel source-location privacy preservation scheme using the awareness-based Gannet with random-Dijkstra algorithm (AGO-RD). The network is initialized by splitting the hotspot and non-hotspot regions optimally using the proposed awareness-based Gannet (AGO) algorithm. Here, a multi-objective fitness function based on throughput, energy consumption, latency, and entropy guides the initialization. The information is then forwarded to a phantom node in the non-hotspot region, far from the sink node, to preserve the source location's privacy. The proposed random-Dijkstra algorithm routes the information from the phantom node to the sink with greater security. Evaluation of the proposed AGO-RD-based technique yielded a delay of 6.52 ms, a throughput of 95.68%, a network lifetime of 7109.9 rounds, and an energy consumption of 0.000125 μJ.
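The abstract does not spell out the random-Dijkstra routing step, so the following minimal Python sketch only illustrates the general idea under stated assumptions: each packet recomputes a Dijkstra path after randomly perturbing edge weights, so successive phantom-to-sink routes vary and are harder to trace. The function name `random_dijkstra`, the jitter model, and the toy topology are illustrative, not the paper's exact algorithm.

```python
import heapq
import random

def random_dijkstra(graph, source, sink, jitter=0.5):
    """Dijkstra's shortest path with randomly perturbed edge weights.

    Perturbing each weight makes successive routes between the same
    phantom node and sink differ, which is the intuition behind
    routing-diversity defenses for source-location privacy.
    (Illustrative sketch only, not the paper's exact algorithm.)
    """
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == sink:
            break
        for v, w in graph.get(u, []):
            # Randomize the cost so repeated queries yield different paths.
            cost = d + w * (1.0 + random.uniform(0.0, jitter))
            if cost < dist.get(v, float("inf")):
                dist[v] = cost
                prev[v] = u
                heapq.heappush(pq, (cost, v))
    # Reconstruct the phantom-to-sink path.
    path, node = [], sink
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path))

# Toy topology: phantom node "P" routes to sink "S".
graph = {
    "P": [("A", 1.0), ("B", 1.2)],
    "A": [("S", 1.0), ("B", 0.5)],
    "B": [("S", 1.1)],
}
print(random_dijkstra(graph, "P", "S"))
```

Running the snippet several times typically yields different paths between the same phantom node and sink, which is the diversity property a routing phase of this kind relies on.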

Citations: 0
RETRACTION: Minimal Channel Cost-Based Energy-Efficient Resource Allocation Algorithm for Task Offloading Under Fog Computing Environment
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-06-18 | DOI: 10.1002/cpe.8202

RETRACTION: B. Premalatha and P. Prakasam, “Minimal Channel Cost-Based Energy-Efficient Resource Allocation Algorithm for Task Offloading Under Fog Computing Environment,” Concurrency and Computation: Practice and Experience 36, no. 7 (2024): e7968, https://doi.org/10.1002/cpe.7968.

The above article, published online on 27 November 2023 in Wiley Online Library (wileyonlinelibrary.com), has been retracted by agreement between the journal Editors-in-Chief, David W. Walker, Nitin Auluck, Jinjun Chen, Martin Berzins; and John Wiley and Sons Ltd. The retraction has been agreed upon following an investigation into concerns raised by a third party, which revealed major textual overlap, significant primary data redundancy and simultaneous submission with a previously published article by the same group of authors elsewhere. Such publishing practice is against the journal's policy and Wiley's Best Practice Guidelines on Research Integrity and Publishing Ethics. The authors were informed of the decision to retract but did not agree to the retraction or the wording.

Citations: 0
Enhancing UAV-HetNet security through functional encryption framework
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-06-18 | DOI: 10.1002/cpe.8206
Sachin Kumar Gupta, Parul Gupta, Pawan Singh

In the current landscape, the rapid expansion of the internet has brought about a corresponding surge in the number of data consumers. As user volume and diversity have escalated, a shift from conventional, uniform networks to Heterogeneous Networks (HetNets) has emerged. HetNets are designed with a primary objective: enhancing Quality of Service (QoS) standards for users. In HetNets facilitated by Unmanned Aerial Vehicles (UAVs), a substantial influx of users and devices is observed. Within this multifaceted environment, the potential for malicious intruder nodes to execute and propagate harmful actions across the network is a distinct concern; consequently, the entirety of network communication becomes susceptible to a multitude of security threats. To address these vulnerabilities and safeguard communication, the Functional Encryption (FE) technique is employed, which protects data against intrusion attacks. This paper presents a comprehensive methodology for implementing FE within UAV-integrated HetNets, executed in two sequential phases: the first secures communication between User Equipment (UE) and the Micro Base Station (MBS), and the second secures communication between the MBS and the UAV. The viability of the proposed approach is substantiated through validation with the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool, involving the development of High-Level Protocol Specification Language (HLPSL) specifications. The successful validation outcome underscores the methodology's capacity to provide the intended security measures and robustness to the network environment.
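The FE construction itself is not detailed in this listing; the sketch below only mirrors the two-phase structure (UE to MBS, then MBS to UAV) using standard X25519 key agreement with HKDF from the Python `cryptography` package as a stand-in for the paper's functional encryption scheme. The helper `derive_link_key` and the phase labels are assumptions for illustration.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_link_key(own_priv, peer_pub, label: bytes) -> bytes:
    """Derive a symmetric link key from an X25519 exchange (illustrative)."""
    shared = own_priv.exchange(peer_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=label).derive(shared)

# Phase 1: User Equipment (UE) <-> Micro Base Station (MBS).
ue_priv, mbs_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()
k_ue_mbs = derive_link_key(ue_priv, mbs_priv.public_key(), b"phase1-UE-MBS")
assert k_ue_mbs == derive_link_key(mbs_priv, ue_priv.public_key(), b"phase1-UE-MBS")

# Phase 2: MBS <-> UAV.
uav_priv = X25519PrivateKey.generate()
k_mbs_uav = derive_link_key(mbs_priv, uav_priv.public_key(), b"phase2-MBS-UAV")
assert k_mbs_uav == derive_link_key(uav_priv, mbs_priv.public_key(), b"phase2-MBS-UAV")
print("both link keys established")
```

The two independent labels keep the phase-1 and phase-2 keys cryptographically separated, which reflects the paper's sequential-phase design even though the underlying primitive here is ordinary key agreement rather than FE.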

Citations: 0
An integrated graph data privacy attack framework based on graph neural networks in IoT
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-06-14 | DOI: 10.1002/cpe.8209
Xiaoran Zhao, Changgen Peng, Hongfa Ding, Weijie Tan

Knowledge graphs contain a large amount of entity and relational data, and graph neural networks, a class of efficient deep-learning-based graph representation techniques, excel at knowledge graph modeling. However, previous neural network architectures for the most part learn only node representations and do not fully consider the heterogeneity of the data. In this article, we propose PAFI, a privacy attack framework for IoT that classifies entities and relations, learns embedding representations in multi-relational graphs, and can be applied to several existing neural network algorithms. On this basis, a fine-grained privacy attack model, FPM, is proposed, which can attack multiple targets, select among target tasks, and greatly improve the generalization ability of the attack model. The effectiveness of PAFI and FPM is demonstrated on real network datasets, where both achieve good results compared with previous attack methods.
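PAFI's exact architecture is not given here; as a hedged illustration of how relation-aware message passing produces embeddings in a multi-relational graph, the NumPy sketch below implements one R-GCN-style layer with per-relation weight matrices. The toy graph, dimensions, and names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-relational graph: 4 entities, 2 relation types.
num_nodes, dim = 4, 8
edges = {  # relation id -> list of (src, dst)
    0: [(0, 1), (1, 2)],
    1: [(2, 3), (3, 0)],
}
X = rng.normal(size=(num_nodes, dim))                       # entity features
W = {r: rng.normal(size=(dim, dim)) * 0.1 for r in edges}   # per-relation weights
W_self = np.eye(dim)                                        # self-loop weight

def rgcn_layer(X, edges, W, W_self):
    """One R-GCN-style layer: sum relation-specific messages per node."""
    H = X @ W_self
    for r, pairs in edges.items():
        for src, dst in pairs:
            H[dst] += X[src] @ W[r]      # message transformed by relation r
    return np.maximum(H, 0.0)            # ReLU nonlinearity

H = rgcn_layer(X, edges, W, W_self)
print(H.shape)  # (4, 8): relation-aware entity embeddings
```

The per-relation weight matrices are what let the layer respect data heterogeneity, the property the abstract says plain node-representation architectures miss.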

Citations: 0
DRL-based computing offloading approach for large-scale heterogeneous tasks in mobile edge computing
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-06-12 | DOI: 10.1002/cpe.8156
Bingkun He, Haokun Li, Tong Chen

In the last few years, the rapid advancement of the Internet of Things (IoT) and the widespread adoption of smart cities have posed new challenges to computing services. Traditional cloud computing models fail to fulfil the rapid-response requirements of latency-sensitive applications, while mobile edge computing (MEC) improves service efficiency and user experience by moving computing tasks to servers located at the network edge. However, designing an effective computation offloading strategy in complex scenarios involving multiple computing tasks, nodes, and services remains a pressing issue. In this paper, a computation offloading approach based on Deep Reinforcement Learning (DRL) is proposed for large-scale heterogeneous computing tasks. First, Markov Decision Processes (MDPs) are used to formulate the offloading-decision and resource-allocation problems in large-scale heterogeneous MEC systems. Subsequently, a comprehensive "end-edge-cloud" framework, together with the corresponding time-overhead and resource-allocation models, is constructed. Finally, extensive experiments on real datasets demonstrate that the proposed approach outperforms existing methods in improving service response speed, reducing latency, balancing server loads, and saving energy.
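The paper's DRL agent is not specified in this listing; to make the MDP formulation concrete, the sketch below uses tabular Q-learning on a toy offloading MDP whose state is a discretized edge-queue load and whose actions are local/edge/cloud targets. The reward model in `step` is an invented stand-in, not the paper's system model.

```python
import random

ACTIONS = ["local", "edge", "cloud"]   # offloading targets
STATES = range(5)                      # discretized edge-queue load

def step(state, action):
    """Toy MEC environment: returns (next_state, reward).

    Reward is negative latency; offloading to a loaded edge is penalized.
    Purely illustrative dynamics.
    """
    latency = {"local": 5.0, "edge": 1.0 + state, "cloud": 3.0}[action]
    load_delta = {"local": -1, "edge": +1, "cloud": -1}[action]
    next_state = min(max(state + load_delta, 0), 4)
    return next_state, -latency

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1
state = 0
for _ in range(20000):
    # Epsilon-greedy action selection.
    a = (random.choice(ACTIONS) if random.random() < eps
         else max(ACTIONS, key=lambda x: Q[(state, x)]))
    nxt, r = step(state, a)
    best_next = max(Q[(nxt, x)] for x in ACTIONS)
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
    state = nxt

# Learned policy: preferred offloading target per load level.
for s in STATES:
    print(s, max(ACTIONS, key=lambda x: Q[(s, x)]))
```

A deep RL method replaces the Q-table with a neural network over a much richer state, but the decision-and-reward loop has the same shape.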

Citations: 0
Fault diagnosis of power equipment based on variational autoencoder and semi-supervised learning
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-06-12 | DOI: 10.1002/cpe.8204
Bo Ye, Feng Li, Linghao Zhang, Zhengwei Chang, Bin Wang, Xiaoyu Zhang, Sayina Bodanbai

The issue of fault diagnosis in power equipment is receiving increasing attention from scholars. Because of the important role bearings play in power equipment, bearing faults have become the main cause of wind turbine shutdowns; this paper therefore takes bearing equipment as its case study. To address the insufficient and unbalanced fault-sample data of wind turbine bearings, a fault diagnosis (FD) method based on a variational autoencoder and semi-supervised learning is proposed. First, building on Label Propagation-random forests (LP-RFs) and a small number of labeled fault samples, a semi-supervised learning algorithm is proposed to label the original data samples. Second, a small number of training samples are preprocessed by the variational autoencoder to reduce the imbalance of the fault samples. The RFs-based method is then trained on the processed fault samples to obtain a mature FD classifier. Finally, the proposed method is applied to bearing fault diagnosis (BFD), and the results show that it realizes BFD effectively; it can also be applied to fault diagnosis in power transmission and transformation systems.
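As a hedged sketch of the LP-RFs stage described above, the snippet below propagates a handful of labels to an unlabeled pool with scikit-learn's `LabelPropagation` and then trains a random forest on the propagated labels. The synthetic features stand in for bearing vibration data, and the VAE preprocessing step is omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import LabelPropagation

# Synthetic stand-in for bearing vibration features (not real FD data).
X, y = make_classification(n_samples=600, n_features=12, n_classes=3,
                           n_informative=6, random_state=0)

# Keep labels for only 30 samples; mark the rest unlabeled (-1).
rng = np.random.default_rng(0)
y_semi = np.full_like(y, -1)
labeled = rng.choice(len(y), size=30, replace=False)
y_semi[labeled] = y[labeled]

# Step 1: propagate the few labels across the unlabeled pool.
lp = LabelPropagation(kernel="knn", n_neighbors=7).fit(X, y_semi)
y_full = lp.transduction_

# Step 2: train the fault classifier on the propagated labels.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y_full)
print("agreement with ground truth:", (clf.predict(X) == y).mean())
```

In the paper's pipeline, the VAE would additionally rebalance the minority fault classes before the random forest is trained.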

Citations: 0
QSKCG: Quantum-based secure key communication and key generation scheme for outsourced data in cloud
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-06-11 | DOI: 10.1002/cpe.8192
Vamshi Adouth, Eswari Rajagopal

In the era of digital proliferation, individuals opt for cloud servers to store their data due to the diverse advantages they offer. However, entrusting data to cloud servers relinquishes users' control, potentially compromising data confidentiality and integrity. Traditional auditing methods designed to ensure data integrity on cloud servers typically depend on trusted third-party auditors, yet many existing auditing approaches grapple with intricate certificate management and key escrow issues. Furthermore, the imminent threat of powerful quantum computers poses a risk of swiftly compromising these methods in polynomial time. To overcome these challenges, this paper introduces QSKCG, a Quantum-based Secure Key Communication and Key Generation scheme for outsourced data in the cloud. Leveraging Elliptic Curve Cryptography, the BB84 secure communication protocol, certificateless signatures, and a blockchain network, the proposed scheme is shown through security analysis to be robust and highly efficient. Performance analysis further underscores its practicality for achieving post-quantum security in cloud storage.
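The BB84 component can be illustrated without quantum hardware: the sketch below simulates basis sifting between two parties, keeping only the bits where their randomly chosen measurement bases agree. It assumes an ideal, eavesdropper-free channel and is not the paper's full key generation scheme.

```python
import random

def bb84_sift(n_bits=32, seed=1):
    """Simulate BB84 basis sifting between Alice and Bob (no eavesdropper).

    Bits where the randomly chosen bases match form the shared raw key;
    in the paper's setting, such a key would then secure cloud communication.
    """
    rng = random.Random(seed)
    alice_bits = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("XZ") for _ in range(n_bits)]
    bob_bases = [rng.choice("XZ") for _ in range(n_bits)]
    # When bases match, Bob measures Alice's bit exactly; otherwise the
    # outcome is random and the position is discarded during sifting.
    key = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)
           if ab == bb]
    return key

print("sifted key:", bb84_sift())
```

On average half the positions survive sifting; a real deployment would follow this with error estimation and privacy amplification.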

Citations: 0
GPU parallel processing to enable extensive criticality analysis in state estimation
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-06-11 | DOI: 10.1002/cpe.8200
Ayres Nishio, Milton B. Do Coutto Filho, Julio C. Stachinni de Souza, Esteban W. G. Clua

Power system monitoring relies on the reliability of state estimation (SE) results. SE plays a dominant role in data debugging when sufficient data is available. Criticality analysis (CA) integrates SE as a module in which measurements, taken one by one or in groups (tuples) of minimal cardinality, are designated crucial. The combinatorial nature of extensive CA (not restricted to identifying low-cardinality critical tuples) characterizes its computational complexity and imposes limits that are challenging to surpass. In simple terms, these limits are set by the number of measurements to be combined, the cardinality of the tuples, and the computing time required to check the criticality condition. This paper proposes an innovative computational solution that expands the CA limits reported to date in the literature. A multi-threaded framework is built on a graphics processing unit (GPU) parallel processing environment, and the conceived architecture favors evaluating massive measurement combinations of diverse cardinality in extensive CA. Numerical results reveal significant speed-ups with the proposed approach, in contrast with those reported in research efforts published so far.
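To make the criticality condition concrete: a k-tuple of measurements is critical when deleting its rows from the measurement Jacobian drops the rank below what observability requires. The serial NumPy sketch below checks exactly that over all k-combinations; the paper's contribution is evaluating such combinations massively in parallel on a GPU, which this CPU sketch does not reproduce.

```python
import itertools
import numpy as np

def critical_tuples(H, k):
    """Enumerate k-tuples of measurements whose removal breaks observability.

    A tuple is critical when deleting its rows from the measurement
    Jacobian H drops the column rank. Serial sketch of the check the
    paper parallelizes on a GPU.
    """
    m, _ = H.shape
    full_rank = np.linalg.matrix_rank(H)
    hits = []
    for tup in itertools.combinations(range(m), k):
        keep = np.delete(np.arange(m), tup)
        if np.linalg.matrix_rank(H[keep]) < full_rank:
            hits.append(tup)
    return hits

# Toy Jacobian: 5 measurements over 3 states.
H = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.],
              [1., 1., 0.],
              [0., 1., 1.]])
print("critical pairs:", critical_tuples(H, 2))
```

Since the number of k-combinations grows combinatorially with m, each rank check is independent, which is precisely what makes the problem amenable to GPU batching.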

Citations: 0
Fuzzy logic-based computation offloading technique in fog computing
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-06-10 | DOI: 10.1002/cpe.8198
Dinesh Soni, Neetesh Kumar

The fog computing environment expands the capabilities of cloud computing by moving computing, storage, and networking services closer to IoT devices. These resource-constrained IoT devices often face challenges such as high task failure rates and extended execution latency due to data-traffic congestion. Distributing IoT services through task offloading across different layers of the computing paradigm improves Quality of Service (QoS) parameters. This work aims to allocate custom workflow-based real-time tasks or jobs for processing across the cloud, fog, and edge layers, optimizing QoS factors such as makespan, energy consumption, and cost. In the fog computing environment, challenges arise from uncertainty about job execution locations and the difficulty of predicting future user requirements. Fuzzy logic offers low-complexity solutions for handling unpredictable and rapidly changing conditions. This paper proposes a hybrid fog-cloud computing architecture and an intelligent fuzzy-logic-based computation offloading approach. The approach effectively allocates workloads among the edge, fog, and cloud layers, yielding improvements in makespan (7.51%), energy consumption (4.63%), and cost (13.60%). The proposed method selects suitable processing units or compute nodes for job execution, utilizing heterogeneous resources, and simulation results demonstrate that it outperforms current state-of-the-art algorithms.
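The paper's rule base is not reproduced in this listing; the sketch below shows the general shape of a fuzzy offloading decision, with triangular membership functions over normalized CPU load and latency need, and a tiny invented rule set mapping fuzzified inputs to an edge/fog/cloud layer.

```python
def triangular(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def offload_decision(cpu_load, latency_need):
    """Tiny Mamdani-style rule base for choosing an execution layer.

    cpu_load and latency_need are normalized to [0, 1]. Membership
    shapes and rules are invented for illustration.
    """
    load_low = triangular(cpu_load, -0.5, 0.0, 0.6)
    load_high = triangular(cpu_load, 0.4, 1.0, 1.5)
    lat_tight = triangular(latency_need, -0.5, 0.0, 0.5)
    lat_loose = triangular(latency_need, 0.5, 1.0, 1.5)

    scores = {
        "fog":   min(load_high, lat_tight),   # busy device, tight deadline
        "cloud": min(load_high, lat_loose),   # busy device, relaxed deadline
        "edge":  load_low,                    # lightly loaded edge handles it
    }
    return max(scores, key=scores.get)

print(offload_decision(cpu_load=0.9, latency_need=0.1))  # -> fog
```

Because the rules are simple min/max operations over membership values, the decision is cheap enough to run on the resource-constrained devices the abstract describes.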

Citations: 0
Improving ROUGE-1 by 6%: A novel multilingual transformer for abstractive news summarization
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-06-10 | DOI: 10.1002/cpe.8199
Sandeep Kumar, Arun Solanki

Natural language processing (NLP) has undergone a significant transformation, evolving from manually crafted rules to powerful deep learning techniques such as transformers. These advancements have revolutionized domains including summarization, question answering, and more. Statistical models like hidden Markov models (HMMs) and supervised learning played crucial roles in laying the foundation for this progress, and recent breakthroughs in transfer learning and the emergence of large-scale models like BERT and GPT have further pushed the boundaries of NLP research. News summarization, however, remains a challenging NLP task, often producing factual inaccuracies or losing the article's essence. In this study, we propose a novel approach to news summarization utilizing a fine-tuned Transformer architecture built on Google's mt-small tokenizer and pretrained model. Our model demonstrates significant performance improvements over previous methods on the Inshorts English News dataset, achieving a 6% enhancement in the ROUGE-1 score and reducing training loss by 50%. This breakthrough facilitates the generation of reliable and concise news summaries, thereby enhancing information accessibility and user experience. We conduct a comprehensive evaluation of our model's performance using popular metrics such as ROUGE, with the proposed model achieving ROUGE-1: 54.6130, ROUGE-2: 31.1543, ROUGE-L: 50.7709, and ROUGE-LSum: 50.7907. Furthermore, we observe a substantial reduction in training and validation losses, underscoring the effectiveness of the proposed approach.
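The fine-tuning pipeline is not included in this listing; the hedged sketch below only shows the surrounding machinery: loading a small multilingual seq2seq checkpoint through Hugging Face `transformers` and scoring output with the `rouge_score` package, the same family of metrics quoted above. The checkpoint name `google/mt5-small` is an assumption based on the abstract's mention of "mt-small", and the example texts are invented.

```python
# pip install transformers sentencepiece rouge_score  (assumed environment)
from rouge_score import rouge_scorer
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# "google/mt5-small" is an assumption; the abstract only says "mt-small".
name = "google/mt5-small"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

article = "summarize: The city council approved the new transit budget ..."
inputs = tokenizer(article, return_tensors="pt", truncation=True)
ids = model.generate(**inputs, max_new_tokens=48, num_beams=4)
summary = tokenizer.decode(ids[0], skip_special_tokens=True)

# ROUGE evaluation of the kind used for the reported scores.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
print(scorer.score("Council approves transit budget.", summary))
```

Without the paper's fine-tuning on the Inshorts dataset, the raw checkpoint will not produce useful summaries; the snippet only demonstrates the load-generate-score loop.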

Citations: 0