
The Journal of Supercomputing: Latest Publications

A quadratic regression model to quantify certain latest corona treatment drug molecules based on coindices of M-polynomial
Pub Date : 2024-09-18 DOI: 10.1007/s11227-024-06434-w
Shahid Zaman, Sadaf Rasheed, Ahmed Alamer

Medical research encounters time, cost, solubility, and data challenges in new drug development. Within the realm of theory, chemical graph theory plays a crucial role in drug design. The SARS-CoV-2 pandemic prompted urgent exploration of drugs such as favipiravir, baricitinib, fluvoxamine, nirmatrelvir, molnupiravir, lopinavir, and remdesivir. Developing effective treatments for COVID-19 is a top priority for health authorities as they strive to curb the pandemic's impact on public health and prevent future outbreaks. This article characterizes the CoM-polynomial and its derivatives to determine the topological characteristics of several antiviral drugs. Using this approach, we examine physicochemical properties with a quadratic regression method. The results indicate a significant correlation between the investigated topological coindices and the physicochemical properties of potential antiviral medicines.
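
To illustrate the kind of fit involved, here is a minimal quadratic-regression sketch in Python; the coindex values and property values are hypothetical placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical coindex values for a few drug molecular graphs and a
# physicochemical property (e.g. a solubility measure); NOT data from the paper.
coindex = np.array([118.0, 142.0, 163.0, 188.0, 205.0, 231.0])
prop    = np.array([3.1,   3.8,   4.6,   5.7,   6.4,   7.9])

# Fit prop ~ a*coindex^2 + b*coindex + c (quadratic regression).
a, b, c = np.polyfit(coindex, prop, deg=2)

# Coefficient of determination R^2 to judge correlation strength.
pred = np.polyval([a, b, c], coindex)
ss_res = np.sum((prop - pred) ** 2)
ss_tot = np.sum((prop - prop.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"prop ~ {a:.4g}*x^2 + {b:.4g}*x + {c:.4g},  R^2 = {r2:.3f}")
```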

Citations: 0
Data integration from traditional to big data: main features and comparisons of ETL approaches
Pub Date : 2024-09-16 DOI: 10.1007/s11227-024-06413-1
Afef Walha, Faiza Ghozzi, Faiez Gargouri

Data integration combines information from different sources to provide a comprehensive view for making informed business decisions. The ETL (Extract, Transform, and Load) process is essential in data integration. In the past two decades, modeling the ETL process has become a priority for effectively managing information. This paper aims to explore ETL approaches to help researchers and organizational stakeholders overcome challenges, especially in Big Data integration. It offers a comprehensive overview of ETL methods, from traditional to Big Data, and discusses their advantages, limitations, and the primary trends in Big Data integration. The study emphasizes that many technologies have been integrated into ETL steps for data collection, storage, processing, querying, and analysis without proper modeling. Therefore, more generic and customized design modeling of the ETL steps should be carried out to ensure reusability and flexibility. The paper summarizes the exploration of ETL modeling, focusing on Big Data scalability and processing trends. It also identifies critical dilemmas, such as ensuring compatibility across multiple sources and dealing with large volumes of Big Data. Furthermore, it suggests future directions in Big Data integration by leveraging advanced artificial intelligence processing and storage systems to ensure consistency, efficiency, and data integrity.
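
As an illustration of the three ETL stages the survey discusses, below is a minimal Python sketch; the file name `sales.csv`, the column names, and the SQLite warehouse table are hypothetical, and real pipelines add validation, scheduling, and error handling.

```python
import csv
import sqlite3

def extract(path):
    # Extract: read raw rows from a CSV source (hypothetical schema).
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    # Transform: drop incomplete records, normalize types, derive a field.
    for row in rows:
        if not row.get("amount"):
            continue
        yield {
            "customer": row["customer"].strip().lower(),
            "amount": float(row["amount"]),
            "year": row["date"][:4],
        }

def load(rows, db_path="warehouse.db"):
    # Load: append the cleaned rows into a warehouse table.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS sales (customer TEXT, amount REAL, year TEXT)")
    con.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                    [(r["customer"], r["amount"], r["year"]) for r in rows])
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("sales.csv")))
```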

Citations: 0
End-to-end probability analysis method for multi-core distributed systems
Pub Date : 2024-09-13 DOI: 10.1007/s11227-024-06460-8
Xianchen Shi, Yian Zhu, Lian Li

Timing determinism in embedded real-time systems requires meeting timing constraints not only for individual tasks but also for chains of tasks that involve multiple messages. End-to-end analysis is a commonly used approach for solving such problems. However, the temporal properties of tasks often have uncertainty, which makes end-to-end analysis challenging and prone to errors. In this paper, we focus on enhancing the precision and safety of end-to-end timing analysis by introducing a novel probabilistic method. Our approach involves establishing a probabilistic model for end-to-end timing analysis and implementing two algorithms: the maximum data age detection algorithm and the end-to-end timing deadline miss probability detection algorithm. The experimental results indicate that our approach surpasses traditional analytical methods in terms of safety and significantly enhances the capability to detect the probability of missing end-to-end deadlines.
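
The paper's probabilistic model is not reproduced here, but the sketch below shows one generic way an end-to-end deadline-miss probability can be computed: convolve per-task response-time distributions (assumed independent, with hypothetical values) and sum the tail beyond the deadline.

```python
import numpy as np

# Per-task response-time PMFs over a 1 ms grid (hypothetical values,
# assuming independent tasks; not the algorithm from the paper).
task_pmfs = [
    np.array([0.0, 0.6, 0.3, 0.1]),        # task 1: 1-3 ms
    np.array([0.0, 0.0, 0.5, 0.4, 0.1]),   # task 2: 2-4 ms
    np.array([0.0, 0.7, 0.2, 0.1]),        # task 3: 1-3 ms
]

# End-to-end latency PMF of the task chain = convolution of the task PMFs.
chain = np.array([1.0])
for pmf in task_pmfs:
    chain = np.convolve(chain, pmf)

deadline_ms = 8
miss_prob = chain[deadline_ms + 1:].sum()   # P(latency > deadline)
print(f"P(end-to-end latency > {deadline_ms} ms) = {miss_prob:.4f}")
```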

Citations: 0
A cloud computing approach to superscale colored traveling salesman problems
Pub Date : 2024-09-11 DOI: 10.1007/s11227-024-06433-x
Zhicheng Lin, Jun Li, Yongcui Li

The colored traveling salesman problem (CTSP) generalizes the well-known multiple traveling salesman problem by utilizing colors to describe the accessibility of cities to individual salesmen. Many centralized algorithms have been developed to solve CTSP instances. This work presents a distributed solving framework and method for CTSP for the first time. The framework consists of multiple container-based computing nodes that rely on specific cloud infrastructures to perform distributed optimization in a pipeline style. In the framework, we develop a distributed Delaunay-triangulation-based variable neighborhood search (DDVNS) algorithm for solving a CTSP case decomposed into many traveling salesman problems. DDVNS exploits a two-stage initialization to generate an initial solution for all TSPs. After that, Delaunay-triangulation-based variable neighborhood search (DVNS) is employed to find local optima. Furthermore, the obtained solutions are improved by reallocating multicolor cities and iterating the search process, ultimately leading to a group of CTSP solutions. Finally, extensive experiments show that DDVNS outperforms the state-of-the-art centralized VNS algorithms in terms of search efficiency and solution quality. Notably, we achieve the best solution in a superscale case with 16 salesmen and 160,000 cities within 15 minutes, breaking the best known record for CTSPs.
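
As a rough illustration of why Delaunay triangulation helps, the sketch below restricts 2-opt candidate moves to Delaunay neighbors using SciPy; it is not the DDVNS implementation, and it ignores colors, multiple salesmen, and the distributed pipeline (the city coordinates are random toy data).

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_neighbors(points):
    """Map each city index to the set of its Delaunay-edge neighbors."""
    tri = Delaunay(points)
    nbrs = {i: set() for i in range(len(points))}
    for simplex in tri.simplices:
        for a in simplex:
            for b in simplex:
                if a != b:
                    nbrs[a].add(b)
    return nbrs

def two_opt_delaunay(points, tour):
    """One pass of 2-opt where candidate edges are limited to Delaunay neighbors (open tour)."""
    nbrs = delaunay_neighbors(points)
    pos = {city: idx for idx, city in enumerate(tour)}
    dist = lambda a, b: np.linalg.norm(points[a] - points[b])
    n = len(tour)
    for i in range(n - 1):
        a, b = tour[i], tour[i + 1]
        for c in nbrs[a]:
            j = pos[c]
            if j <= i + 1 or j >= n - 1:
                continue
            d = tour[j + 1]
            # Replace edges (a,b) and (c,d) by (a,c) and (b,d) if it shortens the tour.
            if dist(a, c) + dist(b, d) < dist(a, b) + dist(c, d):
                tour[i + 1:j + 1] = tour[i + 1:j + 1][::-1]
                pos = {city: idx for idx, city in enumerate(tour)}
                break
    return tour

# Toy usage with random cities (hypothetical data).
rng = np.random.default_rng(0)
pts = rng.random((50, 2))
print(two_opt_delaunay(pts, list(range(50)))[:10])
```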

Citations: 0
Approximating neural distinguishers using differential-linear imbalance
Pub Date : 2024-09-11 DOI: 10.1007/s11227-024-06375-4
Guangqiu Lv, Chenhui Jin, Zhen Shi, Ting Cui

At CRYPTO 2019, Gohr first proposed neural distinguishers (NDs) on SPECK32, which are superior to distinguishers based on the differential distribution table (DDT). Benamira et al. noted that NDs rely on the differential distribution of the last three rounds, and Bao et al. pointed out that NDs depend on the strong correlations between the bit values of ciphertext pairs satisfying the expected differential. Hence, one may guess that deep relations exist between NDs and differential-linear imbalances. To approximate NDs under a single ciphertext pair, we utilize differential-linear imbalances to construct simplified distinguishers. These newly constructed distinguishers offer distinguishing advantages comparable to those of NDs but with reduced time complexities. For instance, one such simplified distinguisher has only 2^{-1.35} of the original time complexity of NDs. Our experiments demonstrate that these new distinguishers achieve a matching rate of 98.2% for 5-round SPECK32 under a single ciphertext pair. Furthermore, we achieve the highest accuracies to date for 7-round and 8-round SPECK32 by using a maximum of 512 ciphertext pairs. Finally, by replacing NDs with simplified distinguishers, we significantly reduce the time complexities of differential-neural attacks on 11–14 rounds of SPECK32.
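
A minimal sketch of estimating a differential-linear imbalance empirically on reduced-round SPECK32 is shown below; it assumes independent random round keys (no key schedule) as a simplification, and the input difference and output mask are placeholders rather than the ones analysed in the paper.

```python
import random

MASK = 0xFFFF  # SPECK32 operates on 16-bit words

def ror(x, r):
    return ((x >> r) | (x << (16 - r))) & MASK

def rol(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def speck_rounds(x, y, round_keys):
    # SPECK32 round function (rotation constants 7 and 2).
    for k in round_keys:
        x = ((ror(x, 7) + y) & MASK) ^ k
        y = rol(y, 2) ^ x
    return x, y

def dl_imbalance(n_rounds, in_diff, out_mask, samples=1 << 15):
    """Empirical differential-linear imbalance |2*Pr[<mask, C1 xor C2> = 0] - 1|."""
    even = 0
    for _ in range(samples):
        keys = [random.getrandbits(16) for _ in range(n_rounds)]  # simplification: independent round keys
        x, y = random.getrandbits(16), random.getrandbits(16)
        c1 = speck_rounds(x, y, keys)
        c2 = speck_rounds(x ^ in_diff[0], y ^ in_diff[1], keys)
        v = ((c1[0] ^ c2[0]) & out_mask[0]) ^ ((c1[1] ^ c2[1]) & out_mask[1])
        even += 1 - (bin(v).count("1") & 1)   # parity of the masked ciphertext difference
    return abs(2 * even / samples - 1)

# Placeholder input difference and output mask (not those from the paper).
print(dl_imbalance(3, in_diff=(0x0040, 0x0000), out_mask=(0x8000, 0x8000)))
```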

Citations: 0
A container optimal matching deployment algorithm based on CN-Graph for mobile edge computing
Pub Date : 2024-09-09 DOI: 10.1007/s11227-024-06450-w
Huanle Rao, Sheng Chen, Yuxuan Du, Xiaobin Xu, Haodong Chen, Gangyong Jia

The deployment of increasingly diverse services on edge devices is becoming ever more prevalent. Efficiently deploying functionally heterogeneous services to resource-heterogeneous edge nodes while achieving a superior user experience is a challenge that every edge system must address. In this paper, we propose a container-node graph (CN-Graph)-based optimal matching deployment algorithm, the edge Kuhn-Munkres (EKM) algorithm, designed for heterogeneous environments to optimize system performance. Initially, containers are categorized by functional labels, followed by construction of a CN-Graph model based on the relationship between containers and nodes. Finally, the container deployment problem is transformed into a weighted bipartite graph optimal matching problem. In comparison with the mainstream container deployment algorithms Swarm and Kubernetes, and the recently emerged ECSched-dp algorithm, the EKM algorithm improves the average runtime performance of containers by factors of 3.74, 4.10, and 2.39, respectively.
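
Since the abstract reduces deployment to weighted bipartite optimal matching, the sketch below solves a toy instance with SciPy's Hungarian (Kuhn-Munkres) solver; the container labels, resource fields, and cost formula are hypothetical and do not reproduce the EKM cost model.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical containers and nodes (labels and resource demands/capacities).
containers = [{"label": "web", "cpu": 2}, {"label": "db", "cpu": 4}, {"label": "ml", "cpu": 8}]
nodes      = [{"labels": {"web", "db"}, "cpu": 8}, {"labels": {"ml"}, "cpu": 16}, {"labels": {"web"}, "cpu": 4}]

INFEASIBLE = 1e6  # large cost penalizes label/capacity mismatches

cost = np.zeros((len(containers), len(nodes)))
for i, c in enumerate(containers):
    for j, n in enumerate(nodes):
        if c["label"] not in n["labels"] or c["cpu"] > n["cpu"]:
            cost[i, j] = INFEASIBLE
        else:
            cost[i, j] = c["cpu"] / n["cpu"]   # prefer nodes with spare capacity

rows, cols = linear_sum_assignment(cost)       # Kuhn-Munkres / Hungarian algorithm
for i, j in zip(rows, cols):
    print(f"container {containers[i]['label']} -> node {j} (cost {cost[i, j]:.2f})")
```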

Citations: 0
Blockchain-based cross-domain authentication in a multi-domain Internet of drones environment
Pub Date : 2024-09-05 DOI: 10.1007/s11227-024-06447-5
Arivarasan Karmegam, Ashish Tomar, Sachin Tripathi

As a new paradigm, the Internet of drones (IoD) is making the future easy with its flexibility and wide range of applications. However, because of this flexibility, these drones are prone to security attacks during communication. The traditional authentication mechanism uses a centralized server, which is a single point of failure and a performance bottleneck for the network. Also, privacy-preserving mechanisms involving a single authority are vulnerable to identity attacks if compromised. Moreover, cross-domain authentication schemes become more costly as security requirements increase. So, this work proposes a blockchain-based cross-domain authentication scheme to make drone communication more secure and efficient. In this work, an elliptic curve digital signature algorithm (ECDSA)-based message authentication scheme and a session key generation scheme are modeled. A two-phase pseudonym generation procedure is used to secure the identity of the drones. Hyperledger Fabric is used to implement the blockchain network, and the analysis is done using Hyperledger Caliper. Blockchain analysis through Caliper shows the blockchain's performance under various transaction loads. Security analysis of the proposed scheme shows that it is secure against various security attacks. The performance analysis shows that the proposed scheme is more lightweight and efficient than most similar authentication schemes.
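
A minimal ECDSA sign/verify example with the Python `cryptography` package is shown below to illustrate the message-authentication building block; the pseudonym generation, session keys, and blockchain parts of the scheme are omitted, and the message content is made up.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Drone key pair (the scheme's two-phase pseudonym and session-key steps are omitted).
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

message = b"drone-42|telemetry|lat=12.97,lon=77.59"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# A verifier (e.g. a ground station) checks the signature with the drone's public key.
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature rejected")
```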

Citations: 0
A centralized delay-sensitive hierarchical computation offloading in fog radio access networks
Pub Date : 2024-09-05 DOI: 10.1007/s11227-024-06454-6
Samira Taheri, Neda Moghim, Naser Movahhedinia, Sachin Shetty

MEC (Multi-access Edge Computing) is vital in 5G and beyond (B5G) for reducing latency and enhancing network efficiency through local processing, crucial for real-time applications and improved security. This drives the adoption of advanced architectures like Fog Radio Access Network (F-RAN), which uses distributed resources from Radio Resource Heads (RRHs) or fog nodes to enable parallel computation. Each user equipment (UE) task can be processed by RRHs, fog access points, cloud servers, or the UE itself, depending on resource capacities. We propose MoNoR, a centralized approach for optimal task processing in F-RAN. MoNoR optimizes the selection of offloading modes, assignment of tasks to computation nodes, and allocation of radio resources using global network information. Given the computational complexity of this endeavor, we employ an evolutionary optimization technique rooted in Genetic Algorithms to address the problem efficiently. Simulations show MoNoR's superiority in minimizing latency over previous F-RAN offloading strategies.
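
To give a flavor of the Genetic Algorithm component, here is a toy sketch that evolves an assignment of tasks to execution sites (UE, fog, cloud) minimizing total latency; the latency table and GA parameters are hypothetical, and radio-resource allocation is ignored.

```python
import random

SITES = ["ue", "fog", "cloud"]
# Hypothetical per-task latency (ms) at each site; not MoNoR's cost model.
LATENCY = {"ue": [9, 14, 30, 22], "fog": [6, 7, 12, 10], "cloud": [11, 9, 8, 7]}
N_TASKS, POP, GENS = 4, 30, 80

def fitness(chrom):
    # Total latency of executing each task at its assigned site.
    return sum(LATENCY[site][t] for t, site in enumerate(chrom))

def evolve():
    pop = [[random.choice(SITES) for _ in range(N_TASKS)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness)
        survivors = pop[:POP // 2]                      # elitist selection
        children = []
        while len(children) < POP - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_TASKS)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                   # mutation
                child[random.randrange(N_TASKS)] = random.choice(SITES)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fitness)
    return best, fitness(best)

print(evolve())
```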

Citations: 0
Sensor node localization using nature-inspired algorithms with fuzzy logic in WSNs
Pub Date : 2024-09-04 DOI: 10.1007/s11227-024-06464-4
Shilpi, Arvind Kumar

The node localization problem of wireless sensor networks (WSNs) is addressed in this article with a node localization algorithm designed using fuzzy logic and a nature-inspired algorithm. The coordinates of target nodes are obtained using fuzzy logic reasoning and nature-inspired algorithms. The fuzzy logic concept is used to remove the nonlinearities that arise from signal strength measurement in the process of range estimation. The triangular and trapezoidal membership functions are used with the Mamdani fuzzy inference system to improve the distance estimates between sensor nodes. Further, particle swarm optimization (PSO) and the Jaya algorithm (JA) determine the target nodes' location coordinates. The proposed fuzzy logic-based PSO (FL-PSO) and fuzzy logic-based JA (FL-JA) algorithms are compared with PSO- and Jaya-based node localization algorithms in terms of localization error. The influence of the anchor nodes and the degree of irregularity on the FL-PSO and FL-JA node localization algorithms is verified during the localization analysis. The proposed FL-PSO and FL-JA node localization algorithms are evaluated for scalability, computation time, mean absolute deviation, and complexity to determine their efficacy. The simulations are carried out in MATLAB together with the fuzzy logic toolbox.
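
A bare-bones PSO sketch for estimating a node position from noisy anchor distances is shown below; the anchor layout and noise model are hypothetical, and the fuzzy distance-correction stage of FL-PSO is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([37.0, 62.0])
# Noisy range estimates (in a real WSN these come from RSSI; fuzzy correction omitted).
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 1.0, len(anchors))

def error(p):
    # Sum of squared differences between estimated and measured anchor distances.
    return np.sum((np.linalg.norm(anchors - p, axis=1) - ranges) ** 2)

# Standard PSO update: v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x).
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
x = rng.uniform(0, 100, (n, 2))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([error(p) for p in x])
gbest = pbest[pbest_val.argmin()]

for _ in range(100):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    vals = np.array([error(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()]

print("estimated position:", gbest, "true position:", true_pos)
```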

Citations: 0
An energy-temperature aware routing protocol in wireless body area network: a fuzzy-based approach
Pub Date : 2024-09-03 DOI: 10.1007/s11227-024-06458-2
Sedighe Hedayati, Payam Mahmoudi-Nasr, Sekine Asadi Amiri

The development of computer technology and wireless communication has introduced the wireless body area network (WBAN) to the world. In a WBAN, the patient's vital signs are monitored by small sensors embedded in the body. Sensor nodes work with a limited energy source, so energy consumption is a major issue in these networks. The increase in temperature caused by data transmissions can cause serious damage to body tissue. This paper proposes an Energy-Temperature Aware Routing (ETAR) protocol to solve this problem. In ETAR, routing is performed both directly and over multiple hops using relay nodes. Multi-hop data forwarding plays a significant role in reducing energy consumption. In the proposed method, relay nodes are selected using a fuzzy inference system. Energy, temperature, and distance parameters are defined as the inputs of the fuzzy system. Therefore, in each round, a node with more remaining energy, a lower temperature, and a smaller distance from its neighbors is selected as the relay node. The proposed protocol reduces the adverse effects of temperature on the body by setting temperature limits for sensors. The performance of ETAR was evaluated for homogeneous and heterogeneous networks. In a homogeneous network, this protocol reduces energy consumption by 44% and 55% compared to THE and EEMR, respectively. Network lifetime is enhanced by 46% and 55% compared to THE and EEMR. Throughput is improved by 40% compared to THE and 34% compared to EEMR. In a heterogeneous network, this protocol reduces energy consumption by 47% and 52% compared to THE and EEMR. Network lifetime is enhanced by 62% and 65% compared to THE and EEMR, respectively. Throughput is improved by 100% compared to THE and 97% compared to EEMR.
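
A simplified sketch of membership-based relay scoring is shown below; it uses triangular membership functions over residual energy, temperature, and distance with hypothetical ranges, and replaces the full Mamdani rule base and defuzzification with a min (fuzzy AND) aggregation.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def relay_score(node):
    # Membership in "high energy", "acceptable temperature", "close" (hypothetical ranges).
    high_energy = tri(node["energy"], 0.3, 1.0, 1.7)     # joules remaining
    ok_temp     = tri(node["temp"], 35.0, 37.0, 39.0)    # deg C, peak at body-normal
    close       = tri(node["dist"], -0.1, 0.0, 1.2)      # metres, closer is better
    # A full Mamdani rule base and defuzzification would go here; as a
    # simplification we aggregate the three memberships with min (fuzzy AND).
    return min(high_energy, ok_temp, close)

# Hypothetical neighbor table of a sensor node.
neighbors = [
    {"id": 1, "energy": 0.9, "temp": 36.8, "dist": 0.6},
    {"id": 2, "energy": 0.5, "temp": 38.2, "dist": 0.4},
    {"id": 3, "energy": 1.1, "temp": 37.1, "dist": 0.9},
]
relay = max(neighbors, key=relay_score)
print("selected relay:", relay["id"])
```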

Citations: 0