
Future Generation Computer Systems-The International Journal of Escience: Latest Publications

Uncertainty-aware scheduling for effective data collection from environmental IoT devices through LEO satellites
IF 6.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-12-07 | DOI: 10.1016/j.future.2024.107656
Haoran Xu , Xiaodao Chen , Xiaohui Huang , Geyong Min , Yunliang Chen
Low Earth Orbit (LEO) satellites have been widely used to collect sensing data from ground-based IoT devices. Comprehensive and timely collection of sensor data is a prerequisite for analysis, decision-making, and other tasks, ultimately enhancing services such as geological hazard monitoring and ecological environment monitoring. Many models and scheduling methods have been proposed to improve the efficiency of data collection, but they do not fully consider the practical scenario of collecting data from remote areas with limited ground network coverage, particularly the uncertainties in data transmission caused by complex environments. To cope with these challenges, this paper first presents a mathematical representation of the real-world scenario of data collection from geographically distributed IoT devices through LEO satellites, fully accounting for uncertainties in transmission rates. Then, a cross-entropy-based transmission scheduling method (CETSM) and an uncertainty-aware transmission scheduling method (UATSM) are proposed to increase the volume of collected data and mitigate the impact of uncertainty on the uplink transmission rate. CETSM achieves an average increase in total data collection ranging from 7.24% to 16.69% compared to five benchmark methods across eight scenarios. Moreover, UATSM performs well in the Monte Carlo-based evaluation module, achieving an average data collection completion rate of 96.1% while saving an average of 19.8% in energy costs, thereby striking a good balance between energy consumption and completion rate.
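To make the abstract's method concrete, here is a generic cross-entropy scheduling loop of the kind CETSM builds on: sample candidate schedules from a parameterized distribution, score them, keep an elite fraction, and shift the distribution toward the elites. All dimensions and the capacity-capped toy reward below are invented stand-ins, not the paper's model.

```python
import numpy as np

def cross_entropy_schedule(reward_fn, n_devices, n_slots,
                           n_samples=200, elite_frac=0.1, iters=50, smooth=0.7):
    """Cross-entropy search over binary device-to-slot schedules."""
    p = np.full((n_devices, n_slots), 0.5)            # Bernoulli sampling params
    n_elite = max(1, int(n_samples * elite_frac))
    for _ in range(iters):
        samples = (np.random.rand(n_samples, n_devices, n_slots) < p).astype(int)
        rewards = np.array([reward_fn(s) for s in samples])
        elites = samples[np.argsort(rewards)[-n_elite:]]      # best schedules
        p = smooth * p + (1 - smooth) * elites.mean(axis=0)   # move toward elites
    return (p > 0.5).astype(int)

# Toy reward: data volume collected under a per-slot uplink capacity, with a
# random rate matrix standing in for the uncertain transmission rates.
rates = np.random.uniform(0.5, 1.5, size=(8, 10))
def collected_volume(schedule):
    per_slot = (schedule * rates).sum(axis=0)
    return np.minimum(per_slot, 2.0).sum()            # capacity cap per slot

best_schedule = cross_entropy_schedule(collected_volume, n_devices=8, n_slots=10)
```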
Citations: 0
PSPL: A Ponzi scheme smart contracts detection approach via compressed sensing oversampling-based peephole LSTM
IF 6.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-12-07 | DOI: 10.1016/j.future.2024.107655
Lei Wang , Hao Cheng , Zihao Sun , Aolin Tian , Zhonglian Yang
Decentralized Finance (DeFi) applies the key principles of blockchain to improve the traditional financial system with greater freedom of trade. However, because decentralized finance protocols are implemented without access restrictions, effective regulatory measures are crucial to ensuring the healthy development of DeFi ecosystems. As a prominent DeFi platform, Ethereum has witnessed an increase in fraudulent activities, with Ponzi schemes causing significant user losses. As Ponzi-scheme fraud methods grow more sophisticated, existing detection techniques fail to identify Ponzi schemes in a timely manner. To mitigate the risk of investor deception, we propose PSPL, a compressed sensing oversampling-based peephole LSTM approach for detecting Ethereum Ponzi schemes. First, we identify the features of representative Ethereum Ponzi schemes by analyzing smart contract code and the temporal transaction information of user accounts, based on the popular XBlock dataset. Second, to address the class-imbalance and few-shot learning challenges, we leverage compressed sensing to oversample the Ponzi-scheme samples. Third, a peephole LSTM is employed to capture long-sequence variations in the fraud features of Ponzi schemes, accurately identifying hidden Ponzi schemes during the transaction process once fraudulent features are exposed. Finally, experimental results demonstrate the effectiveness and efficiency of PSPL.
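For readers unfamiliar with the peephole variant: unlike a standard LSTM, the gates also read the cell state directly, which helps the model track long-sequence timing patterns of the kind fraud features exhibit. A minimal NumPy sketch of one cell step follows; shapes and initialization are illustrative, not the PSPL architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class PeepholeLSTMCell:
    """One step of a peephole LSTM: gates peek at the cell state."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        init = lambda *shape: rng.normal(0.0, 0.1, shape)
        self.W = {g: init(n_hidden, n_in) for g in "ifco"}      # input weights
        self.U = {g: init(n_hidden, n_hidden) for g in "ifco"}  # recurrent weights
        self.P = {g: init(n_hidden) for g in "ifo"}             # peephole weights
        self.b = {g: np.zeros(n_hidden) for g in "ifco"}

    def step(self, x, h, c):
        pre = lambda g: self.W[g] @ x + self.U[g] @ h + self.b[g]
        i = sigmoid(pre("i") + self.P["i"] * c)      # input gate sees c_{t-1}
        f = sigmoid(pre("f") + self.P["f"] * c)      # forget gate sees c_{t-1}
        c_new = f * c + i * np.tanh(pre("c"))
        o = sigmoid(pre("o") + self.P["o"] * c_new)  # output gate sees c_t
        return o * np.tanh(c_new), c_new

cell = PeepholeLSTMCell(n_in=16, n_hidden=32)
h = c = np.zeros(32)
for x in np.random.randn(10, 16):  # e.g. a sequence of contract/transaction features
    h, c = cell.step(x, h, c)
```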
Citations: 0
AISAW: An adaptive interference-aware scheduling algorithm for acceleration of deep learning workloads training on distributed heterogeneous systems
IF 6.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-12-06 | DOI: 10.1016/j.future.2024.107642
Yushen Bi , Yupeng Xi , Chao Jing
Owing to the widespread application of artificial intelligence, deep learning (DL) has attracted considerable attention from both academia and industry. The DL workload-training process is a key step in determining the quality of DL-based applications. However, given the limited computational power of conventional centralized clusters, it is more beneficial to accelerate workload training by placing workloads in distributed heterogeneous systems. Unfortunately, current scheduling algorithms do not account for the varying capabilities of nodes and the limited network bandwidth, which leads to poor performance in distributed heterogeneous systems. To address this problem, we propose an adaptive interference-aware scheduling algorithm for accelerating DL workloads (called AISAW). We first establish a predictive model consisting of a job performance model and an interference-aware model to reduce the impact of job co-location. Subsequently, to improve system efficiency, we develop an adaptive priority-aware allocation scheme (APS) to find the optimal performance match when adaptively allocating DL jobs to computing nodes. In addition, under the constraint of network bandwidth, we devise a deadline-aware overhead-minimization dynamic migration scheme (DOMS) to avoid the high overhead caused by frequent job migration. Finally, we conducted experiments on real distributed heterogeneous systems deployed with several GPU-based servers. The results demonstrate that AISAW improves system efficiency, decreasing the makespan and average job completion time (JCT) by at least 23.86% and 13.02%, respectively, compared to state-of-the-art algorithms such as Gandiva, Tiresias, and MLF-H.
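The core move in any interference-aware scheduler is to score each job-node pairing by its predicted solo runtime inflated by a co-location interference factor, then place jobs greedily. The sketch below shows that skeleton with hypothetical callables and toy inputs; the paper's APS and DOMS components are considerably more elaborate.

```python
def assign_jobs(jobs, nodes, runtime, interference):
    """Greedy interference-aware placement.

    runtime(job, node) -> predicted solo runtime of job on node.
    interference(job, co_located) -> slowdown factor >= 1 from sharing a node.
    """
    placement, load = {}, {n: [] for n in nodes}
    # Place the longest jobs first so they grab the best nodes.
    for job in sorted(jobs, key=lambda j: min(runtime(j, n) for n in nodes),
                      reverse=True):
        best = min(nodes, key=lambda n: runtime(job, n) * interference(job, load[n]))
        placement[job] = best
        load[best].append(job)
    return placement

# Toy inputs: two GPU nodes, three training jobs, 15% slowdown per neighbour.
nodes = ["gpu-a", "gpu-b"]
jobs = ["resnet", "bert", "vgg"]
base = {("resnet", "gpu-a"): 10, ("resnet", "gpu-b"): 14,
        ("bert", "gpu-a"): 20, ("bert", "gpu-b"): 16,
        ("vgg", "gpu-a"): 8, ("vgg", "gpu-b"): 9}
runtime = lambda j, n: base[(j, n)]
interference = lambda j, co_located: 1.0 + 0.15 * len(co_located)
print(assign_jobs(jobs, nodes, runtime, interference))
```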
Citations: 0
Automated generation of deployment descriptors for managing microservices-based applications in the cloud to edge continuum
IF 6.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-12-05 | DOI: 10.1016/j.future.2024.107628
James DesLauriers , Jozsef Kovacs , Tamas Kiss , André Stork , Sebastian Pena Serna , Amjad Ullah
With the emergence of Internet of Things (IoT) devices collecting large amounts of data at the edges of the network, a new generation of hyper-distributed applications is emerging, spanning cloud, fog, and edge computing resources. The automated deployment and management of such applications requires orchestration tools that take a deployment descriptor (e.g. Kubernetes manifest, Helm chart or TOSCA) as input, and deploy and manage the execution of applications at run-time. While most deployment descriptors are prepared by a single person or organisation at one specific time, there are notable scenarios where such descriptors need to be created collaboratively by different roles or organisations, and at different times of the application’s life cycle. An example of this scenario is the modular development of digital twins, composed of the basic building blocks of data, model and algorithm. Each of these building blocks can be created independently from each other, by different individuals or companies, at different times. The challenge here is to compose and build a deployment descriptor from these individual components automatically. This paper presents a novel solution to automate the collaborative composition and generation of deployment descriptors for distributed applications within the cloud-to-edge continuum. The implemented solution has been prototyped in over 25 industrial use cases within the DIGITbrain project, one of which is described in the paper as a representative example.
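The composition problem can be pictured as merging independently authored fragments into one descriptor. Below is a toy Python/PyYAML sketch that stitches hypothetical data, model, and algorithm building blocks into a Kubernetes Deployment manifest; all images, URIs, and field choices are invented, and the DIGITbrain tooling itself targets richer descriptor formats such as TOSCA.

```python
import yaml  # PyYAML

# Hypothetical fragments, each authored by a different party at a different time.
algorithm = {"image": "registry.example.com/twin-solver:1.2", "ports": [8080]}
model = {"env": [{"name": "MODEL_URI", "value": "s3://twins/turbine.onnx"}]}
data = {"env": [{"name": "DATA_URI", "value": "s3://twins/sensor-feed"}]}

def compose_deployment(name, algorithm, model, data):
    """Merge data/model/algorithm fragments into one Deployment manifest."""
    container = {
        "name": name,
        "image": algorithm["image"],
        "ports": [{"containerPort": p} for p in algorithm.get("ports", [])],
        "env": model.get("env", []) + data.get("env", []),
    }
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [container]},
            },
        },
    }

print(yaml.safe_dump(compose_deployment("digital-twin", algorithm, model, data)))
```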
Citations: 0
A sampling-based acceleration method for heterogeneous chiplet NoC simulations
IF 6.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-12-04 | DOI: 10.1016/j.future.2024.107643
Ruoting Xiong , Wei Ren , Chengzhuo Zhang , Tao Li , Geyong Min
To tackle the challenges posed by Moore's Law, chiplet technology emerges as a promising solution. Chiplets comprising CPUs and accelerators are connected by Networks-on-Chip (NoC) for large-scale integration and efficient communication. However, the slow simulation speed of NoCs has become a bottleneck that limits the overall performance of chiplet simulations. Existing solutions focus only on accelerating NoC simulation in homogeneous architectures. In this paper, we introduce a novel TOPSIS-based Heterogeneous Trace Score-sampling method (THTS) for faster NoC simulation in heterogeneous architectures. THTS enables quick and accurate sampling of representative NoC traces. Additionally, we propose a weight exploration model to further enhance sampling accuracy. Compared with the traditional NoC sampling method (NoCLabs), THTS reduces the error of the average packet latency by 22.17% and the total simulation time by a factor of 1.6. THTS estimates NoC performance with an average loss of less than 5% while speeding up NoC simulation by up to 3 times. In addition, across different weight-space sizes, the weight exploration model solves for the optimal weight vector within seconds, remarkably speeding up the solution process. Notably, the predicted NoC simulation error under the optimal weight is only 1.42%.
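TOPSIS itself is a standard multi-criteria ranking: normalize the criteria matrix, apply weights, and score each alternative by its closeness to the ideal solution. A compact version applied to hypothetical per-trace statistics is shown below; the criteria, weights, and numbers are made up, and THTS layers heterogeneity-aware trace scoring and weight exploration on top of this primitive.

```python
import numpy as np

def topsis_scores(X, weights, benefit):
    """Rank alternatives (rows of X) by closeness to the ideal solution.

    weights: per-criterion importance, summing to 1.
    benefit: True where larger values are better, False where smaller is better.
    """
    V = (X / np.linalg.norm(X, axis=0)) * weights       # normalise, then weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)                 # closeness in [0, 1]

# Hypothetical traces described by (injection rate, avg hop count, packet bytes).
X = np.array([[0.8, 3, 64], [0.5, 5, 128], [0.9, 2, 256]], dtype=float)
scores = topsis_scores(X, weights=np.array([0.5, 0.2, 0.3]),
                       benefit=np.array([True, False, True]))
representative_first = np.argsort(scores)[::-1]         # sample top-scoring traces
```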
Citations: 0
IoVST: An anomaly detection method for IoV based on spatiotemporal feature fusion
IF 6.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-12-04 | DOI: 10.1016/j.future.2024.107636
Jinhui Cao , Xiaoqiang Di , Jinqing Li , Keping Yu , Liang Zhao
In the Internet of Vehicles (IoV) based on Cellular Vehicle-to-Everything (C-V2X) wireless communication, vehicles inform surrounding vehicles and infrastructure of their status by broadcasting basic safety messages, enhancing traffic management capabilities. Since anomalous vehicles can broadcast false traffic messages, anomaly detection is crucial for IoV. State-of-the-art methods typically utilize deep detection models to capture the internal spatial features of each message and the timing relationships of all messages in a sequence. However, because existing work neglects the local spatiotemporal relationship between messages broadcast by the same vehicle, the spatiotemporal features of message sequences are not accurately described and extracted, resulting in inaccurate anomaly detection. To tackle these issues, a message attribute graph model (MAGM) is proposed, which accurately describes the spatiotemporal relationships of messages in a sequence using attribute graphs, including the internal spatial features of messages, the temporal order of all messages, and the temporal order of messages from the same vehicle. Furthermore, an anomaly detection method for IoV based on spatiotemporal feature fusion (IoVST) is proposed to detect anomalies accurately. IoVST aggregates the local spatiotemporal features of the MAGM based on a Transformer and extracts the global spatiotemporal features of message sequences through global time encoding and the self-attention mechanism. We conducted experimental evaluations on the VeReMi extension dataset. The F1 score and accuracy of IoVST are 1.68% and 1.92% higher than those of the best baseline method, and every message can be checked in 0.7185 ms. In addition, the average accuracy of IoVST on four publicly available network intrusion detection datasets is 7.77% higher than the best baseline method, showing that our method also applies well to other networks such as traditional IT networks, the Internet of Things, and industrial control networks.
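One common way to realize global time encoding plus self-attention over a message sequence is to add sinusoidal features of each message's absolute timestamp and run scaled dot-product attention over the result. The NumPy sketch below illustrates that pattern; dimensions, scale, and timestamps are illustrative, and the paper's exact encoding may differ.

```python
import numpy as np

def time_encoding(timestamps, d_model=32, scale=10000.0):
    """Sinusoidal features of absolute message timestamps."""
    t = np.asarray(timestamps, dtype=float)[:, None]        # (n, 1)
    i = np.arange(d_model // 2)[None, :]                    # (1, d_model/2)
    angles = t / scale ** (2 * i / d_model)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

def self_attention(X):
    """Single-head scaled dot-product attention over the sequence."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))  # row-wise softmax
    w /= w.sum(axis=1, keepdims=True)
    return w @ X

# Five message embeddings, shifted by encodings of their broadcast times.
msgs = np.random.randn(5, 32) + time_encoding([0.0, 0.1, 0.25, 0.4, 0.7])
context = self_attention(msgs)  # each message attends to the whole sequence
```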
Citations: 0
Secure integration of 5G in industrial networks: State of the art, challenges and opportunities
IF 6.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-12-04 | DOI: 10.1016/j.future.2024.107645
Sotiris Michaelides , Stefan Lenz , Thomas Vogt , Martin Henze
The industrial landscape is undergoing a significant transformation, moving away from traditional wired fieldbus networks to cutting-edge 5G mobile networks. This transition, extending from local applications to company-wide use and spanning multiple factories, is driven by the promise of low-latency communication and seamless connectivity for various devices in industrial settings. However, besides these tremendous benefits, the integration of 5G as the communication infrastructure in industrial networks introduces a new set of risks and threats to the security of industrial systems. The inherent complexity of 5G systems poses unique challenges for ensuring a secure integration, surpassing those encountered with any technology previously utilized in industrial networks. Most importantly, the distinct characteristics of industrial networks, such as real-time operation, required safety guarantees, and high availability requirements, further complicate this task. As the industrial transition from wired to wireless networks is a relatively new concept, a lack of guidance and recommendations on securely integrating 5G renders many industrial systems vulnerable and exposed to threats associated with 5G. To address this situation, in this paper, we summarize the state-of-the-art and derive a set of recommendations for the secure integration of 5G into industrial networks based on a thorough analysis of the research landscape. Furthermore, we identify opportunities to utilize 5G to enhance security and indicate remaining challenges, identifying future academic directions.
Citations: 0
Advancing anomaly detection in computational workflows with active learning
IF 6.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-12-04 | DOI: 10.1016/j.future.2024.107608
Krishnan Raghavan , George Papadimitriou , Hongwei Jin , Anirban Mandal , Mariam Kiran , Prasanna Balaprakash , Ewa Deelman
A computational workflow, or simply workflow, consists of tasks executed in a certain order to carry out a specific computational campaign. Computational workflows are commonly employed in science domains such as physics, chemistry, and genomics to complete large-scale experiments in distributed and heterogeneous computing environments. However, running computations at such a large scale makes workflow applications prone to failures and performance degradation, which can slow down, stall, and ultimately cause the workflow to fail. Learning how these workflows behave under normal and anomalous conditions can help us identify the causes of degraded performance and subsequently trigger appropriate actions to resolve them. However, learning in such circumstances is challenging because of the large volume of high-quality historical data needed to train accurate and reliable models. Generating such datasets takes considerable time and effort and requires substantial resources to be devoted to data generation for training purposes. Active learning is a promising approach to this problem: data is generated as required by the machine learning model, which can reduce the amount of training data needed to derive accurate models. In this work, we present an active learning approach supported by an experimental framework, Poseidon-X, that utilizes a modern workflow management system and two cloud testbeds. We evaluate our approach using three computational workflows. For one workflow we run an end-to-end live active learning experiment; for the other two we evaluate our active learning algorithms using pre-captured data traces provided by the Flow-Bench benchmark. Our findings indicate that active learning not only saves resources but also improves the accuracy of anomaly detection.
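The essence of the approach is pool-based uncertainty sampling: train on a small labeled set, then spend the labeling (or data-generation) budget only where the current model is least confident. A generic scikit-learn loop is sketched below; it illustrates the principle, not the Poseidon-X framework itself.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def active_learning_loop(X_pool, y_oracle, n_init=20, n_query=10, rounds=5):
    """Query the oracle (e.g. execute and label a workflow run) only for the
    most uncertain samples instead of labelling the entire pool."""
    rng = np.random.default_rng(0)
    # Assumes the initial random draw happens to cover both classes.
    labeled = list(rng.choice(len(X_pool), size=n_init, replace=False))
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    for _ in range(rounds):
        model.fit(X_pool[labeled], y_oracle[labeled])
        uncertainty = 1.0 - model.predict_proba(X_pool).max(axis=1)
        uncertainty[labeled] = -np.inf                  # never re-query a label
        labeled += list(np.argsort(uncertainty)[-n_query:])
    return model, labeled
```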
{"title":"Advancing anomaly detection in computational workflows with active learning","authors":"Krishnan Raghavan ,&nbsp;George Papadimitriou ,&nbsp;Hongwei Jin ,&nbsp;Anirban Mandal ,&nbsp;Mariam Kiran ,&nbsp;Prasanna Balaprakash ,&nbsp;Ewa Deelman","doi":"10.1016/j.future.2024.107608","DOIUrl":"10.1016/j.future.2024.107608","url":null,"abstract":"<div><div>A computational workflow, also known as workflow, consists of tasks that are executed in a certain order to attain a specific computational campaign. Computational workflows are commonly employed in science domains, such as physics, chemistry, genomics, to complete large-scale experiments in distributed and heterogeneous computing environments. However, running computations at such a large scale makes the workflow applications prone to failures and performance degradation, which can slowdown, stall, and ultimately lead to workflow failure. Learning how these workflows behave under normal and anomalous conditions can help us identify the causes of degraded performance and subsequently trigger appropriate actions to resolve them. However, learning in such circumstances is a challenging task because of the large volume of high-quality historical data needed to train accurate and reliable models. Generating such datasets not only takes a lot of time and effort but it also requires a lot of resources to be devoted to data generation for training purposes. Active learning is a promising approach to this problem. It is an approach where the data is generated as required by the machine learning model and thus it can potentially reduce the training data needed to derive accurate models. In this work, we present an active learning approach that is supported by an experimental framework, Poseidon-X, that utilizes a modern workflow management system and two cloud testbeds. We evaluate our approach using three computational workflows. For one workflow we run an end-to-end live active learning experiment, for the other two we evaluate our active learning algorithms using pre-captured data traces provided by the Flow-Bench benchmark. Our findings indicate that active learning not only saves resources, but it also improves the accuracy of the detection of anomalies.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"166 ","pages":"Article 107608"},"PeriodicalIF":6.2,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143167013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Efficient distributed matrix for resolving computational intensity in remote sensing
IF 6.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-12-03 | DOI: 10.1016/j.future.2024.107644
Weitao Zou , Wei Li , Zeyu Wang , Jiaming Pei , Tongtong Lou , Guangsheng Chen , Weipeng Jing , Albert Y. Zomaya
Remote sensing analysis is a dominant yet time-consuming part of geospatial applications. Performance can be optimized through distributed computing, but current systems still face significant challenges. Firstly, the spatial characteristics of remote sensing data lead to an uneven distribution of computational intensity (CIT), which characterizes computing loads, including computation and IO, across different spatial domains. Secondly, it is hard to achieve load balancing without introducing new computational costs that increase the CIT and reduce overall performance. Therefore, resolving CIT by decreasing and balancing it is an important research issue for distributed remote sensing computing. This paper proposes LBM-RS, an efficient distributed framework based on load-balancing matrix computing for remote sensing. It implements remote sensing applications on a distributed matrix, representing algorithms as matrix computations and constructing multi-dimensional spatial domains to model the computational costs of matrix operation tasks. It resolves the CIT with a minimum-computation-load and dynamic spatial domain decomposition strategy to support global load balancing. We also improve IO efficiency through a task staging strategy and a cache-aware memory structure for remote sensing data, reducing the bandwidth burden and memory access frequency and thus decreasing the overall CIT. Finally, we evaluate the proposed approach on both real and synthetic datasets, and the results demonstrate significant advantages in computation and communication efficiency compared to the benchmarks.
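At its core, resolving CIT is a load-balancing problem over spatial tiles. The sketch below applies the classic longest-processing-time greedy heuristic to a hypothetical per-tile intensity map; it is a simple stand-in for the paper's dynamic spatial domain decomposition.

```python
import heapq
import numpy as np

def balanced_decomposition(intensity, n_workers):
    """Assign spatial tiles to workers so per-worker intensity stays even.

    intensity: (rows, cols) array of estimated cost (compute + IO) per tile.
    """
    tiles = sorted((-cost, idx) for idx, cost in np.ndenumerate(intensity))
    heap = [(0.0, w) for w in range(n_workers)]        # (current load, worker)
    heapq.heapify(heap)
    assignment = {}
    for neg_cost, idx in tiles:                        # heaviest tiles first
        load, w = heapq.heappop(heap)                  # least-loaded worker
        assignment[idx] = w
        heapq.heappush(heap, (load - neg_cost, w))
    return assignment

cit_map = np.random.gamma(2.0, 2.0, size=(4, 4))       # uneven intensity map
plan = balanced_decomposition(cit_map, n_workers=3)
```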
Citations: 0
Flexible hybrid post-quantum bidirectional multi-factor authentication and key agreement framework using ECC and KEM
IF 6.2 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-11-30 | DOI: 10.1016/j.future.2024.107634
A. Braeken
Quantum computing will become a real threat in the coming years, leaving security protocols that rely on traditional public-key algorithms vulnerable. Protecting against it in a cost-efficient manner is not straightforward, especially for Internet of Things (IoT) devices with limited capabilities. IoT applications vary widely: some require only short-term security (e.g. agriculture) and others long-term security (e.g. healthcare). To provide a unified security approach for such heterogeneity in IoT, we propose a flexible hybrid authentication and key agreement framework for a client–server architecture that relies both on classical elliptic curve cryptography (ECC) and on a quantum-secure key encapsulation mechanism (KEM). Five versions can be derived from the framework, ranging from fully hybrid and partially hybrid to classical constructions. The trade-off between performance and security strength is demonstrated for each of these versions. The overall cost of the protocols is greatly reduced by using multiple factors in the authentication process: biometrics on the user side and physically unclonable functions (PUFs) on the device side. We show that both Kyber and McEliece as the KEM can offer reasonable performance, depending on the situation. The unified framework offers optimal security protection against the most well-known attacks.
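The hybrid principle is straightforward: derive the session key from both a classical ECDH secret and a post-quantum KEM secret, so that breaking either primitive alone does not expose the key. A minimal sketch using the Python cryptography package for X25519 and HKDF follows; the KEM is passed in as a callable because Kyber/McEliece bindings vary by library (e.g. liboqs), and the real framework adds multi-factor authentication on top.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def hybrid_session_key(peer_x25519_public_key, kem_encapsulate):
    """Combine a classical and a post-quantum shared secret into one key.

    kem_encapsulate() -> (ciphertext, shared_secret) is a placeholder for a
    quantum-secure KEM such as Kyber; no specific binding is assumed here.
    """
    ecdh_private = X25519PrivateKey.generate()
    ss_classical = ecdh_private.exchange(peer_x25519_public_key)
    ciphertext, ss_postquantum = kem_encapsulate()
    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"hybrid-ecc-kem-v1").derive(ss_classical + ss_postquantum)
    # Send our ECDH public key and the KEM ciphertext to the peer, who derives
    # the same key from its ECDH private key and KEM decapsulation.
    return session_key, ecdh_private.public_key(), ciphertext
```

Dropping either secret from the KDF input recovers a purely classical or purely post-quantum variant, which mirrors the flexibility of the framework's five versions described above.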
Citations: 0