
Future Generation Computer Systems-The International Journal of Escience: Latest Publications

RoWD: Automated rogue workload detector for HPC security
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-01-22 | DOI: 10.1016/j.future.2026.108392
Francesco Antici , Jens Domke , Andrea Bartolini , Zeynep Kiziltan , Satoshi Matsuoka
The increasing reliance on High-Performance Computing (HPC) systems to execute complex scientific and industrial workloads raises significant security concerns related to the misuse of HPC resources for unauthorized or malicious activities. Rogue job executions can threaten the integrity, confidentiality, and availability of HPC infrastructures. Given the scale and heterogeneity of HPC job submissions, manual or ad hoc monitoring is inadequate to effectively detect such misuse. Therefore, automated solutions capable of systematically analyzing job submissions are essential to detect rogue workloads. To address this challenge, we present RoWD (Rogue Workload Detector), the first framework for automated and systematic security screening of the HPC job-submission pipeline. RoWD is composed of modular plug-ins that classify different types of workloads and enable the detection of rogue jobs through the analysis of job scripts and associated metadata. We deploy RoWD on the Supercomputer Fugaku to classify AI workloads and release SCRIPT-AI, the first dataset of annotated job scripts labeled with workload characteristics. We evaluate RoWD on approximately 50K previously unseen jobs executed on Fugaku between 2021 and 2025. Our results show that RoWD accurately classifies AI jobs (achieving an F1 score of 95%), is robust against adversarial behavior, and incurs low runtime overhead, making it suitable for strengthening the security of HPC environments and for real-time deployment in production systems.
Citations: 0
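The screening idea described in the abstract can be illustrated with a deliberately simplified sketch. This is not the RoWD plug-in pipeline: the marker list and function name are hypothetical, and the paper's classifier also uses job metadata, not just script text.

```python
import re

# Toy keyword screen, NOT the RoWD implementation: flag a batch-job script
# as an AI workload when it references common ML frameworks or GPU tooling.
# The marker list and function name are hypothetical.
AI_MARKERS = re.compile(
    r"\b(torch|pytorch|tensorflow|keras|horovod|deepspeed|cuda)\b",
    re.IGNORECASE,
)

def classify_job_script(script_text: str) -> str:
    """Return 'ai' if the script matches any AI marker, else 'other'."""
    return "ai" if AI_MARKERS.search(script_text) else "other"

script = """#!/bin/bash
#SBATCH --nodes=4
module load cuda
python train.py --framework torch
"""
print(classify_job_script(script))  # -> ai
```

A real detector would combine such plug-ins with resource-usage metadata and be hardened against obfuscation, which is what the paper's adversarial-robustness evaluation targets.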
Quantum-resistant blockchain architecture for secure vehicular networks: A ML-KEM-enabled approach with PoA and PoP consensus
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-01-22 | DOI: 10.1016/j.future.2026.108391
Muhammad Asim , Wu Junsheng , Li Weigang , Lin Zhijun , Zhang Peng , He Hao , Wei Dong , Ghulam Mohi-ud-Din
The increasing interconnectivity within modern transportation ecosystems, a cornerstone of Intelligent Transportation Systems (ITS), creates critical vulnerabilities, demanding stronger security measures to prevent unauthorized access to vehicles and private data. Existing blockchain implementations for Vehicular Ad Hoc Networks (VANETs) are fundamentally flawed, exhibiting inefficiency with traditional consensus mechanisms, vulnerability to quantum attacks, or often both. To overcome these critical limitations, this study introduces a novel Quantum-Resistant Blockchain Architecture. The core objectives are to achieve highly efficient vehicular data storage, ensure robust confidentiality through post-quantum cryptography, and automate secure transactions. The proposed methodology employs a dual-blockchain structure: a Registration Blockchain (RBC) using Proof-of-Authority (PoA) for secure identity management, and a Message Blockchain (MBC) using Proof-of-Position (PoP) for low-latency message dissemination. A key innovation is the integration of smart contracts with the NIST-approved Module Lattice-Based Key Encapsulation Mechanism (ML-KEM) to automate and secure all processes. The framework is rigorously evaluated using a realistic 5G-VANET Multi-access Edge Computing (MEC) dataset, which includes key parameters like vehicle ID, speed, and location. The results are compelling, demonstrating an Average Block Processing Time of 0.0326 s and a Transactional Throughput of 30.64 TPS, significantly outperforming RSA and AES benchmarks. This research’s primary contribution is a comprehensive framework that substantially improves data security and scalability while future-proofing VANETs against the imminent and evolving threat of quantum computing.
Citations: 0
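The two performance figures quoted in the abstract are mutually consistent, which a one-line check makes explicit (assuming, as a simplification, that throughput is dominated by block processing time):

```python
# Sanity check relating two figures reported in the abstract: an average
# block processing time of 0.0326 s implies roughly the reported 30.64 TPS
# (simplifying assumption: one transaction-bearing block processed at a time).
avg_block_time_s = 0.0326
implied_tps = 1.0 / avg_block_time_s
print(round(implied_tps, 2))  # -> 30.67
```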
A message-driven system for processing highly skewed graphs
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-01-22 | DOI: 10.1016/j.future.2026.108394
Bibrak Qamar Chandio, Maciej Brodowicz, Thomas Sterling
The paper provides a unified co-design of: 1) a non-Von Neumann architecture for fine-grain irregular memory computations, 2) a programming and execution model that allows spawning tasks from within the graph vertex data at runtime, 3) language constructs for actions that send work to where the data resides, combining parallel expressiveness of local control objects (LCOs) to implement asynchronous graph processing primitives, 4) and an innovative vertex-centric data-structure, using the concept of Rhizomes, that parallelizes both the out and in-degree load of vertex objects across many cores and yet provides a single programming abstraction to the vertex objects. The data structure hierarchically parallelizes the out-degree load of vertices and the in-degree load laterally. The rhizomes internally communicate and remain consistent, using event-driven synchronization mechanisms, to provide a unified and correct view of the vertex.
Simulated experimental results show performance gains for BFS, SSSP, and PageRank on large chip sizes for the tested input graph datasets containing highly skewed degree distributions. The improvements come from the ability to express and create fine-grain dynamic computing tasks in the form of actions, language constructs that aid the compiler in generating code that the runtime system uses to optimally schedule tasks, and the data structure that shares both in- and out-degree compute workload among memory-processing elements.
Citations: 0
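The "send work to where the data resides" execution style can be sketched, in a purely sequential toy, as a BFS driven by a message queue rather than a frontier loop; `message_driven_bfs` is an illustrative name, not an API from the paper, and a real system would run these vertex actions concurrently across cores.

```python
from collections import deque

# Sequential toy of message-driven BFS: work travels as (vertex, depth)
# messages to the vertex that owns the data, and each vertex "action"
# may spawn further messages.
def message_driven_bfs(adj, source):
    depth = {source: 0}
    inbox = deque([(source, 0)])       # pending messages (actions to run)
    while inbox:
        v, d = inbox.popleft()
        for w in adj.get(v, ()):       # the action at v sends to its neighbors
            if w not in depth:
                depth[w] = d + 1
                inbox.append((w, d + 1))
    return depth

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(message_driven_bfs(adj, 0))  # -> {0: 0, 1: 1, 2: 1, 3: 2}
```

The rhizome data structure in the paper additionally splits a high-degree vertex's inbox across memory-processing elements, so no single core absorbs the full in-degree load.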
Reliability analysis of hardware accelerators for decision tree-based classifier systems
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-01-20 | DOI: 10.1016/j.future.2026.108378
Mario Barbareschi , Salvatore Barone , Alberto Bosio , Antonio Emmanuele
The increasing adoption of AI models has driven applications toward the use of hardware accelerators to meet high computational demands and strict performance requirements. Beyond consideration of performance and energy efficiency, explainability and reliability have emerged as pivotal requirements, particularly for critical applications such as automotive, medical, and aerospace systems. Among the various AI models, Decision Tree Ensembles (DTEs) are particularly notable for their high accuracy and explainability. Moreover, they are particularly well-suited for hardware implementations, enabling high-performance and improved energy efficiency. However, a frequently overlooked aspect of DTEs is their reliability in the presence of hardware malfunctions. While DTEs are generally regarded as robust by design, due to their redundancy and voting mechanisms, hardware faults can still have catastrophic consequences. To address this gap, we present an in-depth reliability analysis of two types of DTE hardware accelerators: classical and approximate implementations. Specifically, we conduct a comprehensive fault injection campaign, varying the number of trees involved in the classification task, the approximation technique used, and the tolerated accuracy loss, while evaluating several benchmark datasets. The results of this study demonstrate that approximation techniques have to be carefully designed, as they can significantly impact resilience. However, techniques that target the representation of features and thresholds appear to be better suited for fault tolerance.
Citations: 0
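A minimal fault-injection experiment in the spirit of the campaign described above: flip one bit in a stump threshold of a tiny voting ensemble and count changed predictions. The ensemble, probe inputs, and injected bit position are invented for illustration and are far simpler than the paper's accelerators.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit (0-63) in the IEEE-754 double representation of x."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return y

def ensemble_predict(thresholds, x):
    """Majority vote of decision stumps: each stump votes 1 if x >= its threshold."""
    votes = sum(1 for t in thresholds if x >= t)
    return 1 if 2 * votes > len(thresholds) else 0

# Hypothetical three-stump ensemble and probe inputs (not from the paper).
thresholds = [0.4, 0.5, 0.6]
samples = [0.1, 0.45, 0.55, 0.9]
clean = [ensemble_predict(thresholds, x) for x in samples]

# Inject a single upset into one stump's threshold: flip exponent bit 62.
faulty = thresholds.copy()
faulty[1] = flip_bit(faulty[1], 62)
corrupted = [ensemble_predict(faulty, x) for x in samples]
mismatches = sum(c != f for c, f in zip(clean, corrupted))
print(mismatches)  # -> 1: voting masks some, but not all, single faults
```

Even this toy shows the paper's point: the voting redundancy absorbs the fault for most inputs, yet a high-order exponent flip still corrupts predictions near the damaged threshold.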
GraalDoss: Direct object snapshotting and sharing for cloud-native applications
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-01-20 | DOI: 10.1016/j.future.2026.108375
Ivan Ristović , Vojin Jovanović , Peter Hofer , Milena Vujošević Janičić
Modern cloud-computing providers operate on a pay-as-you-use billing model, with computing power and memory being the most important and expensive resources. Due to resource costs, cloud-native applications should start fast while minimizing startup time and memory footprint across multiple application instances. However, modern workloads consist of large amounts of data, often requiring initialization, which introduces repeated CPU work across application instances. Current cloud-native solutions that pre-initialize application code and data operate at application-build time to enable sharing during execution. However, these solutions do not consider data that becomes available or can only be initialized during application execution.
We present Doss, a direct object snapshotting and sharing mechanism for cloud-native applications. Doss snapshots the state of the object graph directly from the executing language-runtime heap. This allows Doss to achieve constant deserialization overhead with memory mappings. Doss shares warmed-up data snapshots across compatible language-runtime instances, reducing the memory overhead of the system, and avoiding cold starts. We implement GraalDoss in Java as part of GraalVM. GraalDoss maintains a constant data-cache memory overhead across multiple application instances, eliminating costly data initialization. In microservice applications, GraalDoss reduces the total memory footprint by 44% for 8 microservice instances and improves first-response times by 51%. In natural language processing applications, GraalDoss improves total execution times by several orders of magnitude.
Citations: 0
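The snapshot-and-share idea can be approximated in a toy form with a memory-mapped file. Unlike GraalDoss, which maps object graphs directly from the language-runtime heap and avoids deserialization, this sketch still pays a `pickle` decode per instance; only the raw snapshot bytes are shared through the OS page cache. All names below are illustrative.

```python
import mmap
import os
import pickle
import tempfile

# Toy snapshot-and-share sketch (not GraalDoss): serialize a warmed-up
# object graph once, then let each "instance" map the same read-only file.
warmed_up = {"model": [1, 2, 3], "vocab": {"a": 0, "b": 1}}

path = os.path.join(tempfile.mkdtemp(), "snapshot.bin")
with open(path, "wb") as f:
    f.write(pickle.dumps(warmed_up))

def load_instance(path: str):
    """Map the snapshot read-only and rebuild the object graph from it."""
    with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        return pickle.loads(m[:])

a = load_instance(path)
b = load_instance(path)
print(a == b == warmed_up)  # -> True
```

The gap between this toy and the paper is exactly its contribution: with direct object snapshotting, the mapped bytes are the live objects, so the per-instance decode (and its memory copy) disappears.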
SeHP-CSQ: A secure, high-performance cross-shard queuing model
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-01-20 | DOI: 10.1016/j.future.2026.108376
Hui Dai , Lingyun Yuan , Haochen Bao , Han Chen
Blockchain sharding parallelises processing to boost throughput. Cross-shard transactions’ low transmission efficiency and security risks limit system scalability. We propose a secure cross-shard high-performance processing queuing model. First, we model hybrid multi-distribution batch arrival-processing and accurately depict transaction arrival and processing dynamics. Second, we construct a cross-shard transaction processing queuing model based on M/M/1/N queuing, along with a metric system for key performance indicators. Modifying the queue capacity regulates batch control of cross-shard transactions directed at the target shard, thereby improving robustness and scalability. Third, we design a dynamic adaptive malicious transaction analysis bound, which derives an upper bound on the real-time tail probability via Chernoff’s inequality and Hoeffding’s inequality, and prove that the analysis bound converges at an exponential rate under any shard size, thus effectively limiting the impact of malicious behaviours on the security of the shard system. Experimental results show that the proposed queuing model can reach a maximum throughput of about 8.0 × 10⁴ TPS and achieve load balancing in high concurrency scenarios. The queuing waiting time is less than 0.5 ms, with the overload probability and the system failure probability converging to 0%, which verifies that the model has adequate security while ensuring high processing efficiency.
Citations: 0
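The M/M/1/N model the paper builds on has a closed-form blocking (overload) probability, P_N = (1 - rho) * rho^N / (1 - rho^(N + 1)) with rho = lambda/mu, which shows why enlarging the queue capacity N drives overload toward zero for an unsaturated shard. The arrival and service rates below are illustrative, not the paper's measured values.

```python
def mm1n_blocking_probability(lam: float, mu: float, N: int) -> float:
    """Overload probability of an M/M/1/N queue: the chance an arriving
    transaction finds all N slots occupied and is rejected."""
    rho = lam / mu
    if rho == 1.0:
        return 1.0 / (N + 1)  # limiting case of the geometric formula
    return (1 - rho) * rho**N / (1 - rho**(N + 1))

# Illustrative rates: a shard serving 1000 tx/s offered 800 tx/s (rho = 0.8).
# Growing N makes the overload probability vanish geometrically.
for N in (5, 20, 80):
    print(N, mm1n_blocking_probability(800.0, 1000.0, N))
```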
PSCD: A privacy-preserving framework for structural constraint mitigation in deep neural networks on encrypted distributed datasets
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-01-20 | DOI: 10.1016/j.future.2026.108390
Yuhao Zhang , Weiwei Zhao , Changhui Hu
The proliferation of deep neural networks (DNNs) drives the need for collaborative data processing across distributed nodes in next-generation systems. This mode poses a potential threat to distributed data privacy, necessitating the development of more reliable privacy-preserving machine learning (PPML) solutions. Functional encryption (FE) provides a new paradigm for PPML due to its unique advantages. Unfortunately, privacy requirements in existing FE-based schemes impose a priori constraints on permissible neural architectures, highlighting a fundamental tension with model expressiveness. To close this gap, we design a privacy-preserving DNN framework (PSCD) based on FE, mitigating structural constraints on the model by integrating three independent modules. Specifically, we first design a secure aggregation module, SAM, with FE to ensure the confidentiality of local data uploads. Then, we introduce FM Sketch to propose a query control module, QCM, that controls the number of times ciphertext vectors are queried by the cloud server. Finally, we develop a privacy-preserving training mechanism, PPTM, which incorporates Dropout to flexibly adjust the network structure and synchronously enhance the robustness of the model. Formal security analysis proves that PSCD is secure against semi-honest attacks and collusion attacks. Experiments on real-world datasets demonstrate that PSCD achieves at least a 48.5% improvement in operational efficiency and a 38.9% reduction in communication overhead compared to benchmark PPML schemes, while maintaining model accuracy comparable to that of a plaintext DNN.
Citations: 0
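PSCD's query control module (QCM) builds on the FM Sketch to bound how often ciphertext vectors are queried. As background, a minimal Flajolet-Martin sketch for cardinality estimation can be sketched as follows; this is a generic textbook illustration, not the paper's implementation, and the `num_hashes` parameter and correction constant are standard choices rather than values from PSCD.

```python
import hashlib

class FMSketch:
    """Flajolet-Martin sketch: estimates the number of distinct items
    seen so far, using O(1) memory per hash function."""

    PHI = 0.77351  # correction factor from the original FM analysis

    def __init__(self, num_hashes=32):
        self.num_hashes = num_hashes
        self.max_rho = [0] * num_hashes  # highest trailing-zero rank seen per hash

    def _hash(self, item, seed):
        # Derive independent hash functions by salting SHA-256 with the seed.
        digest = hashlib.sha256(f"{seed}:{item}".encode()).digest()
        return int.from_bytes(digest[:8], "big")

    @staticmethod
    def _rho(x):
        # 1-based position of the least-significant set bit (0 for x == 0).
        return (x & -x).bit_length()

    def add(self, item):
        # Duplicates never raise any max_rho, so re-adding an item is a no-op.
        for seed in range(self.num_hashes):
            r = self._rho(self._hash(item, seed))
            if r > self.max_rho[seed]:
                self.max_rho[seed] = r

    def estimate(self):
        # Average the bit positions across hashes, then invert 2^R / phi.
        avg = sum(self.max_rho) / self.num_hashes
        return (2 ** avg) / self.PHI
```

Because the sketch is insensitive to duplicates, a service can use it to count *distinct* queries against a ciphertext vector in constant memory, which is the property the QCM exploits.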
Long integer NTT execution on UPMEM-PIM for 128-bit secure fully homomorphic encryption
IF 6.2 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-01-20 DOI: 10.1016/j.future.2026.108386
Tathagata Barik , Priyam Mehta , Zaira Pindado , Harshita Gupta , Mayank Kabra , Mohammad Sadrosadati , Onur Mutlu , Antonio J. Peña
Fully Homomorphic Encryption (FHE) enables secure computations on encrypted data, hence becoming an appealing technology for privacy-preserving data processing. A core kernel in many cryptographic and FHE workloads is the Number Theoretic Transform (NTT). While NTT involves frequent non-contiguous data accesses, limiting overall performance, processing-in-memory (PIM) has the potential to address this limitation. PIM, performing computations close to the data, reduces the need for extensive data transfers between memory and compute units. However, the performance of current PIM solutions is limited by inherent factors related to the integration of processing capabilities within memory modules.
In this article we analyze the performance trade-offs of NTT kernel designs along with optimized modular multiplication algorithms on PIM systems based on UPMEM hardware. Our results include significant performance improvements of up to 4.3× over baseline approaches on UPMEM-PIM, while preserving, for the first time in the literature, 128-bit security at high precision.
Future Generation Computer Systems, Vol. 180, Article 108386.
Citations: 0
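For background on the kernel itself: the NTT is a discrete Fourier transform over a finite field Z_P, where P is chosen so that the required roots of unity exist. Below is a minimal, naive O(N²) reference version using the common NTT-friendly prime 998244353; production kernels such as those studied on UPMEM use O(N log N) butterfly schedules and optimized modular reduction (e.g. Montgomery or Barrett), none of which is shown here.

```python
P = 998244353   # NTT-friendly prime: 119 * 2**23 + 1
G = 3           # a primitive root modulo P

def ntt(a, invert=False):
    """Naive O(N^2) number theoretic transform over Z_P.

    The length N must divide P - 1 so that a principal N-th root of
    unity exists (any power of two up to 2**23 works for this P).
    """
    n = len(a)
    assert (P - 1) % n == 0, "N must divide P - 1"
    w = pow(G, (P - 1) // n, P)      # principal n-th root of unity
    if invert:
        w = pow(w, P - 2, P)         # modular inverse via Fermat's little theorem
    out = [sum(a[j] * pow(w, i * j, P) for j in range(n)) % P
           for i in range(n)]
    if invert:
        n_inv = pow(n, P - 2, P)     # scale by n^{-1} on the inverse transform
        out = [x * n_inv % P for x in out]
    return out
```

Pointwise multiplication in the NTT domain yields cyclic convolution, which is how FHE libraries multiply large polynomials: transforming [1, 2, 0, 0] and [3, 4, 0, 0], multiplying pointwise, and inverting recovers the coefficients of (1 + 2x)(3 + 4x) = 3 + 10x + 8x².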
LOTL-hunter: Detecting multi-stage living-off-the-land attacks in cyber-physical systems using decision fusion techniques with digital twins
IF 6.2 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-01-20 DOI: 10.1016/j.future.2026.108382
Carol Lo , Thu Yein Win , Zeinab Rezaeifar , Zaheer Khan , Phil Legg
The integration of smart sensors and actuators in industrial environments has expanded the cyber-physical attack surface, making it increasingly difficult to distinguish anomalies caused by cyberattacks from those due to mechanical or electrical faults. This challenge is exacerbated by stealthy, multi-stage attacks leveraging Living off the Land (LOTL) techniques, which often evade conventional anomaly detection or intrusion detection systems (IDS).
This study presents a Digital Twin-based testbed for safe, repeatable simulation of multi-stage cyber-physical attacks targeting Cyber-Physical Systems (CPS) and Industrial Control Systems (ICS). We propose a two-level decision fusion method that aggregates and aligns anomalies across network, process, and host domains in synchronized 1-minute intervals. The first-level fusion improves OT-layer detection by applying confidence-aware decision logic to outputs combined from (a) a supervised deep learning model (LSTM-FCN) for process anomalies, (b) an unsupervised model (Isolation Forest) for OPC UA network anomalies, and (c) process alarm signals. The second-level fusion integrates these results with host-based anomalies, computed through point-based scoring of Wazuh alerts, to provide comprehensive IT/OT situational awareness. Experimental results demonstrate improved detection of stealthy, multi-stage APT attack behaviours. Additionally, Large Language Models (LLM) provide summarization of the integrated IT/OT anomaly logs into human-readable insights, enhancing interpretability and supporting cyber threat hunting.
Future Generation Computer Systems, Vol. 180, Article 108382.
Citations: 0
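The two-level fusion described in the abstract above can be illustrated with a deliberately simplified sketch: a majority vote with a mean-confidence score at the OT layer, then combination with a point-based host score. All thresholds, weights, and labels here are invented for illustration and are not the paper's calibrated logic.

```python
def fuse_ot(process_prob, network_score, alarm_active,
            prob_threshold=0.5, score_threshold=0.6):
    """First-level (OT) fusion for one 1-minute window.

    Combines a supervised process-anomaly probability, an unsupervised
    network-anomaly score, and a binary process alarm. Thresholds are
    illustrative placeholders.
    """
    votes = [process_prob >= prob_threshold,
             network_score >= score_threshold,
             alarm_active]
    confidence = (process_prob + network_score + float(alarm_active)) / 3
    return sum(votes) >= 2, confidence   # majority vote + mean confidence

def fuse_it_ot(ot_flag, ot_confidence, host_points, host_threshold=10):
    """Second-level fusion: combine the OT verdict with a point-based
    host anomaly score (e.g. summed alert weights)."""
    host_flag = host_points >= host_threshold
    if ot_flag and host_flag:
        # Corroboration across IT and OT raises confidence (capped at 1.0).
        return "multi-domain incident", min(1.0, ot_confidence + 0.3)
    if ot_flag or host_flag:
        return "single-domain anomaly", ot_confidence
    return "benign", ot_confidence
```

The key design point the sketch preserves is that each domain contributes an independent verdict per synchronized window, so a stealthy LOTL chain that only weakly trips any single detector can still be flagged once its evidence aligns across domains.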
A knowledge graph-driven framework for deploying AI-powered patient digital twins
IF 6.2 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-01-20 DOI: 10.1016/j.future.2026.108380
Alberto Marfoglia , Christian D’Errico , Sabato Mellone , Antonella Carbonaro
Background: The healthcare sector faces diverse challenges, including poor interoperability and a lack of personalized approaches, which limit patient outcomes. Ineffective data exchange and one-size-fits-all treatments fail to meet individual needs. Emerging technologies like digital twins (DTs), the semantic web, and AI show promise in tackling these obstacles. For this reason, we introduced CONNECTED, a conceptual multi-level framework that combines these techniques to deploy general-purpose patient DTs. Objective: This study assesses CONNECTED’s comprehensiveness, applicability, and utility for developing intelligent, personalized healthcare applications. Specifically, we deliver a preliminary version of the framework to predict future patient states and demonstrate its automation benefits in deploying semantically enriched, AI-powered patient DTs. Methods: We enhanced the CONNECTED architecture by providing a formal definition of DT and modularizing its core functionalities into four microservices (Properties, State, Capabilities, and Manifest). The Manifest service facilitates AI model integration through the Model Interface Manifest Ontology (MIMO), enabling automatic data-to-model binding via a reasoner. Using the HeartBeatKG quality assessment tool, we validated MIMO and tested the internal logic by integrating a well-established stroke-risk model. Results: Our implementation comprises: (1) deploying a FHIR-compliant, patient-centric API for clinical history access, real-time monitoring, and predictive simulation; (2) publishing MIMO; (3) establishing the Manifest protocol for seamless, general-purpose AI model integration tailored to individual patient profiles; and (4) a proof-of-concept benchmarking application comparing multiple stroke risk classifiers. Conclusion: CONNECTED establishes a flexible, scalable foundation for interoperable semantic patient DTs. Automation reduces technical overhead and enables users to focus on delivering personalized, insight-driven care.
Future Generation Computer Systems, Vol. 180, Article 108380.
Citations: 0
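The manifest-driven binding idea above (a model declares its inputs, and a resolver binds patient data to them automatically) can be sketched without any ontology machinery. The manifest fields, record paths, and the stroke model name below are hypothetical; in the paper, MIMO is an ontology resolved by a reasoner, which this plain-Python sketch does not attempt to reproduce.

```python
# Hypothetical manifest in the spirit of MIMO: the model declares which
# patient properties it consumes, so a generic resolver can bind data
# automatically. All names and paths here are invented for illustration.
STROKE_MODEL_MANIFEST = {
    "model_id": "stroke-risk-demo",
    "inputs": [
        {"name": "age",         "path": ("properties", "age"),        "required": True},
        {"name": "systolic_bp", "path": ("observations", "systolic"), "required": True},
        {"name": "smoker",      "path": ("properties", "smoker"),     "required": False,
         "default": False},
    ],
}

def bind_inputs(manifest, patient_record):
    """Resolve each declared input against the patient's digital-twin state.

    Required inputs raise if absent; optional inputs fall back to their
    declared default.
    """
    bound = {}
    for spec in manifest["inputs"]:
        section, key = spec["path"]
        value = patient_record.get(section, {}).get(key)
        if value is None:
            if spec["required"]:
                raise KeyError(f"missing required input: {spec['name']}")
            value = spec.get("default")
        bound[spec["name"]] = value
    return bound
```

The design payoff is that swapping in a different model (e.g. another stroke-risk classifier in the paper's benchmarking application) only requires publishing a new manifest, not changing the twin's data-access code.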