
Latest publications from Future Generation Computer Systems - The International Journal of eScience

An incentive mechanism based on elastic task partitioning for latency sensitive mobile crowdsourcing
IF 6.2 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-05-01 | Epub Date: 2025-12-16 | DOI: 10.1016/j.future.2025.108327
Yang Liu
Mobile crowdsourcing (MCS) has emerged in recent years as a discipline combining mobile computing, distributed computing, and social computing. In a crowd-participation system, the incentive mechanism affects overall performance because rational participants may change their behavior in response to rewards. With the rapid development of edge computing, a key incentive goal is to improve latency for many MCS applications. Unilateral contracts, auctions, and game theory are the three main approaches in related work, but studies confronting the complexity of data parallelism over heterogeneous edge resources and wireless networks remain weak in compatibility. Heterogeneous resources increase the complexity of data partitioning, which generally becomes elastic; wireless networking complicates data distribution owing to non-negligible network latencies and the characteristics of wireless channels. This paper therefore studies a latency model, covering allocation and scheduling and formulated as an optimization problem, based on elastic task partitioning. An auction-based incentive mechanism is presented and incorporated into the optimization problem. A novel method using shadow Dirichlet sampling within a genetic algorithm framework is proposed, and several optimizers are derived from it. A simulation compares these optimizers; the best achieves roughly a 5% improvement in makespan. The auction model is also tested from different perspectives: the proposed model gains approximately 5% in makespan over state-of-the-art models that combine the transmission mode with multiple isolated wireless channels, and when the multi-channel strategy is excluded, it saves about two thirds of the time cost.
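The shadow-Dirichlet-within-GA optimizer itself is not reproducible from the abstract, but the core idea of searching the simplex of elastic partition fractions for a makespan-minimizing split can be sketched as a minimal random-search toy. The workloads, speeds, latencies, and function names below are all invented for illustration; this is not the paper's algorithm:

```python
import random

def makespan(fracs, workload, speeds, net_delay):
    # Completion time of the slowest worker: its compute share plus its network latency.
    return max(f * workload / s + d for f, s, d in zip(fracs, speeds, net_delay))

def dirichlet_sample(k, rng):
    # Draw a random point on the k-simplex (symmetric Dirichlet via normalized exponentials).
    xs = [rng.expovariate(1.0) for _ in range(k)]
    total = sum(xs)
    return [x / total for x in xs]

def optimize_partition(workload, speeds, net_delay, iters=2000, seed=0):
    # Naive simplex search: keep the best-sampled elastic partition seen so far.
    rng = random.Random(seed)
    best = None
    for _ in range(iters):
        fracs = dirichlet_sample(len(speeds), rng)
        t = makespan(fracs, workload, speeds, net_delay)
        if best is None or t < best[0]:
            best = (t, fracs)
    return best

# Three heterogeneous workers: fast/low-latency to slow/high-latency.
t, fracs = optimize_partition(100.0, speeds=[4.0, 2.0, 1.0], net_delay=[0.5, 1.0, 2.0])
```

A genetic algorithm would replace the blind resampling with crossover and mutation on the simplex, but the evaluation loop is the same shape.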
Volume 178, Article 108327.
Citations: 0
Towards a standardized secure MPC outsourcing and management framework
IF 6.2 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-05-01 | Epub Date: 2025-09-27 | DOI: 10.1016/j.future.2025.108164
Oscar G. Bautista , Kemal Akkaya , Soamar Homsi
Secure Multiparty Computation (MPC) has emerged as a promising solution for processing privacy-sensitive data in diverse domains (e.g., health, finance, agriculture, and smart cities). However, challenges remain in applying this technology to real-world use cases: establishing secure communication among players who do not know each other, outsourcing the computation to untrusted servers, orchestrating its execution, and verifying the correctness of the results without revealing them to untrusted participants. To this end, we propose a flexible end-to-end MPC management framework comprising a) a protocol for handling and orchestrating MPC job requests, b) an efficient Kerberos-like protocol for authentication between clients (i.e., data sources and consumers) and MPC servers, and c) output-correctness verification in a separate environment, by introducing Verification Servers that prevent the untrusted MPC servers from learning the computation output while enabling cheating detection under a malicious attack model with a dishonest majority. We implemented and tested the proposed framework using the SPDZ protocol in the outsourced setting as a proof of concept with various participants. The experimental evaluation demonstrates that our approach quickly enables secure communication among participants without prior knowledge of each other's identity.
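The Kerberos-like authentication can be pictured with a minimal ticket sketch. This toy only seals tickets with an HMAC rather than encrypting them as real Kerberos does, and every name here (`issue_ticket`, `kdc_key`, the field layout) is invented for illustration, not taken from the paper's protocol:

```python
import hashlib
import hmac
import json
import time

def issue_ticket(kdc_key: bytes, client_id: str, server_id: str,
                 session_key: bytes, lifetime: int = 300):
    # An authentication server binds (client, server, session key, expiry) together,
    # Kerberos-style; here the seal is modeled as an HMAC tag under the server's key.
    body = json.dumps({
        "client": client_id,
        "server": server_id,
        "skey": session_key.hex(),
        "expires": int(time.time()) + lifetime,
    }).encode()
    tag = hmac.new(kdc_key, body, hashlib.sha256).hexdigest()
    return body, tag

def verify_ticket(kdc_key: bytes, body: bytes, tag: str) -> bool:
    # The MPC server recomputes the tag and checks the ticket has not expired.
    expected = hmac.new(kdc_key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag) and json.loads(body)["expires"] > time.time()

key = b"server-long-term-key"
body, tag = issue_ticket(key, "data-source-1", "mpc-server-A", b"\x01" * 16)
```

A forged or tampered ticket fails verification because the tag no longer matches under the server's key.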
Volume 178, Article 108164.
Citations: 0
MPI malleability validation under replayed real-world HPC conditions
IF 6.2 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-05-01 | Epub Date: 2025-12-13 | DOI: 10.1016/j.future.2025.108305
Sergio Iserte , Maël Madon , Georges Da Costa , Jean-Marc Pierson , Antonio J. Peña
Dynamic Resource Management (DRM) techniques can be leveraged to maximize throughput and resource utilization in computational clusters. Although DRM has been extensively studied through analytical workloads and simulations, skepticism persists among administrators and end users regarding its feasibility under real-world conditions. To address this problem, we propose a novel methodology for validating DRM techniques, such as malleability, in realistic scenarios that reproduce actual cluster conditions of jobs and users by replaying workload logs on a High-Performance Computing (HPC) infrastructure. Our methodology is capable of adapting the workload to the target cluster. We evaluate it on a malleability-enabled 125-node partition of the MareNostrum 5 supercomputer. Our results validate the proposed method and assess the benefits of MPI malleability on a novel use case of a pioneering malleability user (our "PhD Student"): parallel-efficiency-aware malleability reduced malleable workload time by 27% without delaying the baseline workload; it introduces queueing delays for individual jobs but maintains the resource utilization rate.
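The step of adapting a replayed log to the target cluster could, under one simple assumption (linearly rescaling node requests to the target machine size), look like the toy helper below. The function name, the `(submit, nodes, runtime)` record shape, and the scaling rule are illustrative; the paper's actual adaptation scheme may differ:

```python
def adapt_workload(jobs, source_nodes, target_nodes):
    # Rescale each job's node request from the source machine to the target
    # cluster size, clamping to [1, target_nodes]. Jobs are (submit, nodes, runtime).
    scale = target_nodes / source_nodes
    adapted = []
    for submit, nodes, runtime in jobs:
        new_nodes = max(1, min(target_nodes, round(nodes * scale)))
        adapted.append((submit, new_nodes, runtime))
    return adapted

# A 4096-node machine's log replayed on a 125-node partition (hypothetical numbers).
jobs = [(0, 512, 3600), (60, 64, 600), (120, 4096, 7200)]
adapted = adapt_workload(jobs, source_nodes=4096, target_nodes=125)
```

With these numbers the 512-node job becomes a 16-node job and the full-machine job fills the whole 125-node partition.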
Volume 178, Article 108305.
Citations: 0
Offloading artificial intelligence workloads across the computing continuum by means of active storage systems
IF 6.2 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-05-01 | Epub Date: 2025-12-01 | DOI: 10.1016/j.future.2025.108271
Alex Barceló , Sebastián A. Cajas Ordoñez , Jaydeep Samanta , Andrés L. Suárez-Cetrulo , Romila Ghosh , Ricardo Simón Carbajo , Anna Queralt
The increasing demand for artificial intelligence (AI) workloads across diverse computing environments has driven the need for more efficient data management strategies. Traditional cloud-based architectures struggle to handle the sheer volume and velocity of AI-driven data, leading to inefficiencies in storage, computation, and data movement. This paper explores the integration of active storage systems within the computing continuum to optimize AI workload distribution.
By embedding computation directly into storage architectures, active storage reduces data transfer overhead, enhancing performance and improving resource utilization. Other existing frameworks and architectures offer mechanisms to distribute certain AI processes across distributed environments; however, they lack the flexibility and adaptability that the continuum requires, both with regard to the heterogeneity of devices and to the rapidly changing algorithms and models used by domain experts and researchers.
This article proposes a software architecture aimed at seamlessly distributing AI workloads across the computing continuum, and presents its implementation using mainstream Python libraries and dataClay, an active storage platform. The evaluation shows the benefits and trade-offs regarding memory consumption, storage requirements, training times, and execution efficiency across different devices. Experimental results demonstrate that the process of offloading workloads through active storage significantly improves memory efficiency and training speeds while maintaining accuracy. Our findings highlight the potential of active storage to revolutionize AI workload management, making distributed AI deployments more scalable and resource-efficient with a very low entry barrier for domain experts and application developers.
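As a rough illustration of why active storage reduces data movement, the toy class below evaluates a reduction "next to" the data and ships back a single scalar instead of the raw array. It mimics the idea only; it does not use dataClay's real API, and the class and function names are invented:

```python
class ActiveStoredArray:
    """Toy active-storage object: the array lives on the storage node, and
    methods execute storage-side, so only small results cross the network."""

    def __init__(self, values):
        self._values = list(values)  # resident on the storage node

    def remote_mean(self):
        # Runs where the data lives: one float travels back, not len(values) items.
        return sum(self._values) / len(self._values)

def bytes_moved_naive(n_items, item_size=8):
    # Client-side computation: pull every item over the network first.
    return n_items * item_size

def bytes_moved_active(item_size=8):
    # Active storage: only the scalar result is transferred.
    return item_size

arr = ActiveStoredArray(range(1_000))
```

For a 1,000-element array the active path moves 8 bytes where the naive path moves 8,000, and the gap grows linearly with data size.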
Volume 178, Article 108271.
Citations: 0
LuGo: An enhanced quantum phase estimation implementation
IF 6.2 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-05-01 | Epub Date: 2025-12-03 | DOI: 10.1016/j.future.2025.108270
Chao Lu, Muralikrishnan Gopalakrishnan Meena, Kalyana C. Gottiparthi
Quantum Phase Estimation (QPE) is a cardinal algorithm in quantum computing that plays a crucial role in applications including cryptography, molecular simulation, and solving systems of linear equations. However, the standard implementation of QPE faces challenges related to time complexity and circuit depth, which limit its practicality for large-scale computations. We introduce LuGo, a novel framework designed to enhance the performance of QPE by reducing circuit duplication and by using parallelization techniques to generate the QPE circuit faster with fewer gates. We validate the effectiveness of our framework by generating quantum linear solver circuits, which require both QPE and inverse QPE, to solve linear systems of equations. LuGo achieves significant improvements in both computational efficiency and hardware requirements without compromising accuracy. Compared to a standard QPE implementation, LuGo reduces the time to generate a circuit that solves a 2⁶ × 2⁶ system matrix by a factor of 50.68, with over 31× reduction in quantum gates and circuit depth and no fidelity loss on an ideal quantum simulator. We demonstrate the versatility and scalability of the LuGo-enabled HHL algorithm by simulating a canonical Hele-Shaw fluid problem on a quantum simulator. With these advantages, LuGo paves the way for more efficient implementations of QPE, enabling broader applications across several quantum computing domains.
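LuGo's circuit-level optimizations cannot be reconstructed from the abstract, but the textbook QPE readout it builds on can be: with n counting qubits, an eigenphase φ is measured as outcome k with probability |2⁻ⁿ Σₘ e^{2πim(φ − k/2ⁿ)}|², which peaks at k ≈ φ·2ⁿ. The small simulation below evaluates that standard distribution directly (standard QPE theory, not LuGo's implementation):

```python
import cmath

def qpe_distribution(phase, n_qubits):
    # Ideal QPE outcome distribution for eigenphase `phase` with n counting qubits:
    # P(k) = |(1/N) * sum_m exp(2*pi*i*m*(phase - k/N))|^2, with N = 2**n_qubits.
    N = 2 ** n_qubits
    probs = []
    for k in range(N):
        amp = sum(cmath.exp(2j * cmath.pi * m * (phase - k / N)) for m in range(N)) / N
        probs.append(abs(amp) ** 2)
    return probs

def estimate_phase(phase, n_qubits):
    # Read out the most likely counting register value and convert it to a phase.
    probs = qpe_distribution(phase, n_qubits)
    k = max(range(len(probs)), key=probs.__getitem__)
    return k / 2 ** n_qubits

est = estimate_phase(0.375, 4)  # 0.375 = 6/16 is exactly representable with 4 qubits
```

Phases that are exact multiples of 1/2ⁿ are recovered with probability 1; all others concentrate most of the probability mass on the two nearest grid points.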
Volume 178, Article 108270.
Citations: 0
High self-adaptive task offloading framework in vehicular fog networks: A hybrid approach leveraging case-based reasoning and integer linear programming
IF 6.2 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-05-01 | Epub Date: 2025-12-05 | DOI: 10.1016/j.future.2025.108293
Chia-Cheng Hu
This paper proposes a self-adaptive task offloading framework for Vehicular Fog Networks (VFNs) that effectively addresses the challenges of high vehicular mobility, dynamic connectivity, and fluctuating computational demands. The framework integrates Case-Based Reasoning (CBR) with an Integer Linear Programming (ILP) model to deliver real-time, mobility-aware offloading decisions. Offline, the ILP-based rounding algorithm generates a decision database of near-optimal task allocation strategies under diverse network conditions. Online, the Decision Script Determination (DSD) algorithm employs CBR to retrieve and adapt strategies in response to environmental changes, triggered by event-driven and periodic mechanisms with adaptive thresholds. Extensive evaluations using real vehicular mobility traces demonstrate that the proposed framework consistently reduces service latency and energy consumption while improving task success rates compared with heuristic and learning-based benchmarks. Specifically, the MTPR algorithm achieves near-optimal performance, approaching the results of exact ILP solutions while significantly outperforming a Greedy baseline. Furthermore, the DSD mechanism outperforms deep reinforcement learning methods, offering superior decision accuracy and adaptability without incurring training overhead. The main contributions are threefold: 1) development of a hybrid optimization–reasoning framework that ensures scalable and efficient task offloading, 2) construction of a comprehensive decision database of precomputed strategies to support low-latency real-time operation, and 3) empirical validation in realistic VFN scenarios, confirming superior adaptability and efficiency over state-of-the-art methods.
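The online CBR retrieval step can be sketched as a nearest-case lookup with a similarity threshold: match the current network condition against the precomputed decision database and reuse the closest stored strategy. The feature set, distance metric, threshold, and target names below are illustrative assumptions, not the paper's DSD algorithm:

```python
def retrieve_case(case_base, query, max_distance=0.25):
    # CBR retrieval: find the stored condition closest to the current one and
    # reuse its offloading decision; signal a fallback if nothing is close enough.
    def dist(a, b):
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    best = min(case_base, key=lambda c: dist(c[0], query))
    return best[1] if dist(best[0], query) <= max_distance else None

# (vehicle_density, channel_quality, load) -> offload target; values are made up.
case_base = [
    ((0.9, 0.2, 0.8), "neighbor_fog"),
    ((0.1, 0.9, 0.3), "local"),
    ((0.5, 0.5, 0.9), "roadside_unit"),
]
decision = retrieve_case(case_base, (0.85, 0.25, 0.75))
```

In the full framework, a `None` result would trigger the ILP-based rounding path to compute (and store) a fresh strategy for the unseen condition.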
Volume 178, Article 108293.
Citations: 0
Engineering opportunistic digital twins with lingua franca
IF 6.2 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-05-01 | Epub Date: 2025-11-27 | DOI: 10.1016/j.future.2025.108262
Vincenzo Barbuto , Claudio Savaglio , Edward A. Lee , Giancarlo Fortino
Digital Twins (DTs) have emerged as essential tools for virtualizing and enhancing Cyber-Physical Systems (CPS) by providing synchronized digital counterparts that enable monitoring, control, prediction, and optimization. Initially conceived as passive digital shadows, DTs are increasingly evolving into intelligent and proactive entities, enabled by the integration of Artificial Intelligence (AI). Among these advancements, Opportunistic Digital Twins (ODTs) represent a novel class of DTs: living, AI-aided, and actionable models that opportunistically exploit edge-cloud resources to deliver enriched and adaptive representations of physical entities and processes. However, despite their promise, current research lacks systematic engineering methods to ensure reliable coordination, determinism, and real-time responsiveness of ODTs in distributed and resource-constrained CPS. This article addresses this gap by introducing an engineering approach to build dependable and efficient ODTs by leveraging the deterministic concurrency, explicit timing semantics, and disciplined event handling of Lingua Franca (LF). The approach is exemplified through a Smart Traffic Management case study centered on Emergency Vehicle Preemption (EVP), where the ODT dynamically selects AI models based on runtime conditions while ensuring deterministic coordination across distributed nodes. Experimental results confirm the feasibility and effectiveness of our methodology, underscoring the potential of LF-based ODT engineering to enhance reliability, adaptability, and scalability in intelligent and distributed CPS deployments.
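The ODT's runtime selection of AI models can be caricatured as choosing the most accurate model that fits the current latency budget. The model names and figures below are hypothetical, and the real system (expressed in Lingua Franca reactors) selects under richer runtime conditions than this one-constraint sketch:

```python
def select_model(models, latency_budget_ms, min_accuracy):
    # Opportunistic selection: among models that meet the latency budget and the
    # accuracy floor, pick the most accurate; return None if none qualifies.
    feasible = [m for m in models
                if m["latency_ms"] <= latency_budget_ms and m["accuracy"] >= min_accuracy]
    if not feasible:
        return None
    return max(feasible, key=lambda m: m["accuracy"])["name"]

# Hypothetical model zoo for an emergency-vehicle-detection task.
models = [
    {"name": "yolo-nano",  "latency_ms": 8,  "accuracy": 0.81},
    {"name": "yolo-small", "latency_ms": 25, "accuracy": 0.89},
    {"name": "yolo-large", "latency_ms": 90, "accuracy": 0.94},
]
choice = select_model(models, latency_budget_ms=30, min_accuracy=0.80)
```

Tightening the budget (e.g., during an EVP event requiring sub-10 ms decisions) would shift the choice down to the smaller model, which is the opportunistic behavior the article describes.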
Engineering opportunistic digital twins with lingua franca — Vincenzo Barbuto, Claudio Savaglio, Edward A. Lee, Giancarlo Fortino. Future Generation Computer Systems, vol. 178, Article 108262. DOI: 10.1016/j.future.2025.108262
Citations: 0
REX: A remote execution model for continuous scalability in multi-chiplet-module GPUs
IF 6.2 Tier 2, Computer Science Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-05-01 Epub Date : 2025-11-27 DOI: 10.1016/j.future.2025.108268
Mario Ibáñez Bolado, Borja Pérez Pavón, Jose Luis Bosque Orero
Monolithic GPU architectures face growing limitations due to power density, yield issues, and manufacturing complexity, motivating a shift toward multi-chiplet designs. While promising, these architectures struggle with workloads exhibiting irregular memory access patterns, where static data placement is often insufficient. Though data locality can help, it does not adapt well to dynamic access behaviour, leading to performance degradation. This paper introduces REX, a runtime mechanism that migrates threads to the chiplet where their data resides, adapting dynamically to the generated memory access patterns with a fine granularity. By relocating computation instead of data, REX improves locality and minimises remote memory accesses, which are especially costly in multi-chiplet environments. As a result, it reduces inter-chiplet traffic and scales efficiently with the number of chiplets. On irregular workloads, the solution demonstrates consistent performance gains, averaging a 13 % speedup, with improvements reaching up to 38 %. Moreover, its scalability with chiplet count is particularly noteworthy, delivering a 25 % average gain, and peaking at an impressive 84 % in the most favourable scenarios.
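REX's core move — relocating computation to the chiplet holding the data rather than moving the data — can be sketched as a simple migration decision. The function name, threshold, and policy below are illustrative assumptions, not the paper's actual runtime mechanism.

```python
from collections import Counter

def pick_chiplet(access_trace, current, threshold=0.5):
    """Decide where a thread should run, following the idea behind REX:
    migrate the thread to the chiplet that serves most of its memory
    accesses, so remote (inter-chiplet) traffic is minimised.
    Policy and threshold are illustrative, not from the paper."""
    counts = Counter(access_trace)            # chiplet id -> access count
    target, hits = counts.most_common(1)[0]
    if target != current and hits / len(access_trace) > threshold:
        return target                         # move computation to the data
    return current                            # locality is fine; stay put

trace = [2, 2, 2, 0, 2, 1, 2]                 # 5 of 7 accesses hit chiplet 2
assert pick_chiplet(trace, current=0) == 2    # worth migrating
assert pick_chiplet(trace, current=2) == 2    # already co-located
```

The fine granularity the abstract mentions corresponds to making this decision per thread from its observed access trace, rather than from a static, whole-kernel data placement.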
Citations: 0
Federated clustering: An unsupervised cluster-wise training for decentralized data distributions
IF 6.2 Tier 2, Computer Science Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-05-01 Epub Date : 2025-12-07 DOI: 10.1016/j.future.2025.108294
Mirko Nardi, Lorenzo Valerio, Andrea Passarella
Federated Learning (FL) enables decentralized machine learning while preserving data privacy, making it ideal for sensitive applications where data cannot be shared. While FL has been widely studied in supervised contexts, its application to unsupervised learning remains underdeveloped. This work introduces FedCRef, a novel unsupervised federated learning method designed to uncover all underlying data distributions across decentralized clients without requiring labels. This task, known as Federated Clustering, presents challenges due to heterogeneous, non-uniform data distributions and the lack of centralized coordination. Unlike previous methods that assume a one-cluster-per-client setup or require prior knowledge of the number of clusters, FedCRef generalizes to multi-cluster-per-client scenarios. Clients iteratively refine their data partitions while discovering all distinct distributions in the system. The process combines local clustering, model exchange and evaluation via reconstruction error analysis, and collaborative refinement within federated groups of similar distributions to enhance clustering accuracy. Extensive evaluations on four public datasets (EMNIST, KMNIST, Fashion-MNIST and KMNIST49) show that FedCRef successfully identifies true global data distributions, achieving an average local accuracy of up to 95 %. The method is also robust to noisy conditions, scalable, and lightweight, making it suitable for resource-constrained edge devices.
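The abstract's "model exchange and evaluation via reconstruction error analysis" step can be sketched with toy data: each client summarises its data with a local model (here, a single centroid — a deliberate simplification of whatever model FedCRef actually exchanges), and a foreign model that reconstructs local data poorly signals a different underlying distribution, so the clients would not federate.

```python
import math

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def reconstruction_error(points, model):
    """Mean distance of each point to its nearest centroid in `model`.
    Stands in for the paper's reconstruction-error analysis."""
    return sum(min(math.dist(p, c) for c in model) for p in points) / len(points)

# Two clients, each holding samples from one distribution (toy data).
client_a = [[0.0, 0.1], [0.1, 0.0], [0.0, 0.0]]
client_b = [[5.0, 5.1], [5.1, 5.0], [5.0, 5.0]]

model_a = [centroid(client_a)]    # each client's local "model"
model_b = [centroid(client_b)]

# A client's own model fits its data far better than a foreign one,
# which is the signal used to group clients with similar distributions.
own = reconstruction_error(client_a, model_a)
foreign = reconstruction_error(client_a, model_b)
assert own < foreign
```

In the paper's multi-cluster-per-client setting each client would hold several such models and iterate this exchange-and-evaluate loop; the two-client, one-centroid version here only shows the comparison that drives federation decisions.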
Citations: 0
A pattern-aware LSTM-based approach for APT detection leveraging a realistic dataset for critical infrastructure security
IF 6.2 Tier 2, Computer Science Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2026-05-01 Epub Date : 2025-12-14 DOI: 10.1016/j.future.2025.108308
Eider Iturbe, Christos Dalamagkas, Panagiotis Radoglou-Grammatikis, Erkuden Rios, Nerea Toledo
Advanced Persistent Threats (APTs) represent some of the most sophisticated and coordinated cyberattacks, often targeting critical infrastructure with stealthy, multi-stage techniques. Despite the availability of numerous intrusion detection datasets, most fail to capture the sequential and strategic nature of APT campaigns as outlined in frameworks like MITRE ATT&CK. This paper introduces a novel dataset based on a realistic emulation of the Sandworm APT group targeting the Supervisory Control and Data Acquisition (SCADA) system of a Wide Area Measurement System (WAMS). The dataset captures the full lifecycle of an APT attack, from initial access to impact, in a structured and time-ordered manner, enabling the study of both atomic and multi-step intrusion behaviours. We train and evaluate supervised multiclass sequence-aware models, specifically Long Short-Term Memory (LSTM) and Bidirectional LSTM (BiLSTM) architectures, to detect these behaviours using network flow data, assessing their performance and analysing their strengths and limitations. Our results show that BiLSTM models offer greater stability and generalization, while LSTM models achieve competitive performance with optimal configurations. These findings highlight the importance of realistic, sequence-aware datasets for developing robust intrusion detection systems tailored to modern APT threats.
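Sequence-aware models such as the paper's LSTM/BiLSTM classifiers consume time-ordered windows of flow features rather than individual flows. This is a minimal sketch of that windowing step; the feature layout, window policy, and labelling rule are assumptions for illustration, not the paper's exact pipeline.

```python
def make_windows(flows, labels, seq_len):
    """Group per-flow feature vectors into fixed-length, time-ordered
    windows suitable as input to a sequence model such as an LSTM."""
    xs, ys = [], []
    for i in range(len(flows) - seq_len + 1):
        xs.append(flows[i:i + seq_len])
        # Label a window malicious if any flow inside it is malicious —
        # one simple way to preserve multi-step attack context.
        ys.append(int(any(labels[i:i + seq_len])))
    return xs, ys

flows = [[0.1, 10], [0.2, 12], [0.9, 300], [0.1, 11]]   # toy flow features
labels = [0, 0, 1, 0]                                   # per-flow labels
X, y = make_windows(flows, labels, seq_len=2)
assert len(X) == 3 and y == [0, 1, 1]
```

Windowing like this is what lets a recurrent model see the *ordering* of ATT&CK-style attack stages, which per-flow classifiers discard.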
Citations: 0