
Journal of Systems Architecture: Latest Articles

ECDPA: An enhanced concurrent differentially private algorithm in electric vehicles for parallel queries
IF 4.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-12-16 | DOI: 10.1016/j.sysarc.2025.103665
Mohsin Ali , Muneeb Ul Hassan , Pei-Wei Tsai , Jinjun Chen
As the adoption of electric vehicles (EVs) has skyrocketed over the past few decades, data-dependent services integrated into charging stations (CS) raise additional alarming concerns. Threats from adversaries exploiting individuals' private data have been addressed extensively by deploying techniques such as differential privacy (DP) and encryption-based approaches. However, these approaches work effectively for sequential or single queries, but are not useful for parallel queries. This paper proposes a novel interactive approach termed CDP-INT, which tackles multiple queries targeted at the same dataset while precluding exploitation of users' sensitive information. The proposed mechanism is tailored for EVs and CS, distributing the total privacy budget ϵ among a number of parallel queries. It ensures robust privacy protection in response to multiple queries, maintaining an optimal trade-off between utility and privacy by dynamically allocating ϵ in a concurrent model. The experimental evaluation demonstrates the efficacy of CDP-INT in comparison with approaches that handle queries through a sequential mechanism, and confirms that CDP-INT is a viable solution for protecting sensitive information in response to multiple queries.
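The budget-splitting idea in this abstract can be sketched in a few lines. This is an illustrative simplification only, not the authors' CDP-INT mechanism: the uniform weighting, the function names, and the per-query Laplace noise calibration are all assumptions.

```python
import math
import random

def split_budget(total_epsilon, weights):
    """Divide a total privacy budget among parallel queries in
    proportion to per-query weights (uniform here, for illustration)."""
    s = sum(weights)
    return [total_epsilon * w / s for w in weights]

def laplace_noise(scale):
    # Inverse-CDF sampling of Laplace(0, scale).
    u = random.random() - 0.5  # in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def answer_parallel_queries(true_answers, sensitivities, total_epsilon):
    """Answer each query with Laplace noise calibrated to its share of
    the budget; by composition, the combined release consumes at most
    total_epsilon of differential privacy."""
    budgets = split_budget(total_epsilon, [1.0] * len(true_answers))
    return [a + laplace_noise(s / e)
            for a, s, e in zip(true_answers, sensitivities, budgets)]

noisy = answer_parallel_queries([120.0, 45.0, 300.0], [1.0, 1.0, 1.0], 1.0)
print([round(v, 1) for v in noisy])
```

A dynamic allocator, as in the paper, would replace the uniform weights with per-query weights derived at run time.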
Journal of Systems Architecture, Volume 172, Article 103665.
Cited by: 0
DeSpa: Heterogeneous multi-core accelerators for energy-efficient dense and sparse computation at the tile level in Deep Neural Networks
IF 4.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-12-12 | DOI: 10.1016/j.sysarc.2025.103650
Hyungjun Jang , Dongho Ha , Hyunwuk Lee , Won Woo Ro
The rapid evolution of Deep Neural Networks (DNNs) has driven significant advances in Domain-Specific Accelerators (DSAs). However, efficiently exploiting DSAs across diverse workloads remains challenging because complementary techniques—from sparsity-aware computation to system-level innovations such as multi-core architectures—have progressed independently. Our analysis reveals pronounced tile-level sparsity variations within the DNNs, which cause efficiency fluctuations on homogeneous accelerators built solely from dense or sparsity-oriented cores. To address this challenge, we present DeSpa, a novel heterogeneous multi-core accelerator architecture that integrates both dense and sparse cores to dynamically adapt to tile-level sparsity variations. DeSpa is paired with a heterogeneity-aware scheduler that employs a tile-stealing mechanism to maximize core utilization and minimize idle time. Compared to a homogeneous sparse multi-core baseline, DeSpa reduces energy consumption by 33% and improves energy-delay product (EDP) by 14%, albeit at the cost of a 35% latency increase. Relative to a homogeneous dense baseline, it reduces EDP by 44%, cuts energy consumption by 42%, and delivers a 1.34× speed-up.
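The tile-level routing decision DeSpa makes can be illustrated with a toy scheduler. This is a hypothetical sketch, not the paper's heterogeneity-aware scheduler; the 0.5 sparsity threshold and all names are assumptions.

```python
def tile_sparsity(tile):
    """Fraction of zero entries in a 2-D tile."""
    total = sum(len(row) for row in tile)
    zeros = sum(row.count(0) for row in tile)
    return zeros / total

def schedule_tiles(tiles, threshold=0.5):
    """Route each tile index to the sparse cores when its zero
    fraction exceeds the threshold, else to the dense cores."""
    assignment = {"dense": [], "sparse": []}
    for i, tile in enumerate(tiles):
        kind = "sparse" if tile_sparsity(tile) > threshold else "dense"
        assignment[kind].append(i)
    return assignment

tiles = [
    [[1, 2], [3, 4]],   # fully dense tile
    [[0, 0], [0, 5]],   # 75% zeros
    [[0, 6], [7, 0]],   # 50% zeros: treated as dense at threshold 0.5
]
print(schedule_tiles(tiles))
```

The tile-stealing mechanism described in the abstract would then let an idle core pull pending tile indices from the other queue.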
Journal of Systems Architecture, Volume 172, Article 103650.
Cited by: 0
Dependency-aware microservices offloading in ICN-based edge computing testbed
IF 4.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-12-08 | DOI: 10.1016/j.sysarc.2025.103663
Muhammad Nadeem Ali , Ihsan Ullah , Muhammad Imran , Muhammad Salah ud din , Byung-Seo Kim
Information-Centric Networking (ICN)-based edge computing has demonstrated remarkable potential in delivering the ultra-low-latency, reliable communication needed to offload compute-intensive applications. Such applications are often composed of interdependent microservices that demand abundant communication and intensive computing resources. To avoid dependency conflicts, these microservices are typically arranged in a predefined sequence prior to offloading; however, this introduces waiting time for each microservice in the sequence. This paper presents an ICN-edge computing testbed framework to demonstrate the practical applicability of a study named IFCNS, which proposes a unique solution to reduce the offloading time of dependent microservices compared to an existing scheme named OTOOA. In the testbed, the IFCNS and OTOOA schemes are implemented on Raspberry Pi devices on top of the Named Data Networking (NDN) codebase, in a Python script. Furthermore, this paper outlines the comprehensive testbed development procedure, including hardware and software configuration. To evaluate the effectiveness of the IFCNS scheme, modifications are applied to the NDN naming, microservice tracking functions, and forwarding strategy. The experimental results corroborate the effectiveness of IFCNS as compared to OTOOA, demonstrating superior performance in time consumption, average interest satisfaction delay, energy consumption, FIB table load, and average naming overhead.
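The predefined dependency ordering the abstract mentions is, in essence, a topological sort over the microservice dependency graph. A minimal sketch follows; this is not the IFCNS algorithm, and the service names are invented.

```python
from collections import deque

def offload_order(deps):
    """Kahn's algorithm: a dependency-respecting offloading sequence.
    deps maps each microservice to the list of its prerequisites."""
    indeg = {s: len(p) for s, p in deps.items()}
    children = {s: [] for s in deps}
    for s, prereqs in deps.items():
        for p in prereqs:
            children[p].append(s)
    ready = deque(s for s, d in indeg.items() if d == 0)
    order = []
    while ready:
        s = ready.popleft()
        order.append(s)
        for c in children[s]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    if len(order) != len(deps):
        raise ValueError("dependency cycle detected")
    return order

# Invented example pipeline: detection depends on decoding, etc.
deps = {"decode": [], "detect": ["decode"],
        "track": ["detect"], "render": ["detect"]}
order = offload_order(deps)
print(order)
```

Each position in this sequence is where the waiting time the abstract describes accumulates, since a service cannot be offloaded before its prerequisites complete.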
Journal of Systems Architecture, Volume 171, Article 103663.
Cited by: 0
Blockchain-based multi-user dynamic verifiable searchable encryption for secure data storage and query on malicious cloud server
IF 4.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-12-03 | DOI: 10.1016/j.sysarc.2025.103647
Chunhua Jin , Wanru Lu , Lingwen Kong , Jiahao Wang , Xinying Liu , Guanhua Chen , Hao Zhang , Jian Weng
Dynamic Searchable Symmetric Encryption (DSSE) is a prominent research area that enables users to search and update encrypted data stored in the cloud, thereby preserving data privacy. However, due to key distribution challenges, most existing DSSE schemes are primarily designed for single-user scenarios, which limits the broader applicability of searchable encryption. Moreover, assuming that cloud servers and third-party auditors (TPA) are semi-honest is overly optimistic, as TPAs are often difficult to supervise and may collude with cloud servers to forge verification information. In order to address the above issues, we propose a blockchain-based multi-user dynamic verifiable searchable encryption scheme. In our design, only the indexes constructed using a Cuckoo filter are stored on the blockchain, significantly reducing on-chain storage overhead. The core algorithm is fully implemented through chaincode, ensuring honest execution of all operations and preventing potential collusion between TPAs and cloud servers. To address key distribution, we employ the Diffie–Hellman key exchange protocol to securely generate user-specific keys. Furthermore, we introduce a novel verification mechanism that integrates the Cuckoo filter with a Merkle hash tree to enhance verifiability. Finally, we conduct a comprehensive security analysis and implement a system prototype on Hyperledger Fabric. Experimental results demonstrate that our scheme achieves excellent performance in terms of both storage efficiency and computational overhead.
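The key-distribution step described here relies on the Diffie–Hellman key exchange, which can be sketched as follows. This toy uses a small Mersenne prime modulus for illustration only; real deployments use standardized groups or elliptic curves, and the scheme's actual key-derivation details are not given in the abstract.

```python
import hashlib
import secrets

# Toy modulus (a Mersenne prime) and generator, for illustration only.
P = 2 ** 127 - 1
G = 3

def dh_keypair():
    priv = secrets.randbelow(P - 3) + 2
    return priv, pow(G, priv, P)

def dh_shared_key(priv, peer_pub):
    """Hash the DH shared secret down to a 256-bit symmetric key."""
    secret = pow(peer_pub, priv, P)
    return hashlib.sha256(secret.to_bytes(16, "big")).digest()

alice_priv, alice_pub = dh_keypair()
bob_priv, bob_pub = dh_keypair()

# Both sides derive the same key from each other's public values,
# so no pre-shared key needs to be distributed per user.
assert dh_shared_key(alice_priv, bob_pub) == dh_shared_key(bob_priv, alice_pub)
print("shared key established")
```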
Journal of Systems Architecture, Volume 171, Article 103647.
Cited by: 0
privXCA: An efficient and privacy-preserving auditing architecture for cross-chain transfers
IF 4.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-12-02 | DOI: 10.1016/j.sysarc.2025.103662
Jitao Wang, Changhao Wu, Yakun Chen, Weili Han
As cross-chain protocols mature, asset transfers across blockchains are becoming increasingly common, posing new auditing challenges. On the one hand, cross-chain transfers involve multiple single-chain transactions distributed across different blockchains, and collecting such multi-chain transaction data is cumbersome, which makes traditional query-based audit methods highly inefficient. On the other hand, revealing full transaction details during audits poses serious privacy risks. To date, there is still a lack of efficient and privacy-preserving auditing methods for cross-chain transfers.
This paper proposes privXCA, an efficient and privacy-preserving auditing architecture for cross-chain transfers. Built upon the prevalent burn-to-mint cross-chain transfer model, privXCA introduces a multistage audit proof mechanism for cross-chain transfers using zero-knowledge proof. Firstly, we design a single-chain transaction batch audit proof, allowing compliance verification without exposing transaction details. Secondly, we design a compliance audit method for batch cross-chain transfers, which associates the burn and mint single-chain transactions of multiple cross-chain transfers scattered on the source and the destination blockchains under encryption and outputs a concise batch audit proof. Thirdly, to achieve efficient time-range auditing for cross-chain transfers, we design a recursive audit proof for cross-chain transfers and construct a time-ordered audit state linked list composed of state nodes containing recursive audit proofs, which is stored on an independent audit blockchain. Additionally, we introduce a distributed audit proof generation network to accelerate proof updates. Experimental results show that privXCA audits all cross-chain transfers within any time range in a constant time of approximately 3 s. For 10,000 cross-chain transfers, privXCA is almost 800× faster than traditional query-based methods, which demonstrates the practicality of privXCA. Therefore, privXCA provides a practical and flexible approach for compliance regulation in cross-chain ecosystems.
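The idea of committing to burn and mint transactions and folding a batch into one digest can be sketched with hash commitments and a Merkle root. This omits the zero-knowledge machinery entirely and is not the privXCA construction; the commitment encoding and all names are assumptions.

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def commit(tx_id, amount, nonce):
    """Hash commitment to one single-chain transaction
    (hypothetical encoding: id || amount || nonce)."""
    return h(tx_id.encode() + amount.to_bytes(8, "big") + nonce)

def merkle_root(leaves):
    """Fold a list of leaf hashes into a single batch digest."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the odd leaf
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# A burn on the source chain and its matching mint on the destination
# chain share a nonce, so an auditor holding the openings can link the
# pair without the chains revealing amounts publicly.
nonce = bytes(16)
burn = commit("burn-tx-7", 500, nonce)
mint = commit("mint-tx-9", 500, nonce)
batch_digest = merkle_root([burn, mint])
print(batch_digest.hex()[:16])
```

In the paper, a zero-knowledge proof over such commitments is what lets the auditor check compliance without learning the transaction details.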
Journal of Systems Architecture, Volume 171, Article 103662.
Cited by: 0
Reconfigurable acceleration for database systems: Taxonomy, techniques, and research challenges
IF 4.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-12-01 | DOI: 10.1016/j.sysarc.2025.103659
Geetesh More, Suprio Ray, Kenneth B. Kent
Database query processing and optimization are critical components of modern database management systems (DBMS), enabling them to process user queries efficiently. In big data application scenarios, the movement of large volumes of data influences performance, power efficiency, and reliability, the three essential aspects of a computing system. Large-scale data centers require an exceptionally efficient server and storage infrastructure. The systems currently employed for managing and processing big data increasingly show inefficiency, in terms of both energy usage and scalability, primarily due to the constraints imposed by existing CPU architectures. A significant challenge in DBMSs is the growing disparity between processor speeds and memory access speeds, which results in notable performance bottlenecks.
This paper presents a comprehensive survey of reconfigurable acceleration in database systems, offering a structured taxonomy that categorizes existing work based on query types, integration models, and hardware/software co-design strategies. We examine key acceleration techniques across relational operators, indexing, join algorithms, and compression, highlighting their trade-offs in performance, scalability, and adaptability. Furthermore, we identify current limitations in programmability, data movement, and workload variability, and outline open research challenges including dynamic reconfiguration, hybrid architectures, and compiler support. This taxonomy-driven perspective aims to guide both researchers and practitioners in navigating the design space and pushing the boundaries of FPGA-accelerated data processing.
Journal of Systems Architecture, Volume 171, Article 103659.
Cited by: 0
An efficient RISC-V processor with customized instruction set for sparse DNN acceleration on embedded system
IF 4.1 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-29 | DOI: 10.1016/j.sysarc.2025.103649
Bowen Jiang, Jianyang Ding, Huachen Zhang, Tianshuo Lu, Zhilei Chai
With the rapid proliferation of AI-driven computing applications, there is an increasing demand for efficient inference on embedded systems. Sparse Deep Neural Networks (DNNs) have emerged as an effective solution to this challenge by reducing redundant computation and memory footprint. However, the practical deployment of sparse DNNs still faces multiple challenges, such as computing load imbalance and power consumption limitations. To address these issues, we propose RV-SpDNN, an efficient RISC-V processor that uses hardware–software co-design techniques to accelerate sparse DNN inference. Firstly, we design a set of custom instructions that enable efficient execution and flexible scheduling of sparse operations with minimal hardware overhead. Meanwhile, the sparse convolution operation under the super-im2col dataflow is transformed into a column-major sparse matrix multiplication with sparse coding, effectively exploiting data reuse and optimizing the computational workload. Furthermore, we design a configurable, tightly coupled acceleration module in the processor that integrates a SpConv array and a controller module. Through parallel computation unrolling and memory access optimization, this architecture further enhances overall computational efficiency. Finally, we implement RV-SpDNN using a 55 nm CMOS technology. Experimental results show that the proposed processor achieves a peak energy efficiency of 831 GOPS/W in sparse DNN inference tasks and provides a 3.97× performance improvement over the baseline.
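The column-major sparse matrix multiplication mentioned here corresponds to a compressed-sparse-column (CSC) style product, which skips zero entries entirely. A plain-Python sketch of the dataflow follows; it illustrates the arithmetic only, not the RV-SpDNN hardware or its custom instructions.

```python
def csc_from_dense(m):
    """Column-major (CSC) form: per-column nonzero values plus their
    row indices, with col_ptr marking each column's extent."""
    rows, cols = len(m), len(m[0])
    vals, row_idx, col_ptr = [], [], [0]
    for j in range(cols):
        for i in range(rows):
            if m[i][j] != 0:
                vals.append(m[i][j])
                row_idx.append(i)
        col_ptr.append(len(vals))
    return vals, row_idx, col_ptr, rows

def csc_matvec(vals, row_idx, col_ptr, rows, x):
    """y = M @ x, touching only the nonzero entries of M."""
    y = [0] * rows
    for j, xj in enumerate(x):
        if xj == 0:
            continue  # skip work for zero activations too
        for k in range(col_ptr[j], col_ptr[j + 1]):
            y[row_idx[k]] += vals[k] * xj
    return y

M = [[0, 2, 0],
     [1, 0, 0],
     [0, 0, 3]]
print(csc_matvec(*csc_from_dense(M), [1, 1, 1]))
```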
Journal of Systems Architecture, Volume 171, Article 103649.
Citations: 0
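The super-im2col transformation described in the abstract, convolution rewritten as a column-major sparse matrix multiplication, can be illustrated with a small sketch. This is hypothetical illustration code, not the paper's implementation: `im2col` unfolds a 2-D input into columns so convolution becomes a matrix product, and a CSC-encoded weight matrix is then multiplied against those columns while visiting only the nonzero weights.

```python
import numpy as np

def im2col(x, k):
    """Unfold a 2-D input into columns so that a k x k convolution
    becomes a single matrix multiplication."""
    H, W = x.shape
    out_h, out_w = H - k + 1, W - k + 1
    cols = np.empty((k * k, out_h * out_w))
    for i in range(out_h):
        for j in range(out_w):
            cols[:, i * out_w + j] = x[i:i + k, j:j + k].ravel()
    return cols

def sparse_matmul_colmajor(vals, rows, colptr, n_rows, cols):
    """Multiply a CSC-encoded sparse weight matrix by dense columns,
    visiting only the nonzero weights (zero multiplications are
    skipped, which is where the sparsity savings come from)."""
    out = np.zeros((n_rows, cols.shape[1]))
    for c in range(len(colptr) - 1):      # walk columns of the weights
        for idx in range(colptr[c], colptr[c + 1]):
            out[rows[idx]] += vals[idx] * cols[c]
    return out
```

With a 2×4 weight matrix holding three nonzeros and a 4×4 input with k=2, the sparse result matches the dense product `W @ im2col(x, 2)` while touching only the nonzero entries.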
Stegano-ECC: Enhancing DNN fault tolerance with embedded parity for important bits
IF 4.1 CAS Tier 2, Computer Science Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-11-28 DOI: 10.1016/j.sysarc.2025.103651
Min Jun Jo, Young Seo Lee
Recently, there has been increasing adoption of deep neural network (DNN) inference tasks directly on edge devices. These DNN inference tasks face critical reliability challenges due to increased DRAM errors caused by harsh operating conditions and resource constraints in edge environments. Though conventional error correction codes (ECCs) mitigate DRAM errors by exploiting parity bits, they incur substantial storage overhead, making them challenging to deploy on resource-constrained edge devices.
In this paper, we propose Stegano-ECC, a novel error protection scheme for DNN inference on edge devices, which provides strong DNN fault tolerance against DRAM errors without storage overhead. Stegano-ECC selectively applies single error correction (SEC) codes only to the important bits of DNN weights (those with a significant impact on DNN inference accuracy), improving fault tolerance during DNN inference. It embeds the parity bits of the SEC codes within the relatively less important bits of the weights, which avoids any storage overhead while minimizing DNN accuracy degradation. Our evaluation results show that Stegano-ECC improves fault tolerance by up to 500000× over conventional systems and up to 27778× over the state-of-the-art error protection technique for edge environments in FP32 format (up to 2000000× in FP16 and 10.0× in INT8 format).
Journal of Systems Architecture, Vol. 171, Article 103651.
Citations: 0
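The embedding idea behind Stegano-ECC, parity for the important bits hidden inside the less important bits of the same word, can be sketched with a toy Hamming(7,4) code over an 8-bit weight. This is an illustrative sketch under stated assumptions, not the paper's actual scheme: the 4 MSBs are treated as the important bits, the 3 parity bits overwrite bits 3..1, and bit 0 is sacrificed, so any single flip among the protected data or parity bits is correctable with zero extra storage.

```python
def encode(weight):
    """Protect the 4 MSBs of an 8-bit weight with Hamming(7,4) parity
    embedded in bits 3..1 of the same byte (bit 0 is zeroed)."""
    d = [(weight >> (4 + i)) & 1 for i in range(4)]  # d0..d3 = bits 4..7
    p0 = d[0] ^ d[1] ^ d[3]
    p1 = d[0] ^ d[2] ^ d[3]
    p2 = d[1] ^ d[2] ^ d[3]
    return (weight & 0xF0) | (p0 << 3) | (p1 << 2) | (p2 << 1)

def decode(word):
    """Return the corrected upper nibble after fixing at most one
    flipped bit among the protected data or embedded parity bits."""
    d = [(word >> (4 + i)) & 1 for i in range(4)]
    p = [(word >> 3) & 1, (word >> 2) & 1, (word >> 1) & 1]
    s0 = p[0] ^ d[0] ^ d[1] ^ d[3]
    s1 = p[1] ^ d[0] ^ d[2] ^ d[3]
    s2 = p[2] ^ d[1] ^ d[2] ^ d[3]
    syndrome = (s2 << 2) | (s1 << 1) | s0
    # Hamming(7,4) codeword positions 1..7 = p0, p1, d0, p2, d1, d2, d3;
    # a syndrome pointing at a data position flips that bit back.
    data_pos = {3: 0, 5: 1, 6: 2, 7: 3}
    if syndrome in data_pos:
        d[data_pos[syndrome]] ^= 1
    return sum(bit << (4 + i) for i, bit in enumerate(d))
```

The trade-off mirrors the paper's premise: the low bits lose precision, but the high bits (which dominate inference accuracy) survive any single DRAM bit flip.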
Federated distillation in heterogeneous systems: A direction-oriented survey on communication-efficient knowledge transfer
IF 4.1 CAS Tier 2, Computer Science Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-11-27 DOI: 10.1016/j.sysarc.2025.103648
Fangfang Shan, Lulu Fan, Yuhang Liu, Yifan Mao, Zhuo Chen, Shuaifeng Li
With the rapid advancement of deep learning and privacy-preserving computation, Federated Distillation (FD) — an emerging paradigm integrating Federated Learning (FL) and Knowledge Distillation (KD) — has become a key approach to addressing challenges such as data silos, model heterogeneity, and high communication overhead. This paper presents, for the first time, a direction-oriented classification framework that systematically categorizes FD methods based on the flow of knowledge, namely Client-to-Server (C2S), Server-to-Client (S2C), Client-to-Client (C2C), and Bidirectional Distillation (BiD), where "knowledge flow" refers to the transmission direction of distilled information between participants, defining how models exchange and update soft knowledge within the federated network. Unlike traditional classifications that focus on communication efficiency, data dependency, or privacy mechanisms, the proposed distillation-direction perspective directly captures the intrinsic mechanism of how knowledge is transferred across heterogeneous and distributed systems. This provides a unified analytical lens for understanding and comparing FD paradigms in terms of knowledge topology, collaboration adaptability, and communication efficiency. Furthermore, the paper comprehensively reviews representative FD methods within each category, highlights their technical distinctions and optimization objectives, and discusses their applicability to real-world domains such as healthcare, finance, and intelligent manufacturing. Finally, emerging research directions, including adaptive direction switching, decentralized collaboration, and multimodal distillation, are explored to guide the evolution of FD toward more intelligent, secure, and systematic collaboration frameworks.
Journal of Systems Architecture, Vol. 171, Article 103648.
Citations: 0
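Of the four knowledge-flow directions, the Client-to-Server (C2S) case is the easiest to make concrete: each client sends temperature-softened predictions rather than model weights, the server averages them into an ensemble soft target, and clients later distill against that target with a KL loss. A minimal numpy sketch of that flow (all function names are hypothetical, not from any surveyed method):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields softer targets,
    which is what distillation exchanges instead of raw weights."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q), the usual distillation loss between soft targets."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def c2s_aggregate(client_logits, T=2.0):
    """Client-to-Server step: average the clients' softened
    predictions into a single ensemble soft target."""
    soft = np.stack([softmax(l, T) for l in client_logits])
    return soft.mean(axis=0)
```

Communication cost scales with the number of classes rather than the number of parameters, which is the efficiency argument the survey's C2S category rests on.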
WASCTL: A Wasserstein adversarial framework for safe controllers transfer learning
IF 4.1 CAS Tier 2, Computer Science Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-11-26 DOI: 10.1016/j.sysarc.2025.103626
Yang Li, Rui Guo, Kai Yu, Wang Lin
This paper proposes a Wasserstein Adversarial approach for Safe Controller Transfer Learning (WASCTL) to synthesize safe controllers for dynamic systems. When a system or environment undergoes slight changes, manually recomputing Control Barrier Certificates (CBCs) and controllers typically incurs high computational costs. Existing neural-network-based methods often rely on deep neural networks, limiting their applicability in resource-constrained environments; WASCTL overcomes these limitations. WASCTL comprises two main components: a discriminator and a generator. The discriminator estimates the Wasserstein distance between the state distributions of the source and target systems and uses this distance as a feedback signal to guide the training of the generator. The generator employs a multi-objective loss function to learn a safe control policy for the target system while ensuring the theoretical correctness of the controller. Experimental results demonstrate that WASCTL enables effective safe controller transfer and reduces reliance on deep neural networks, thereby offering a feasible solution to the high computational cost associated with system changes.
Journal of Systems Architecture, Vol. 171, Article 103626.
Citations: 0
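WASCTL's discriminator estimates the Wasserstein distance between source and target state distributions and feeds it back to the generator. In one dimension the empirical Wasserstein-1 distance has a closed form (the mean gap between matched order statistics), which makes the feedback signal easy to illustrate without a neural discriminator. The sketch below is a toy under that 1-D assumption, not the paper's method: a scalar shift plays the role of the generator, and the shift minimizing W1 is the median gap between matched order statistics.

```python
import numpy as np

def wasserstein1_1d(x, y):
    """Empirical Wasserstein-1 distance between two equal-size 1-D
    samples: the average gap between their order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    if x.shape != y.shape:
        raise ValueError("this sketch assumes equal-size samples")
    return float(np.mean(np.abs(x - y)))

def best_shift(source_states, target_states):
    """Toy 'generator' update: the scalar shift minimizing W1 between
    the shifted source states and the target states is the median of
    the gaps between matched order statistics (an L1 location fit)."""
    x = np.sort(np.asarray(source_states, dtype=float))
    y = np.sort(np.asarray(target_states, dtype=float))
    return float(np.median(y - x))
```

In the full framework the generator is a parametric policy and the distance is estimated adversarially; the toy keeps only the feedback structure: compute the distance, adjust the generator to shrink it.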