
Journal of Systems Architecture: Latest Articles

When fixes teach: Repair-aware contrastive learning for optimization-resilient binary vulnerability detection
IF 4.1 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-28 | DOI: 10.1016/j.sysarc.2026.103722
Zhenzhou Tian , Jiale Zhao , Ming Fan , Jiaze Sun , Yanping Chen , Lingwei Chen
Deep learning (DL)-based vulnerability detection in source code is prevalent, yet detecting vulnerabilities in binary code using this paradigm remains underexplored. The few existing works typically treat input instructions as individual entities, failing to extract and leverage fine-grained information because they cannot account for the inherent connections and correlations between code segments or for the impact of compilation optimizations. To address these challenges, this paper proposes Delta, a novel approach that incorporates Dynamic contrastive lEarning with vuLnerabiliTy repair Awareness to fine-tune pre-trained models, significantly enhancing the accuracy and efficiency of vulnerability detection in binary code. Delta standardizes assembly instructions and uses, as contrastive learning samples, function pairs representing code before and after vulnerability repair, along with their versions compiled under different optimization settings. Building on these rich and diverse training signals, Delta fine-tunes CodeBERT using contrastive learning augmented with masked language modeling, yielding a feature encoder, CMBERT, that is adept at capturing nuanced vulnerability patterns in binary code and remains resilient to the impacts of compilation optimizations. Delta is evaluated on the Juliet Test Suite dataset, achieving an average improvement of 8.04% in detection accuracy and 7.13% in F1 score over alternative methods.
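The repair-aware pairing idea can be sketched in a few lines. The rule below is a hypothetical reading of the abstract, not Delta's implementation: the function-map layout, the 'vuln'/'fixed' labels, and the optimization-level keys are all assumptions. Versions of the same function compiled at different optimization levels become positives (the encoder should map them close together), while pre-fix and post-fix versions of the same function become negatives (the repair changed the label).

```python
from itertools import combinations

def build_contrastive_pairs(functions):
    """Build contrastive samples from repair-aware binaries.

    `functions` maps (func_id, state) -> {opt_level: normalized_asm},
    where state is 'vuln' (pre-fix) or 'fixed' (post-fix).
    Positives: the same function/state compiled at different optimization
    levels. Negatives: pre-fix vs. post-fix versions of the same function.
    """
    positives, negatives = [], []
    for (fid, state), variants in functions.items():
        # every pair of optimization levels of the same function/state
        for (o1, a1), (o2, a2) in combinations(sorted(variants.items()), 2):
            positives.append((a1, a2))
    for (fid, state), variants in functions.items():
        if state != "vuln":
            continue
        fixed = functions.get((fid, "fixed"), {})
        # pair each vulnerable variant with its repaired counterpart
        for opt, asm in variants.items():
            if opt in fixed:
                negatives.append((asm, fixed[opt]))
    return positives, negatives
```

The resulting pairs would then feed a standard contrastive objective (e.g., an InfoNCE-style loss) during fine-tuning.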
Journal of Systems Architecture, Volume 173, Article 103722.
Citations: 0
Comments on “Contention-aware workflow scheduling on heterogeneous computing systems with shared buses”
IF 4.1 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-27 | DOI: 10.1016/j.sysarc.2026.103700
Rajesh Devaraj
Heterogeneous computing systems (HCSs) use different types of processors to balance performance and efficiency for complex applications like workflows. These processors often share a communication bus. When multiple parts of an application try to send data at the same time, this shared bus becomes congested, causing delays. Despite this being a common problem, few studies have looked at how to handle this communication bottleneck. To solve this, a new method called Contention-Aware Clustering-based List scheduling (CACL) is proposed. The objective of CACL is to minimize the overall schedule length for an input workflow application, modeled as a Directed Acyclic Graph (DAG), executed on an HCS interconnected via shared communication buses. While solving this problem, CACL first assigns priorities to task nodes. However, this priority assignment may occasionally lead to situations where a task is erroneously assigned a higher priority than one or more of its predecessor tasks in the task graph. Since tasks are selected for processor assignment in order of priority, this subsequently violates the precedence relationships between tasks. In this comment, we present a counterexample to highlight the design flaw in the task prioritization scheme and discuss possible ways to fix it.
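The flaw is easy to exhibit with a toy checker. The four-task DAG and the priority values below are hypothetical, constructed only to show the failure mode, and are not the paper's actual counterexample:

```python
def precedence_violations(edges, priority):
    """Return (pred, succ) pairs where a successor was assigned a
    priority at least as high as one of its predecessors. Tasks are
    picked for processor assignment in decreasing priority, so any
    such pair means a task could be scheduled before one it depends on.
    """
    return [(u, v) for (u, v) in edges if priority[v] >= priority[u]]

# Hypothetical 4-task DAG: t1 -> t2 -> t4 and t1 -> t3 -> t4.
edges = [("t1", "t2"), ("t1", "t3"), ("t2", "t4"), ("t3", "t4")]
# A flawed ranking that scores t3 above its own predecessor t1.
priority = {"t1": 8, "t2": 6, "t3": 9, "t4": 2}

violations = precedence_violations(edges, priority)  # [("t1", "t3")]
```

A sound prioritization scheme would guarantee this list is always empty, e.g., by enforcing that each task's priority is strictly below the minimum priority of its predecessors.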
Journal of Systems Architecture, Volume 173, Article 103700.
Citations: 0
EDF-VD-based energy efficient scheduling for imprecise mixed-criticality task with resource synchronization
IF 4.1 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-27 | DOI: 10.1016/j.sysarc.2026.103725
Yi-Wen Zhang, Quan-Huang Zhang
Prior work on mixed-criticality scheduling with resource synchronization based on Earliest Deadline First with Virtual Deadlines (EDF-VD) immediately abandons all low-criticality (LO) tasks when the system enters high-criticality (HI) mode, which is unreasonable in practical systems. In this paper, we address the scheduling problem of the imprecise mixed-criticality task model with shared resources, in which LO tasks continue to execute with a reduced time budget in HI mode. We then propose a new resource access protocol called IMC-SRP and outline some of its properties. Moreover, we present sufficient conditions for the schedulability analysis of IMC-SRP. To save energy, we propose a new algorithm called EAS-IMC-SRP. Finally, we use synthetic task sets to evaluate the proposed algorithm.
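A minimal sketch of per-mode task parameters under an imprecise EDF-VD scheme, assuming a simple dictionary task model; the `degrade` fraction and the exact budget rule are illustrative assumptions, not the IMC-SRP protocol from the paper:

```python
def mode_parameters(task, x, degrade=0.5):
    """Per-mode (budget, relative deadline) under imprecise EDF-VD.

    `task` has keys crit ('HI'/'LO'), C_lo, C_hi, D. `x` is the
    EDF-VD deadline-scaling factor (0 < x <= 1). HI tasks run against
    a shrunk virtual deadline x*D in LO mode and their true deadline
    in HI mode. LO tasks are NOT dropped in HI mode: they keep a
    reduced budget (`degrade` fraction of C_lo, an assumed rule).
    """
    if task["crit"] == "HI":
        lo = (task["C_lo"], x * task["D"])   # virtual deadline in LO mode
        hi = (task["C_hi"], task["D"])       # true deadline after mode switch
    else:
        lo = (task["C_lo"], task["D"])
        hi = (degrade * task["C_lo"], task["D"])  # imprecise: reduced, not dropped
    return {"LO": lo, "HI": hi}
```

This contrasts with classic EDF-VD, where the LO branch in HI mode would simply be `(0, D)`, i.e., the task is abandoned.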
Journal of Systems Architecture, Volume 173, Article 103725.
Citations: 0
Privacy-preserving access control and trust management for multi-authority in IoMT systems
IF 4.1 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-24 | DOI: 10.1016/j.sysarc.2026.103719
Chenlu Xie, Xiaolin Gui
Internet of Medical Things (IoMT) devices generate huge volumes of real-time data daily, which medical practitioners can analyze to optimize diagnosis and treatment. Due to the complexity of devices and users in IoMT systems, robust measures are needed to ensure security and the quality of service and information. However, most existing schemes require a trusted central authority to generate secret keys for users, which is often impractical in real-world scenarios. Although many multi-authority access control schemes have been proposed to address this issue, they still lack strong defense and supervision mechanisms to effectively regulate user access. In this paper, we propose a privacy-preserving multi-authority access control scheme that enables policy hiding and efficiently prevents malicious access attacks. Specifically, multiple untrusted authorities independently generate attribute keys through secure two-party computation and zero-knowledge proofs; even if multiple authorities collude, they cannot trace secret keys. Furthermore, the scheme enhances privacy by breaking the mapping between attributes and the access matrix. Moreover, we construct a dynamic access control mechanism based on trust management, which effectively curbs persistent access attacks by malicious data users. Our security analysis and experimental results show that the scheme achieves semantic security, resists collusion attacks, and constrains the malicious behavior of data users with minimal online encryption computational cost compared to other schemes.
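The trust-management idea, in which repeated malicious requests rapidly erode a user's access rights, can be sketched as follows. The multiplicative penalty, additive recovery, and threshold values are illustrative assumptions, not the paper's mechanism:

```python
class TrustManager:
    """Toy dynamic access control driven by a per-user trust score.

    Malicious behavior decays trust multiplicatively (fast), honest
    behavior recovers it additively (slow); below the threshold,
    access is denied. All constants are assumed for illustration.
    """

    def __init__(self, threshold=0.5, penalty=0.6, reward=0.05):
        self.trust = {}                      # user -> score, default 1.0
        self.threshold = threshold
        self.penalty = penalty
        self.reward = reward

    def allowed(self, user):
        return self.trust.get(user, 1.0) >= self.threshold

    def report(self, user, malicious):
        t = self.trust.get(user, 1.0)
        t = t * self.penalty if malicious else min(1.0, t + self.reward)
        self.trust[user] = t
```

The asymmetry (fast decay, slow recovery) is what makes persistent attacks unprofitable: two flagged requests already drop a fresh user below the threshold, while recovery takes many honest interactions.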
Journal of Systems Architecture, Volume 173, Article 103719.
Citations: 0
A visual–tactile fusion system for terrain perception under varying illumination conditions
IF 4.1 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-24 | DOI: 10.1016/j.sysarc.2026.103698
Rui Wang , Shichun Yang , Yuyi Chen , Zhuoyang Li , Jiayi Lu , Zexiang Tong , Jianyi Xu , Bin Sun , Xinjie Feng , Yaoguang Cao
Road terrain conditions are vital for ensuring the driving safety of autonomous vehicles (AVs). However, traditional sensors like cameras and LiDARs are sensitive to changes in lighting and weather, posing challenges for real-time road condition perception. In this paper, we propose an illumination-aware visual–tactile fusion system (IVTF) for terrain perception, integrating visual and tactile data while optimizing the fusion process based on illumination characteristics. The system employs a camera and an intelligent tire to capture visual and tactile data across various lighting conditions and vehicle speeds. Additionally, we design a visual–tactile fusion module that dynamically adjusts the weights of the different modalities according to illumination features. Comparative results against single-modality perception methods demonstrate the superior ability of visual–tactile fusion to accurately perceive road terrain under diverse lighting conditions. This approach significantly advances the robustness and reliability of terrain perception in AVs, contributing to enhanced driving safety.
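A minimal sketch of illumination-dependent modality weighting: visual confidence ramps down toward the brightness extremes (darkness or glare), shifting weight to the tactile channel, whose tire signal is unaffected by light. The thresholds and the linear ramp are assumptions for illustration; IVTF's actual fusion module is learned.

```python
def fusion_weights(brightness, low=40, high=220):
    """Return (w_visual, w_tactile) for a mean image brightness in [0, 255].

    Visual confidence is 1.0 mid-range and falls linearly to 0.0 at the
    assumed `low`/`high` extremes; both modalities always keep a floor
    weight so neither channel is ever discarded outright.
    """
    if brightness <= low or brightness >= high:
        conf = 0.0
    else:
        mid = (low + high) / 2
        half = (high - low) / 2
        conf = 1.0 - abs(brightness - mid) / half
    w_visual = 0.2 + 0.6 * conf   # floor of 0.2, ceiling of 0.8
    return w_visual, 1.0 - w_visual
```

The fused terrain estimate would then be a convex combination of the two per-modality predictions using these weights.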
Journal of Systems Architecture, Volume 174, Article 103698.
Citations: 0
HPA: Manipulating deep reinforcement learning via adversarial interaction
IF 4.1 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-22 | DOI: 10.1016/j.sysarc.2026.103685
Kanghua Mo , Zhengxin Zhang , Yuanzhi Zhang , Yucheng Long , Zhengdao Li
Recent studies have demonstrated that policy manipulation attacks on deep reinforcement learning (DRL) systems can lead to the learning of abnormal policies by victim agents. However, existing work typically assumes that the attacker can manipulate multiple components of the training process, such as reward functions, environment dynamics, or state information. In IoT-enabled smart societies, where AI-driven systems operate in interconnected and data-sensitive environments, such assumptions raise serious concerns regarding security and privacy. This paper investigates a novel policy manipulation attack in competitive multi-agent reinforcement learning under significantly weaker assumptions, where the attacker only requires access to the victim’s training settings and, in some cases, the learned policy outputs during training. We propose the honeypot policy attack (HPA), in which an adversarial agent induces the victim to learn an attacker-specified target policy by deliberately taking suboptimal actions. To this end, we introduce a honeypot reward estimation mechanism that quantifies the amount of reward sacrifice required by the adversarial agent to influence the victim’s learning process, and adapts this sacrifice according to the degree of policy manipulation. Extensive experiments on three representative competitive games demonstrate that HPA is both effective and stealthy, exposing previously unexplored vulnerabilities in DRL-based systems deployed in IoT-driven smart environments. To the best of our knowledge, this work presents the first policy manipulation attack that does not rely on explicit tampering with internal components of DRL systems, but instead operates solely through admissible adversarial interactions, offering new insights into security challenges faced by emerging AIoT ecosystems.
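The reward-sacrifice trade-off at the heart of HPA can be caricatured in one function: the adversary deviates from its best response only when the value given up stays within a sacrifice budget. The names and decision rule are assumptions for illustration, not the paper's estimation mechanism:

```python
def honeypot_action(q_values, target_action, sacrifice_budget):
    """Choose the adversarial agent's action for one step.

    `q_values` are the adversary's estimated action values; playing
    `target_action` (which steers the victim toward the attacker's
    target policy) is chosen only if the value given up relative to
    the best response stays within `sacrifice_budget`. Otherwise the
    adversary plays its best response to limit suspicion and cost.
    """
    best = max(range(len(q_values)), key=q_values.__getitem__)
    loss = q_values[best] - q_values[target_action]  # reward sacrificed
    return target_action if loss <= sacrifice_budget else best
```

In the paper's terms, the budget would be adapted online by the honeypot reward estimation mechanism according to how far the victim's policy still is from the attacker-specified target.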
Journal of Systems Architecture, Volume 173, Article 103685.
Citations: 0
GAS: A scheduling primitive dependency analysis-based cost model for tensor program optimization
IF 4.1 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-21 | DOI: 10.1016/j.sysarc.2026.103721
Yonghua Hu , Anxing Xie , Yaohua Wang , Zhe Li , Zenghua Cheng , Junyang Tang
Automatically generating high-performance tensor programs has become a promising approach for deploying deep neural networks. A key challenge lies in designing an effective cost model to navigate the vast scheduling search space. Existing approaches typically fall into two categories, each with limitations: offline learning cost models rely on large pre-collected datasets, which may be incomplete or device-specific, and online learning cost models depend on handcrafted features, requiring substantial manual effort and expertise.
We propose GAS, a lightweight framework for generating tensor programs for deep learning applications. GAS reformulates feature extraction as a sequence-dependent analysis of scheduling primitives. Our cost model integrates three key factors to uncover performance-critical insights within scheduling sequences: (1) decision factors allocation, quantifying entropy and skewness of scheduling primitive factors to capture their dominance; (2) primitive contribution weights, measuring the relative impact of primitives on overall performance; and (3) structural semantic alignment, capturing correlations between scheduling primitive factors and hardware parallelism mechanisms. This approach reduces the complexity of handcrafted feature engineering and extensive pre-training datasets, significantly improving both efficiency and scalability. Experimental results on NVIDIA GPUs demonstrate that GAS achieves average speedups of 3.79× over AMOS and 2.22× over Ansor, while also consistently outperforming other state-of-the-art tensor compilers.
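The first decision-factor statistic, entropy and skewness over a primitive's factor distribution, can be computed with the standard formulas. This stdlib sketch assumes positive numeric tiling factors and Fisher's population skewness; GAS's exact featurization may differ:

```python
import math

def entropy_and_skew(factors):
    """Entropy and skewness of a scheduling primitive's factor allocation.

    Entropy (natural log) over the normalized factor distribution
    measures how evenly work is split; skewness captures whether a
    few factors dominate. Assumes `factors` are positive numbers.
    """
    n = len(factors)
    total = sum(factors)
    probs = [f / total for f in factors]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    mean = total / n
    var = sum((f - mean) ** 2 for f in factors) / n
    sd = math.sqrt(var)
    skew = sum((f - mean) ** 3 for f in factors) / (n * sd ** 3) if sd else 0.0
    return entropy, skew
```

A uniform split like [4, 4, 4, 4] gives maximal entropy and zero skew, while a lopsided split like [1, 1, 8] gives positive skew, signaling that one factor dominates the primitive's behavior.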
Journal of Systems Architecture, Volume 173, Article 103721.
Citations: 0
EdgeTrust-Shard: Hierarchical blockchain architecture for federated learning in cross-chain IoT ecosystems
IF 4.1 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2026-01-21 | DOI: 10.1016/j.sysarc.2026.103701
Tuan-Dung Tran, Phuong-Dai Bui, Van-Hau Pham
Enabling AI-driven real-time distributed computing on the edge-cloud continuum requires overcoming a critical dependability challenge: resource-constrained IoT devices cannot participate in Byzantine-resilient federated learning due to a 1940-fold memory gap, with robust aggregation methods demanding 512MB–2GB while microcontrollers offer only 264KB SRAM. We present EdgeTrust-Shard, a novel system architecture designed for dependability, security, and scalability in edge AI. It enables real-time Byzantine-resilient federated learning on commodity microcontrollers by distributing computational complexity across the network topology. The framework’s contributions include optimal M=N clustering for O(N) communication, a Multi-Factor Proof-of-Performance consensus mechanism providing quadratic Byzantine suppression with proven O(T1/2) convergence, and platform-optimized cryptography delivering a 3.4-fold speedup for real-time processing. A case study using a hybrid physical-simulation deployment demonstrates the system’s efficacy, achieving 93.9–94.7% accuracy across Byzantine attack scenarios at 30% adversary presence within a 140KB memory footprint on Raspberry Pi Pico nodes. By outperforming adapted state-of-the-art blockchain-FL systems like FedChain and BlockFL by up to 9.3 percentage points, EdgeTrust-Shard provides a critical security enhancement for the edge-cloud continuum, transforming passive IoT data sources into dependable participants in distributed trust computations for next-generation applications such as smart cities and industrial automation.
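The M = sqrt(N) clustering claim can be checked with simple message counting, assuming one upload per device to its cluster head plus one per head to the aggregator (a simplification of the paper's hierarchy):

```python
import math

def messages(n_devices):
    """Messages per round: two-tier sqrt(N) clustering vs. all-to-all.

    Hierarchical: each device reports to its cluster head (N messages),
    then each of the M = sqrt(N) heads reports upward (M messages),
    so the total is N + sqrt(N) = O(N). The all-to-all baseline, where
    every device gossips to every other, costs N*(N-1) = O(N^2).
    """
    m = math.isqrt(n_devices)                  # number of cluster heads
    hierarchical = n_devices + m
    all_to_all = n_devices * (n_devices - 1)
    return hierarchical, all_to_all
```

For 100 devices this gives 110 messages versus 9900 for the all-to-all baseline, which is the linear-versus-quadratic gap the architecture exploits.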
Journal of Systems Architecture, Vol. 173, Article 103701 (2026).
Citations: 0
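The M = √N hierarchical sharding claimed in the EdgeTrust-Shard abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function names and the round-robin cluster assignment are assumptions made here purely to show why the message count per round stays O(N).

```python
import math

def shard_clusters(node_ids):
    """Partition N nodes into M = round(sqrt(N)) clusters.

    Hierarchical sharding: each node reports only to its cluster head,
    and cluster heads report to the global aggregator, so messages per
    round total N + M = O(N) instead of O(N^2) for flat all-to-all
    exchange among N nodes.
    """
    n = len(node_ids)
    m = max(1, round(math.sqrt(n)))
    # round-robin assignment keeps cluster sizes within one of each other
    return [node_ids[i::m] for i in range(m)]

def messages_per_round(clusters):
    # one upload per node to its cluster head, plus one per head to the server
    return sum(len(c) for c in clusters) + len(clusters)
```

For N = 100 nodes this yields M = 10 clusters and 110 messages per round, linear in N rather than the 9,900 directed messages of a flat all-to-all exchange.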
GenMClass: Design and comparative analysis of genome classifier-on-chip platform
IF 4.1 · CAS Zone 2, Computer Science · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-01-17 · DOI: 10.1016/j.sysarc.2026.103702
Daria Bromot , Yehuda Kra , Zuher Jahshan , Esteban Garzón , Adam Teman , Leonid Yavits
We propose GenMClass, a genome classification system-on-chip (SoC) implementing two different classification approaches and comprising two separate classification engines: GenDNN, a DNN accelerator that classifies DNA reads converted to images using a classification neural network, and ETCAM, a similarity-search-capable Error Tolerant Content Addressable Memory that classifies genomes by k-mer matching. Classification operations are controlled by an embedded RISC-V processor. The GenMClass classification platform was designed and manufactured in a commercial 65 nm process. We conduct a comparative analysis of ETCAM and GenDNN classification efficiency, as well as their performance, silicon area, and power consumption, using silicon measurements. The GenMClass SoC occupies 3.4 mm², and its total power consumption (assuming GenDNN and ETCAM perform classification at the same time) is 144 mW.
Journal of Systems Architecture, Vol. 173, Article 103702 (2026).
Citations: 0
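The k-mer matching that GenMClass's ETCAM engine performs in hardware can be mimicked in a few lines of software. The sketch below is an illustrative assumption (fixed k, exact-match overlap scoring, made-up reference names), not the chip's error-tolerant similarity search, but it shows the classification principle: assign a read to whichever reference genome shares the most k-mers with it.

```python
def kmers(seq, k=4):
    """Return the set of length-k substrings (k-mers) of a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def classify_read(read, references, k=4):
    """Assign a read to the reference whose k-mer set it overlaps most.

    `references` maps a genome name to its sequence; scoring is the size
    of the intersection between the read's k-mers and each reference's.
    """
    read_set = kmers(read, k)
    scores = {name: len(read_set & kmers(ref, k))
              for name, ref in references.items()}
    return max(scores, key=scores.get)
```

For example, with references `{"phage": "ACGTACGTACGT", "host": "TTTTGGGGCCCC"}`, the read `"ACGTACG"` shares all four of its 4-mers with `phage` and none with `host`, so it is classified as `phage`.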
AIMD: AI-powered android malware detection for securing AIoT devices and networks using graph embedding and ensemble learning
IF 4.1 · CAS Zone 2, Computer Science · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2026-01-16 · DOI: 10.1016/j.sysarc.2026.103707
Santosh K. Smmarwar , Rahul Priyadarshi , Pratik Angaitkar , Subodh Mishra , Rajkumar Singh Rathore
The rapid evolution of the Artificial Intelligence of Things (AIoT) is accelerating the development of smart societies, where interconnected consumer electronics such as smartphones, IoT devices, smart meters, and surveillance systems play a crucial role in optimizing operational efficiency and service delivery. However, this hyper-connected digital ecosystem is increasingly vulnerable to sophisticated Android malware attacks that exploit system weaknesses, disrupt services, and compromise data privacy and integrity. These malware variants leverage advanced evasion techniques, including permission abuse, dynamic runtime manipulation, and memory-based obfuscation, rendering traditional detection methods ineffective. The key challenges in securing AIoT-driven smart societies include managing high-dimensional feature spaces, detecting dynamically evolving malware behaviours, and ensuring real-time classification performance. To address these issues, this paper proposes an AI-powered Android Malware Detection (AIMD) framework designed for AIoT-enabled smart-society environments. The framework extracts multi-level features (permissions, intents, API calls, and obfuscated memory patterns) from Android APK files and employs graph embedding techniques (DeepWalk and Node2Vec) for dimensionality reduction. Feature selection is optimized using the Red Deer Algorithm (RDA), a metaheuristic approach, while classification is performed through an ensemble of machine learning models (Support Vector Machine, Decision Tree, Random Forest, Extra Trees) enhanced by bagging, boosting, stacking, and soft-voting techniques. Experimental evaluations on the CICInvesAndMal2019 and CICMalMem2022 datasets demonstrate the effectiveness of the proposed system, achieving malware detection accuracies of 98.78% and 99.99%, respectively. By integrating AI-driven malware detection into AIoT infrastructures, this research advances cybersecurity resilience, safeguarding smart societies against emerging threats in an increasingly connected world.
Journal of Systems Architecture, Vol. 173, Article 103707 (2026).
Citations: 0
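Of the ensemble techniques the AIMD abstract lists, soft voting is the easiest to show concretely: average the per-class probability vectors emitted by several base classifiers and pick the arg-max class. The sketch below is a generic illustration of that rule under stated assumptions (plain lists of probabilities, optional weights), not the paper's pipeline, which combines it with bagging, boosting, and stacking over SVM, Decision Tree, Random Forest, and Extra Trees models.

```python
def soft_vote(prob_vectors, weights=None):
    """Soft voting: average per-class probability vectors from several
    classifiers (optionally weighted) and return the arg-max class index.

    `prob_vectors` is a list of equal-length probability lists, one per
    base classifier, e.g. [P(benign), P(malware)] for a binary detector.
    """
    if weights is None:
        weights = [1.0 / len(prob_vectors)] * len(prob_vectors)
    n_classes = len(prob_vectors[0])
    avg = [sum(w * p[c] for w, p in zip(weights, prob_vectors))
           for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c])
```

With three detectors reporting [0.6, 0.4], [0.3, 0.7], and [0.2, 0.8] for (benign, malware), the averaged vector is roughly [0.37, 0.63], so the ensemble flags the sample as malware even though one base model disagreed.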