
Latest publications from the Journal of Systems Architecture

An architecture-adaptive optimization strategy for high-performance SYMV on a heterogeneous AI accelerator
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2026-06-01. Epub Date: 2026-02-09. DOI: 10.1016/j.sysarc.2026.103728
Hao Jiang , Lu Lu , Zhihong Liang
Emerging AI accelerators offer strong compute density for HPC workloads, but decoupled execution engines and software-managed memory systems complicate performance portability. This paper studies the memory-bound SYmmetric Matrix–Vector multiplication (SYMV) kernel on Huawei Ascend A2, a heterogeneous architecture with disjoint Cube (AIC) and Vector (AIV) engines. We propose an architecture-adaptive mapping that (i) assigns off-diagonal dense tiles to AIC while keeping diagonal/finalization on AIV, (ii) orchestrates cross-engine execution with a three-stage software pipeline to overlap DMA, compute, and synchronization, and (iii) reduces off-chip matrix-read traffic via symmetry-aware traversal under triangular storage, together with a transpose-free diagonal-tile strategy on AIV. On Ascend A2, the proposed kernel achieves a consistent 1.3×–1.6× speedup over the vendor matmul_gemv baseline, and we provide cross-platform context against cuBLAS (A100) and rocBLAS (MI210).
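The symmetry-aware traversal under triangular storage can be illustrated in plain NumPy (a minimal sketch of the general idea, not the Ascend kernel): with only the lower triangle stored, each off-diagonal element is loaded once but contributes to two output entries.

```python
import numpy as np

def symv_lower(A_lower, x):
    """SYMV y = A @ x reading only the lower triangle of A.

    Each off-diagonal element a_ij (i > j) is loaded once but contributes
    twice: to y[i] via x[j] and, by symmetry, to y[j] via x[i].
    """
    n = x.shape[0]
    y = np.zeros(n, dtype=A_lower.dtype)
    for i in range(n):
        y[i] += A_lower[i, i] * x[i]      # diagonal element
        for j in range(i):                # strictly lower part
            a = A_lower[i, j]             # single load of a_ij
            y[i] += a * x[j]
            y[j] += a * x[i]              # symmetric contribution
    return y

# Check against an explicitly formed dense symmetric matrix
rng = np.random.default_rng(0)
L = np.tril(rng.standard_normal((5, 5)))
A = L + np.tril(L, -1).T                  # symmetric, lower triangle stored in L
x = rng.standard_normal(5)
assert np.allclose(symv_lower(L, x), A @ x)
```

On a real accelerator the same reuse is applied at tile rather than element granularity, which is the source of the reduced off-chip matrix-read traffic.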
Cited by: 0
DFTS-MCS: Dynamic fault-tolerant scheduling for mixed-criticality systems on heterogeneous multi-core processors
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2026-06-01. Epub Date: 2026-02-13. DOI: 10.1016/j.sysarc.2026.103740
Mahin Moradiyan, Yasser Sedaghat
Mixed-criticality embedded systems (MCSs) support safety-critical applications by managing tasks with different criticality levels under strict timing constraints. Traditional scheduling prioritizes high-criticality tasks by suspending or degrading low-criticality ones during faults or overruns, often leading to inefficient resource use and reduced quality of service. Various heterogeneous platforms support the needs of MCSs, with ARM big.LITTLE offering a balanced mix of performance, energy efficiency, and real-time reliability. This paper introduces DFTS-MCS, a dynamic fault-tolerant scheduling method for ARM big.LITTLE platforms that addresses these challenges. The DFTS-MCS method includes three phases: (1) reliability-driven task mapping; (2) adaptive task allocation; and (3) a dynamic fault-tolerant execution model. Results show that, compared to state-of-the-art methods, DFTS-MCS achieves the highest high-criticality task success rate (94.1%) and reduces missed deadlines by up to 40%. DFTS-MCS recovers tasks 1.3× more effectively than competing methods on average, with up to a 19% higher recovery rate over the weakest baseline. It also minimizes fault-induced delays (13.4 ms for HI tasks) and maintains low execution overhead (8.7% HI, 14.3% LO). It achieves superior load balancing by assigning up to 84% of critical computation to big cores. These results validate DFTS-MCS as a scalable and robust solution for real-time MCSs operating in fault-prone and resource-constrained environments.
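As a minimal sketch of phase (1)-style mapping on a big.LITTLE platform (assumed for illustration; this is not the paper's actual algorithm), HI tasks can be packed onto big cores first, with worst-fit balancing inside each core group and spill-over only when the preferred group lacks headroom:

```python
def map_tasks(tasks, big_cores=2, little_cores=4, cap=0.8):
    """tasks: list of (name, criticality, utilization), criticality in {"HI", "LO"}.

    HI tasks prefer big cores and LO tasks prefer LITTLE cores; within the
    preferred group, worst-fit (emptiest core first) balances the load, and
    a task spills to the other group only when no preferred core has enough
    headroom under the utilization cap.
    """
    cores = ([{"kind": "big", "load": 0.0, "tasks": []} for _ in range(big_cores)]
             + [{"kind": "LITTLE", "load": 0.0, "tasks": []} for _ in range(little_cores)])

    def place(task, candidates):
        name, _, u = task
        fits = [c for c in candidates if c["load"] + u <= cap]
        if not fits:
            return False
        core = min(fits, key=lambda c: c["load"])   # worst-fit: emptiest core
        core["load"] += u
        core["tasks"].append(name)
        return True

    unplaced = []
    # HI tasks first (then by decreasing utilization), so critical work
    # claims big-core capacity before LO tasks consume any of it.
    for t in sorted(tasks, key=lambda t: (t[1] != "HI", -t[2])):
        want = "big" if t[1] == "HI" else "LITTLE"
        prefer = [c for c in cores if c["kind"] == want]
        spill = [c for c in cores if c["kind"] != want]
        if not (place(t, prefer) or place(t, spill)):
            unplaced.append(t[0])
    return cores, unplaced
```

The dynamic fault-tolerant execution model in phases (2) and (3) would then re-run this placement as faults or overruns change the effective utilizations.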
Cited by: 0
Evaluating model quantization in a GenAI-enhanced weed detection pipeline
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2026-06-01. Epub Date: 2026-02-26. DOI: 10.1016/j.sysarc.2026.103755
Sourav Modak, Ahmet Oğuz Saltık, Anthony Stein
Deep learning-based weed control systems often struggle with limited training data diversity and constrained computational resources, restricting their effectiveness in real-world deployment. To address these limitations, we introduce a Stable Diffusion-based inpainting framework that progressively augments training datasets in 25% increments, up to 200%, enriching both data volume and variability. We systematically evaluate three state-of-the-art object detection architectures: the large, small, and nano variants of YOLO11 and YOLOv12, along with large RT-DETR models, under three precision settings (FP32, FP16, INT8), using the mAP50 and mAP50-95 evaluation metrics. Experiments on the NVIDIA Jetson Orin Nano, NVIDIA Jetson AGX Orin, and a spo-comm rugged computing unit reveal that quantization consistently reduces latency and memory footprint, with INT8 compression producing the most compact and fastest models. While INT8 often induces accuracy degradation, we show that this loss is significantly minimized by targeted synthetic augmentation. Notably, small YOLO variants trained with augmented data match, and in some cases surpass, the detection performance of their baseline large counterparts, without added model size or inference cost. Furthermore, utilizing the INT8-quantized Stable Diffusion for data generation preserves augmentation benefits on the downstream models while minimizing generation overhead. In combination, these contributions establish a novel training and deployment strategy for embedded AI in the context of weed detection, demonstrating that small YOLO models, INT8 quantization, and targeted synthetic augmentation can jointly deliver higher efficiency without sacrificing accuracy.
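The INT8 compression evaluated here can be illustrated with symmetric per-tensor post-training quantization (a toy routine for intuition; actual deployment relies on toolchain calibration, e.g. TensorRT or ONNX Runtime, not this sketch):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization: map floats to int8 via one scale."""
    m = float(np.max(np.abs(w)))
    scale = m / 127.0 if m > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.02, -1.27, 0.635, 0.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# With the scale chosen from max|w|, nothing clips, so the per-element
# error is bounded by half a quantization step (scale / 2).
assert np.max(np.abs(w - w_hat)) <= s / 2 + 1e-6
```

The bound explains the accuracy/latency trade-off: a wider weight range means a larger scale and therefore coarser INT8 steps, which is the degradation the paper's synthetic augmentation helps absorb.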
Cited by: 0
Predictively controlling the computing continuum with distributed energy-aware orchestration
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2026-06-01. Epub Date: 2026-02-23. DOI: 10.1016/j.sysarc.2026.103752
Pablo Rodríguez , Javier Mateos-Bravo , Sergio Laso , Juan Luis Herrera , Javier Berrocal
Distributing microservices across the Computing Continuum reduces latency and preserves data locality but introduces management complexity on heterogeneous, resource-constrained edge nodes. Traditional reactive orchestration triggers only after saturation occurs. Under bursty or high-density workloads, this latency leads to service degradation, instability, and inefficient energy usage. To address this, the Adaptive Resource-Aware Predictive Orchestrator (ARAPO) couples per-service local forecasting with calibrated node-level aggregation. It employs a dual-threshold policy based on predicted and observed load to trigger migrations. It maps CPU forecasts to power for energy-aware placement without external instrumentation. ARAPO is evaluated in a realistic hospital reference scenario against a reactive-only baseline. Results demonstrate that the system anticipates saturation and prevents control plane congestion. It significantly improves stability in oscillating workloads. Overload time drops from 28.4% to 4.5%. Consequently, energy usage during overload falls to 14.9% of the reactive baseline. Node-level forecasting achieves R² up to 0.86. The power model tracks consumption with a mean absolute error as low as 0.40 W. This validates its suitability as a lightweight, energy-efficient controller.
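The dual-threshold idea can be sketched in a few lines (an assumed illustration, not ARAPO's actual forecaster or thresholds): migration triggers either reactively, when observed load crosses a high threshold, or proactively, when a short-horizon forecast crosses a lower one.

```python
def should_migrate(observed, history, t_obs=0.90, t_pred=0.75, horizon=3):
    """Dual-threshold trigger: react to observed saturation (t_obs) or to a
    forecast crossing a lower predictive threshold (t_pred).

    The forecast here is a deliberately trivial linear trend over the last
    two samples, extrapolated `horizon` steps ahead; thresholds and horizon
    are made-up example values.
    """
    slope = history[-1] - history[-2] if len(history) >= 2 else 0.0
    predicted = observed + slope * horizon
    return observed > t_obs or predicted > t_pred

# A rising load triggers proactively, well below the reactive threshold:
assert should_migrate(0.60, [0.50, 0.60]) is True    # forecast ~0.90 > 0.75
assert should_migrate(0.60, [0.60, 0.60]) is False   # flat load: no trigger
assert should_migrate(0.95, [0.95, 0.95]) is True    # reactive path
```

Setting `t_pred` below `t_obs` is what converts the 28.4% reactive overload time into early migrations: the controller acts while the node still has headroom.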
Cited by: 0
Fine-grained sensitive node hardening for graph convolutional network systems
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2026-06-01. Epub Date: 2026-02-10. DOI: 10.1016/j.sysarc.2026.103737
Jing Zhang , Mingzhang Duan , Peiyu Li , Lei Shen , Chang Cai
With the rapid development of the commercial space industry, the scale of graph-structured data is expected to grow significantly. Traditional deep neural networks face challenges in processing such data due to limitations in feature extraction and information propagation, driving research into Graph Convolutional Networks. Although advanced AI edge platforms offer high computational efficiency, they remain vulnerable to single-event upsets and face resource constraints when implementing redundancy for high-reliability designs. This paper presents an underlying circuit partitioning strategy and a node sensitivity analysis framework, where circuit nodes, defined as fine-grained sub-units obtained by further partitioning coarse-grained modules, are mapped to physical locations and the resulting mapping is integrated into fault analysis. Unlike coarse-grained hardening methods that overlook node-level sensitivities, the proposed approach allows for precise node-level sensitivity ranking, enabling fine-grained hardening where it is most needed. Experimental results demonstrate that the proposed strategy achieves fault tolerance comparable to full triple modular redundancy, while delivering improvements in resource hardening efficiency of 1.57×, 1.67×, and 1.76×, and improvements in timing hardening efficiency of 1.36×, 1.44×, and 1.52× across the three datasets. Compared to coarse-grained methods, it achieves higher hardening efficiency with only a 1.57× resource overhead and a modest 15.9% reduction in worst negative slack.
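The selective-hardening idea behind sensitivity ranking can be sketched as a greedy budgeted selection (an assumed illustration, not the paper's framework): rank nodes by estimated fault sensitivity per unit hardening cost and triplicate only the top candidates, instead of applying full TMR everywhere.

```python
def select_nodes_to_harden(nodes, budget):
    """nodes: list of (name, sensitivity, cost); budget: extra resource allowance.

    Greedy by sensitivity per unit cost. The cost of hardening a node with
    TMR is roughly two redundant copies plus a voter, folded into `cost`
    here (all numbers are illustrative, not from the paper).
    """
    ranked = sorted(nodes, key=lambda n: n[1] / n[2], reverse=True)
    chosen, spent = [], 0.0
    for name, sens, cost in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

nodes = [("mac0", 0.9, 2.0), ("ctrl", 0.7, 1.0), ("buf", 0.2, 3.0)]
hardened, used = select_nodes_to_harden(nodes, budget=3.0)
```

With a budget of 3.0, `ctrl` (best sensitivity-to-cost ratio) and `mac0` are hardened while the insensitive `buf` is left unprotected, which is the mechanism behind "comparable fault tolerance at a fraction of full-TMR cost".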
Cited by: 0
Flexible Model Inversion Attack with soft biometric attribute reconstruction against face classifiers
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2026-06-01. Epub Date: 2026-02-12. DOI: 10.1016/j.sysarc.2026.103739
Zeping Zhang , Jie Huang , Changhao Ding
A Model Inversion Attack (MIA) aims to reconstruct private images from their feature vectors. Existing attacks are usually performed by training a reconstruction model over the feature vectors. Because the feature vectors output by different target models have different distributions and dimensions, a new reconstruction model must be trained for each target model, which limits the flexibility of the attack. This paper aims to improve the flexibility of model inversion attacks against face classifiers. The relationship between training-based MIA and auto-encoders is studied, and the challenges in improving the flexibility of inversion attacks are analyzed. To improve flexibility, Mapping-MIA is proposed. Mapping-MIA consists of a Data Reconstruction Model that reconstructs faces and their soft biometric attributes; this model can be reused for future inversion tasks. Mapping-MIA also contains a lightweight Feature Mapping Model that maps feature vectors from each target model's output space to the latent space of the Data Reconstruction Model. Experimental results show that Mapping-MIA is more flexible across different target models and achieves similar or better results than existing methods. Further, the reconstructed soft biometric attributes reach an average accuracy of 86.63% on the private dataset.
Cited by: 0
ZTL: A block layer ZNS driver
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2026-06-01. Epub Date: 2026-03-03. DOI: 10.1016/j.sysarc.2026.103757
Jan Sass , André Brinkmann , Matias Bjørling , Xubin He , Reza Salkhordeh
Solid State Disks (SSDs) utilize NAND flash for data storage. Due to the physical characteristics of NAND, host systems would require extensive modifications in order to use flash storage directly. Instead, a firmware component of the SSD, the Flash Translation Layer (FTL), enables host systems to utilize flash storage without modification. However, the FTL performs its own data placement, requiring address translation and garbage collection, which leads to unpredictable performance, additional performance and hardware overheads, and an increased cost for flash storage.
The Zoned Namespaces (ZNS) specification defines a novel interface for the host to interact with flash that avoids interfacing with the Flash Translation Layer and its shortcomings. In order to use the ZNS interface, a considerable amount of modification on the storage stack of the host is required, which is why F2FS is the only stable file system with ZNS support today. In this paper, we present the host-side Zoned Translation Layer (ZTL) and extend our previous work on ZTL by providing additional experiments and implementation details. ZTL provides abstractions and functionalities required by many file systems to support ZNS devices. We demonstrate the feasibility of ZTL by providing the first EXT4 implementation for ZNS devices and by comparing our implementation of ZNS support for F2FS with the native ZNS support of F2FS, showing that ZTL decreases implementation overheads for file system developers while performance is sustained or improved.
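The ZNS constraints that force these host-side changes can be captured in a toy model (assumed for illustration, unrelated to the actual ZTL implementation): a zone accepts only sequential appends at its write pointer, and space is reclaimed only by resetting the whole zone, never by overwriting in place.

```python
class Zone:
    """A toy ZNS zone: sequential-append writes, whole-zone reset."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.write_pointer = 0
        self.blocks = []

    def append(self, block):
        """Sequential-only write at the write pointer; fails when full."""
        if self.write_pointer >= self.capacity:
            raise IOError("zone full: finish it and append to another zone")
        self.blocks.append(block)
        self.write_pointer += 1
        return self.write_pointer - 1       # address within the zone

    def read(self, addr):
        return self.blocks[addr]

    def reset(self):
        """Whole-zone erase: the only way to reclaim space."""
        self.blocks.clear()
        self.write_pointer = 0

z = Zone(capacity=2)
assert z.append("a") == 0 and z.append("b") == 1
try:
    z.append("c")                           # no room and no in-place overwrite
except IOError:
    z.reset()                               # reclaim the zone, then reuse it
assert z.append("c") == 0
```

A translation layer like ZTL hides exactly these two rules from file systems: it steers random block writes into sequential zone appends and schedules the resets.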
Cited by: 0
Derivative-based algorithms for membership, k-non-emptiness, and k-non-empty complement problems in enhanced regular expressions
IF 4.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2026-06-01 Epub Date : 2026-03-04 DOI: 10.1016/j.sysarc.2026.103759
Mengxi Wang , Chunmei Dong , Weihao Su , Chengyao Peng , Haiming Chen
Enhanced regular expressions (EREs), which extend classical regular expressions with shuffle and counting operators, offer exponentially more succinct representations of regular languages. However, unconstrained EREs lack explicit algorithms for solving the membership, k-non-emptiness, and k-non-empty complement problems. In this paper, we introduce a derivative construction for counting and shuffle operators and formally prove its correctness. We also analyze its time complexity based on a lemma that relates the size of the derivative to that of the original expression. Using this derivative, we propose three algorithms to address the membership, k-non-emptiness, and k-non-empty complement problems for EREs. We conduct experiments demonstrating that these algorithms are both effective and practical. Finally, we validate the correctness of two existing inference algorithms that previously lacked formal guarantees, owing to the absence of practical membership algorithms for unconstrained EREs.
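The classical core of the construction this abstract builds on can be sketched with Brzozowski derivatives: the derivative of r with respect to a character c matches w exactly when r matches cw, so membership reduces to repeated derivation followed by a nullability check. The sketch below covers only the classical operators (the paper's contribution is extending the derivative to shuffle and counting, which is not reproduced here), and the tuple encoding of expressions is an assumption of this sketch:

```python
# Brzozowski derivatives for classical regexes: deriv(c, r) matches w
# iff r matches c + w. Expressions are encoded as tuples:
#   ("empty",), ("eps",), ("chr", a), ("alt", r, s), ("cat", r, s), ("star", r)

EMPTY, EPS = ("empty",), ("eps",)

def nullable(r):
    """Does r accept the empty word?"""
    tag = r[0]
    if tag == "eps":
        return True
    if tag in ("empty", "chr"):
        return False
    if tag == "alt":
        return nullable(r[1]) or nullable(r[2])
    if tag == "cat":
        return nullable(r[1]) and nullable(r[2])
    return True  # star is always nullable

def deriv(c, r):
    """Derivative of r with respect to character c."""
    tag = r[0]
    if tag in ("empty", "eps"):
        return EMPTY
    if tag == "chr":
        return EPS if r[1] == c else EMPTY
    if tag == "alt":
        return ("alt", deriv(c, r[1]), deriv(c, r[2]))
    if tag == "cat":
        left = ("cat", deriv(c, r[1]), r[2])
        # If the first factor is nullable, c may also start the second factor.
        return ("alt", left, deriv(c, r[2])) if nullable(r[1]) else left
    # star: d_c(r*) = d_c(r) . r*
    return ("cat", deriv(c, r[1]), r)

def matches(r, word):
    """Membership test: derive by each character, then check nullability."""
    for c in word:
        r = deriv(c, r)
    return nullable(r)
```

The paper's derivative rules for shuffle and counting slot into `deriv` as additional cases; its complexity analysis hinges on bounding how large these derivatives grow relative to the original expression.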
{"title":"Derivative-based algorithms for membership, k-non-emptiness, and k-non-empty complement problems in enhanced regular expressions","authors":"Mengxi Wang ,&nbsp;Chunmei Dong ,&nbsp;Weihao Su ,&nbsp;Chengyao Peng ,&nbsp;Haiming Chen","doi":"10.1016/j.sysarc.2026.103759","DOIUrl":"10.1016/j.sysarc.2026.103759","url":null,"abstract":"<div><div>Enhanced regular expressions (EREs), which extend classical regular expressions with shuffle and counting operators, offer exponentially more succinct representations of regular languages. However, unconstrained EREs lack explicit algorithms for solving the membership, <span><math><mi>k</mi></math></span>-non-emptiness, and <span><math><mi>k</mi></math></span>-non-empty complement problems. In this paper, we introduce a derivative construction for counting and shuffle operators and formally prove its correctness. We also analyze its time complexity based on a lemma that relates the size of the derivative to that of the original expression. Using this derivative, we propose three algorithms to address the membership, <span><math><mi>k</mi></math></span>-non-emptiness, and <span><math><mi>k</mi></math></span>-non-empty complement problems for EREs. We conduct experiments demonstrating that these algorithms are both effective and practical. 
Finally, we validate the correctness of two existing inference algorithms that previously lacked formal guarantees, owing to the absence of practical membership algorithms for unconstrained EREs.</div></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"175 ","pages":"Article 103759"},"PeriodicalIF":4.1,"publicationDate":"2026-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147386519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-modal regular expression synthesis method based on large language models and semantics
IF 4.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2026-06-01 Epub Date : 2026-03-04 DOI: 10.1016/j.sysarc.2026.103762
Zipan Tang , Yixuan Yan , Rongchen Li , Hanze Dong , Bo Sun , Haiming Chen , Hongyu Gao
Real-world regular expressions (regexes) are widely used in practice. However, due to their complex syntax and difficulty in both understanding and writing, automatic synthesis of regexes has been an important research challenge. Existing methods often have limited generalization ability and insufficient support for extended features. To address these challenges, we propose PowerSyn, a framework that leverages large language models (LLMs) and semantic manipulation of sub-expressions. PowerSyn synthesizes regexes from natural language descriptions and examples, and supports extended features. Specifically, our approach includes prompt design for synthesizing regexes with LLMs, as well as a novel algorithm for semantic manipulation of sub-expressions guided by examples and matching relationships. In addition, we explore the ability of LLMs to repair incorrect regexes. The experimental results demonstrate the significant effectiveness of our approach.
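As a rough illustration of the examples modality such multi-modal synthesizers consume, candidate regexes (e.g. proposed by an LLM) can be screened for consistency with labeled examples: a candidate must fully match every positive string and reject every negative one. This is a generic consistency check, not PowerSyn's actual algorithm; the function names are illustrative:

```python
import re

def consistent(pattern, positives, negatives):
    """Check one candidate regex against labeled examples:
    it must fully match every positive and no negative string."""
    try:
        compiled = re.compile(pattern)
    except re.error:
        return False  # candidate is not even syntactically valid
    return (all(compiled.fullmatch(p) for p in positives)
            and not any(compiled.fullmatch(n) for n in negatives))

def filter_candidates(candidates, positives, negatives):
    """Keep only the candidates (e.g. LLM proposals) consistent
    with the example set."""
    return [c for c in candidates if consistent(c, positives, negatives)]
```

A check like this also gives a natural repair loop: when no candidate survives, the failing examples can be fed back to the model as counterexamples, in the spirit of the repair ability the abstract explores.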
{"title":"Multi-modal regular expression synthesis method based on large language models and semantics","authors":"Zipan Tang ,&nbsp;Yixuan Yan ,&nbsp;Rongchen Li ,&nbsp;Hanze Dong ,&nbsp;Bo Sun ,&nbsp;Haiming Chen ,&nbsp;Hongyu Gao","doi":"10.1016/j.sysarc.2026.103762","DOIUrl":"10.1016/j.sysarc.2026.103762","url":null,"abstract":"<div><div>Real-world regular expressions (regexes) are widely used in practice. However, due to their complex syntax and difficulty in both understanding and writing, automatic synthesis of regexes has been an important research challenge. Existing methods often have limited generalization ability and insufficient support for extended features. To address these challenges, we propose PowerSyn, a framework that leverages large language models (LLMs) and semantic manipulation of sub-expressions. PowerSyn synthesizes regexes from natural language descriptions and examples, and supports extended features. Specifically, our approach includes prompt design for synthesizing regexes with LLMs, as well as a novel algorithm for semantic manipulation of sub-expressions guided by examples and matching relationships. In addition, we explore the ability of LLMs to repair incorrect regexes. The experimental results demonstrate the significant effectiveness of our approach.</div></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"175 ","pages":"Article 103762"},"PeriodicalIF":4.1,"publicationDate":"2026-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147386521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
mSDF-DFT: An ultra-low energy discrete Fourier transform architecture for closed-loop neural sensing
IF 4.1 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2026-06-01 Epub Date : 2026-02-19 DOI: 10.1016/j.sysarc.2026.103749
Richard Yang , Heather D. Orser , Kip A. Ludwig , Brandon S. Coventry
Digital implementations of the discrete Fourier transform (DFT) are a mainstay in feature assessment of recorded biopotentials, particularly in the quantification of biomarkers of neurological disease state for adaptive deep brain stimulation. Fast Fourier transform (FFT) algorithms and architectures present a substantial energy demand from onboard batteries in implantable medical devices, necessitating the development of ultra-low energy Fourier transform methods in resource-constrained environments. Numerous FFT architectures aim to optimize energy and resource consumption through computational efficiency; however, prioritizing logic complexity reduction at the expense of additional computations can be equally or more effective. This paper introduces a minimal-architecture single-delay feedback discrete Fourier transform (mSDF-DFT) for use in ultra-low-energy field-programmable gate array applications and demonstrates energy and power improvements over benchmark low-energy DFT and FFT methods. Across the parameter set, we observed 11.1% median resource usage reduction and 5.0% median energy reduction when compared to a gold standard SDF-FFT algorithm and 38.1% median resource reduction and 8.8% median energy reduction when compared to the Goertzel Algorithm. While designed for use in closed-loop deep brain stimulation and medical device implementations, the mSDF-DFT is also easily extendable to any ultra-low-energy embedded application.
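The Goertzel algorithm used as a baseline above computes a single DFT bin with a second-order real recurrence — one multiply per sample — which is why it is attractive when only one biomarker band must be monitored. A pure-Python sketch of the standard algorithm (a reference model, not the paper's FPGA architecture):

```python
import cmath
import math

def goertzel(x, k):
    """Goertzel algorithm: compute the single DFT bin
        X[k] = sum_n x[n] * exp(-2j*pi*k*n/N)
    using a second-order real recurrence with one real multiply
    per input sample."""
    n_samples = len(x)
    omega = 2.0 * math.pi * k / n_samples
    coeff = 2.0 * math.cos(omega)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        # s[n] = x[n] + 2*cos(omega)*s[n-1] - s[n-2]
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Finalization step: X[k] = e^{j*omega} * s[N-1] - s[N-2]
    return cmath.exp(1j * omega) * s_prev - s_prev2
```

Because the loop touches each sample once with real arithmetic, the complex work is confined to the final step — the kind of logic-complexity trade-off the abstract argues can beat a full FFT when only a few bins are needed.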
{"title":"mSDF-DFT: An ultra-low energy discrete Fourier transform architecture for closed-loop neural sensing","authors":"Richard Yang ,&nbsp;Heather D. Orser ,&nbsp;Kip A. Ludwig ,&nbsp;Brandon S. Coventry","doi":"10.1016/j.sysarc.2026.103749","DOIUrl":"10.1016/j.sysarc.2026.103749","url":null,"abstract":"<div><div>Digital implementations of the discrete Fourier transform (DFT) are a mainstay in feature assessment of recorded biopotentials, particularly in the quantification of biomarkers of neurological disease state for adaptive deep brain stimulation. Fast Fourier transform (FFT) algorithms and architectures present a substantial energy demand from onboard batteries in implantable medical devices, necessitating the development of ultra-low energy Fourier transform methods in resource-constrained environments. Numerous FFT architectures aim to optimize energy and resource consumption through computational efficiency; however, prioritizing logic complexity reduction at the expense of additional computations can be equally or more effective. This paper introduces a minimal-architecture single-delay feedback discrete Fourier transform (mSDF-DFT) for use in ultra-low-energy field-programmable gate array applications and demonstrates energy and power improvements over benchmark low-energy DFT and FFT methods. Across the parameter set, we observed 11.1% median resource usage reduction and 5.0% median energy reduction when compared to a gold standard SDF-FFT algorithm and 38.1% median resource reduction and 8.8% median energy reduction when compared to the Goertzel Algorithm. 
While designed for use in closed-loop deep brain stimulation and medical device implementations, the mSDF-DFT is also easily extendable to any ultra-low-energy embedded application.</div></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"175 ","pages":"Article 103749"},"PeriodicalIF":4.1,"publicationDate":"2026-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147386649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0