
Latest Publications: ACM Transactions on Embedded Computing Systems

LL-GNN: Low Latency Graph Neural Networks on FPGAs for High Energy Physics
IF 2.0 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-01-15 | DOI: 10.1145/3640464
Zhiqiang Que, Hongxiang Fan, Marcus Loo, He Li, Michaela Blott, Maurizio Pierini, Alexander Tapper, Wayne Luk

This work presents a novel reconfigurable architecture for Low Latency Graph Neural Network (LL-GNN) designs for particle detectors, delivering unprecedented low latency performance. Incorporating FPGA-based GNNs into particle detectors presents a unique challenge since it requires sub-microsecond latency to deploy the networks for online event selection with a data rate of hundreds of terabytes per second in the Level-1 triggers at the CERN Large Hadron Collider experiments. This paper proposes a novel outer-product based matrix multiplication approach, which is enhanced by exploiting the structured adjacency matrix and a column-major data layout. In addition, we propose a custom code transformation for the matrix multiplication operations, which leverages the structured sparsity patterns and binary features of adjacency matrices to reduce latency and improve hardware efficiency. Moreover, a fusion step is introduced to further reduce the end-to-end design latency by eliminating unnecessary boundaries. Furthermore, a GNN-specific algorithm-hardware co-design approach is presented which not only finds a design with a much better latency but also finds a high accuracy design under given latency constraints. To facilitate this, a customizable template for this low latency GNN hardware architecture has been designed and open-sourced, which enables the generation of low-latency FPGA designs with efficient resource utilization using a high-level synthesis tool. Evaluation results show that our FPGA implementation is up to 9.0 times faster and achieves up to 13.1 times higher power efficiency than a GPU implementation. Compared to the previous FPGA implementations, this work achieves 6.51 to 16.7 times lower latency. Moreover, the latency of our FPGA design is sufficiently low to enable deployment of GNNs in a sub-microsecond, real-time collider trigger system, enabling it to benefit from improved accuracy. The proposed LL-GNN design advances the next generation of trigger systems by enabling sophisticated algorithms to process experimental data efficiently.
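As a rough illustration of the outer-product formulation described above (not the paper's FPGA implementation), the product A·X can be accumulated one adjacency-matrix column at a time; a binary column merely selects which rows receive the k-th feature vector, and all-zero columns can be skipped entirely. The function name and toy graph below are invented for this sketch.

```python
import numpy as np

def outer_product_spmm(adj, feats):
    """Compute adj @ feats as a sum of outer products.

    adj:   (n, n) binary adjacency matrix
    feats: (n, d) node-feature matrix
    A @ X = sum_k A[:, k] (outer) X[k, :]; empty columns contribute
    nothing, so structured sparsity lets us skip them.
    """
    n, d = feats.shape
    out = np.zeros((n, d))
    for k in range(n):
        col = adj[:, k]
        if not col.any():          # exploit sparsity: skip empty columns
            continue
        out += np.outer(col, feats[k])
    return out

# toy 3-node path graph
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
X = np.arange(6, dtype=float).reshape(3, 2)
print(np.allclose(outer_product_spmm(A, X), A @ X))  # True
```

On hardware, each outer-product accumulation maps naturally to a bank of parallel multiply-accumulate units, which is why the column-major layout mentioned in the abstract matters.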

Citations: 0
Introduction to the Special Issue on Real-Time Computing in the IoT-to-Edge-to-Cloud Continuum
IF 2.0 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-01-10 | DOI: 10.1145/3605180
Daniel Casini, Dakshina Dasari, Matthias Becker, Giorgio Buttazzo

No abstract available.

Citations: 0
Securing Pacemakers using Runtime Monitors over Physiological Signals
IF 2.0 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-01-06 | DOI: 10.1145/3638286
Abhinandan Panda, Srinivas Pinisetty, Partha Roop

Wearable and implantable medical devices (IMDs) are increasingly deployed to diagnose, monitor, and provide therapy for critical medical conditions. Such medical devices are safety-critical cyber-physical systems (CPSs). These systems support wireless features, which introduce potential security vulnerabilities. Although these devices undergo rigorous safety certification processes, runtime security attacks are inevitable. Based on published literature, IMDs such as pacemakers and insulin infusion systems can be remotely controlled to inject deadly electric shocks or excess insulin, posing a threat to a patient's life. While prior works based on formal methods have been proposed to detect potential attack vectors using different forms of static analysis, these have limitations in preventing attacks at runtime.

This paper discusses a formal framework for detecting cyber-physical attacks on a pacemaker by monitoring its security policies at runtime. We propose a wearable device that senses the Electrocardiogram (ECG) and Photoplethysmogram (PPG) of the body to detect attacks on a pacemaker. To facilitate the design of this device, we map the security policies of a pacemaker with respect to ECG and PPG, paving the way for designing formal verification monitors for pacemakers for the first time using multiple physiological signals. The proposed monitoring framework allows the synthesis of parallel monitors from a given set of desired security policies, where all the monitors execute concurrently and generate an alarm to the user in the case of a policy violation. Our implementation and the performance evaluation results demonstrate the technical feasibility of designing such a wearable device for attack detection in pacemakers. This device is separate from the pacemaker, ensuring no need for re-certification of pacemakers. Our approach is amenable to the application of security patches when new attack vectors are detected, making it ideal for runtime monitoring of medical CPSs.
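The parallel-monitor idea can be sketched in a few lines: each security policy is a predicate over one observation, every policy is evaluated on every observation, and any violation produces an alarm. The two policies, field names, and thresholds below are hypothetical examples, not the policies from the paper.

```python
# Each policy maps one observation (a dict of sensed values and the
# pacemaker's command) to True (satisfied) or False (violated).
# Both policies here are invented for illustration.
policies = {
    "rate_ceiling": lambda obs: obs["pace_bpm"] <= 120,
    "no_shock_when_normal":
        lambda obs: not (obs["shock"] and 60 <= obs["ecg_bpm"] <= 100),
}

def monitor_step(obs):
    """Run all monitors 'in parallel' on one observation and return
    the names of violated policies (the alarm set)."""
    return [name for name, ok in policies.items() if not ok(obs)]

normal = {"pace_bpm": 70, "ecg_bpm": 72, "shock": False}
attack = {"pace_bpm": 70, "ecg_bpm": 75, "shock": True}
print(monitor_step(normal))  # []
print(monitor_step(attack))  # ['no_shock_when_normal']
```

Because each monitor only reads the observation, the set of policies can be extended (a "security patch") without touching the pacemaker itself, matching the abstract's point about avoiding re-certification.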

Citations: 0
A Design Flow for Scheduling Spiking Deep Convolutional Neural Networks on Heterogeneous Neuromorphic System-on-Chip
IF 2.0 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-12-02 | DOI: 10.1145/3635032
Anup Das

Neuromorphic systems-on-chip (NSoCs) integrate CPU cores and neuromorphic hardware accelerators on the same chip. These platforms can execute spiking deep convolutional neural networks (SDCNNs) with a low energy footprint. Modern NSoCs are heterogeneous in terms of their computing, communication, and storage resources. This makes scheduling SDCNN operations a combinatorial problem of exploring an exponentially-large state space in determining mapping, ordering, and timing of operations to achieve a target hardware performance, e.g., throughput.

We propose a systematic design flow to schedule SDCNNs on an NSoC. Our scheduler, called SMART (SDCNN MApping, OrdeRing, and Timing), branches the combinatorial optimization problem into computationally-relaxed sub-problems that generate fast solutions without significantly compromising the solution quality. SMART improves performance by efficiently incorporating the heterogeneity in computing, communication, and storage resources. SMART operates in four steps. First, it creates a self-timed execution schedule to map operations to compute resources, maximizing throughput. Second, it uses an optimization strategy to distribute activation and synaptic weights to storage resources, minimizing data communication-related overhead. Third, it constructs an inter-processor communication (IPC) graph with a transaction order for its communication actors. This transaction order is created using a transaction partial order algorithm, which minimizes contention on the shared communication resources. Finally, it schedules this IPC graph to hardware by overlapping communication with the computation, and leveraging operation, pipeline, and batch parallelism.

We evaluate SMART using 10 representative image, object, and language-based SDCNNs. Results show that SMART increases throughput by an average 23%, compared to a state-of-the-art scheduler. SMART is implemented entirely in software as a compiler extension. It doesn't require any change in a neuromorphic hardware or its interface to CPUs. It improves throughput with only a marginal increase in the compilation time. SMART is released under the open-source MIT license at https://github.com/drexel-DISCO/SMART to foster future research.
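SMART's first step, mapping operations onto heterogeneous compute resources to maximize throughput, can be loosely illustrated by a generic earliest-finish-time list scheduler. This is a simplified stand-in, not the paper's algorithm; task names, work units, and resource speeds below are invented.

```python
def list_schedule(tasks, deps, speed):
    """Greedy earliest-finish-time mapping onto heterogeneous resources.

    tasks: {name: work_units}
    deps:  {name: [predecessor names]}
    speed: {resource: work_units_per_time}
    Returns {task: (resource, start, finish)}.
    """
    free = {r: 0.0 for r in speed}      # next free time per resource
    sched, done = {}, set()
    while len(sched) < len(tasks):
        # pick the first task whose predecessors are all scheduled
        t = next(t for t in tasks
                 if t not in done and all(p in done for p in deps.get(t, [])))
        best = None
        for r, rate in speed.items():
            start = max(free[r],
                        max((sched[p][2] for p in deps.get(t, [])), default=0.0))
            finish = start + tasks[t] / rate
            if best is None or finish < best[2]:
                best = (r, start, finish)
        sched[t] = best
        free[best[0]] = best[2]
        done.add(t)
    return sched

tasks = {"a": 4, "b": 2, "c": 2}
deps = {"b": ["a"], "c": ["a"]}
speed = {"cpu": 1.0, "acc": 2.0}        # the accelerator is 2x faster
s = list_schedule(tasks, deps, speed)
print(s["a"])                            # ('acc', 0.0, 2.0)
print(max(f for _, _, f in s.values())) # makespan: 4.0
```

SMART additionally relaxes the ordering and timing sub-problems and overlaps communication with computation, which a plain list scheduler does not capture.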

Citations: 0
Multi-Compression Scale DNN Inference Acceleration based on Cloud-Edge-End Collaboration
IF 2.0 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-11-28 | DOI: 10.1145/3634704
Huamei Qi, Fang Ren, Leilei Wang, Ping Jiang, Shaohua Wan, Xiaoheng Deng

Edge intelligence has emerged as a promising paradigm for accelerating DNN inference through model partitioning, which is particularly useful for intelligent scenarios that demand high accuracy and low latency. However, the dynamic nature of the edge environment and the diversity of end devices pose a significant challenge for DNN model partitioning strategies. Meanwhile, the limited resources of edge servers make it difficult to manage resource allocation efficiently among multiple devices. In addition, most existing studies disregard the different service requirements of DNN inference tasks, such as whether they are highly accuracy-sensitive or highly latency-sensitive. To address these challenges, we propose Multi-Compression Scale DNN Inference Acceleration (MCIA), based on cloud-edge-end collaboration. We model this problem as a mixed-integer multi-dimensional optimization problem, jointly optimizing the choice of DNN model version, the partitioning point, and the allocation of computational and bandwidth resources to maximize the tradeoff between inference accuracy and latency depending on the properties of the tasks. Initially, we train multiple versions of DNN inference models with different compression scales in the cloud and deploy them to end devices and the edge server. Next, a deep reinforcement learning-based algorithm is developed for joint decision making on adaptive collaborative inference and resource allocation, based on the current multi-compression-scale models and the task properties. Experimental results show that MCIA can adapt to heterogeneous devices and dynamic networks and has superior performance compared with other methods.
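The joint choice of model version and partition point can be sketched as a small exhaustive search: for each compression scale meeting an accuracy floor, try every split layer and sum device-side latency, edge-side latency, and the transfer time of the activations crossing the link. This is a toy enumeration under invented numbers, not MCIA's reinforcement-learning formulation.

```python
def choose_config(versions, min_acc):
    """versions: name -> dict with
         acc  : validation accuracy of this compression scale
         dev  : per-layer latency on the end device (ms)
         edge : per-layer latency on the edge server (ms)
         tx   : tx[i] = time (ms) to ship the data crossing the link
                when splitting after layer i (tx[0] = raw input)
    Returns (version, split_layer, latency_ms) with the lowest
    end-to-end latency among versions meeting the accuracy floor."""
    best = None
    for name, v in versions.items():
        if v["acc"] < min_acc:
            continue
        L = len(v["dev"])
        for s in range(L + 1):   # run s layers on the device, the rest on edge
            lat = sum(v["dev"][:s]) + sum(v["edge"][s:]) + v["tx"][s]
            if best is None or lat < best[2]:
                best = (name, s, lat)
    return best

# hypothetical two-layer models at two compression scales
versions = {
    "small": {"acc": 0.90, "dev": [2, 2], "edge": [1, 1], "tx": [5, 1, 0]},
    "large": {"acc": 0.95, "dev": [4, 4], "edge": [2, 2], "tx": [5, 2, 0]},
}
print(choose_config(versions, min_acc=0.85))  # ('small', 1, 4)
print(choose_config(versions, min_acc=0.93))  # ('large', 1, 8)
```

Raising the accuracy floor forces the larger, slower model, showing the accuracy-latency tradeoff that MCIA optimizes adaptively as bandwidth and load change.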

Citations: 0
Modeling and Analysis of ETC Control System with Colored Petri Net and Dynamic Slicing
IF 2.0 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-11-27 | DOI: 10.1145/3633450
Wangyang Yu, Jinming Kong, Zhijun Ding, Xiaojun Zhai, Zhiqiang Li, Qi Guo

Nowadays, Electronic Toll Collection (ETC) control systems have been widely adopted on highways to smooth traffic flow. However, as an ETC system is a complex business-interaction system, its control-logic process inevitably contains flaws, such as the problem of vehicle fee evasion. Indeed, vehicles can evade fees in more than one way, which shows that the completeness of the design is difficult to ensure. Therefore, it is necessary to adopt a novel formal method to model and analyze the design, detect flaws, and correct them. In this paper, a Colored Petri Net (CPN) is introduced to establish the model. To analyze and modify the system model more efficiently, a dynamic slicing method for CPNs is proposed. First, a static slice is obtained from the static slicing criterion by backtracking. Second, considering all binding elements that can be enabled under the initial marking, a forward slice is obtained from the dynamic slicing criterion by traversal. Third, the dynamic slice of the CPN is obtained by taking the intersection of the two slices. The proposed dynamic slicing method can be used to formalize and verify the behavioral properties of an ETC control system, and flaws can be detected effectively. As a case study, the proposed method detects the flaw in which a vehicle that has not completed payment passes the barrier by closely following the preceding vehicle.
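The three-step slicing recipe (backward slice from the criterion, forward slice from the initially enabled transitions, then intersect) can be demonstrated on a plain dependency graph standing in for a CPN's transitions. The graph and transition names below are invented; a real CPN slicer must also track places, tokens, and binding elements.

```python
def backward_slice(deps, criterion):
    """deps: transition -> set of transitions it depends on.
    Returns everything the criterion (transitively) depends on."""
    seen, stack = set(), [criterion]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(deps.get(n, ()))
    return seen

def forward_slice(deps, enabled):
    """Returns everything (transitively) reachable from the
    transitions enabled under the initial marking."""
    succ = {}
    for n, preds in deps.items():
        for p in preds:
            succ.setdefault(p, set()).add(n)
    seen, stack = set(), list(enabled)
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(succ.get(n, ()))
    return seen

def dynamic_slice(deps, criterion, enabled):
    # intersection of the backward (static) and forward slices
    return backward_slice(deps, criterion) & forward_slice(deps, enabled)

deps = {"t2": {"t0"}, "t3": {"t1", "t2"}, "t4": {"t2"}}
print(sorted(dynamic_slice(deps, "t3", {"t0", "t1"})))
# ['t0', 't1', 't2', 't3']  -- t4 is reachable but irrelevant to t3
```

The intersection is what makes the slice "dynamic": t4 is reachable from the initial marking but does not influence the criterion t3, so it is pruned from the model to be analyzed.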

Citations: 0
Virtual Environment Model Generation for CPS Goal Verification using Imitation Learning
IF 2.0 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-11-27 | DOI: 10.1145/3633804
Yong-Jun Shin, Donghwan Shin, Doo-Hwan Bae

Cyber-Physical Systems (CPS) continuously interact with their physical environments through embedded software controllers that observe the environments and determine actions. Field Operational Tests (FOT) are essential to verify to what extent the CPS under analysis can achieve certain CPS goals, such as satisfying the safety and performance requirements, while interacting with the real operational environment. However, performing many FOTs to obtain statistically significant verification results is challenging due to its high cost and risk in practice. Simulation-based verification can be an alternative to address the challenge, but it still requires an accurate virtual environment model that can replace the real environment interacting with the CPS in a closed loop.

In this paper, we propose ENVI (ENVironment Imitation), a novel approach to automatically generate an accurate virtual environment model, enabling efficient and accurate simulation-based CPS goal verification in practice. To do this, we first formally define the problem of virtual environment model generation and solve it by leveraging Imitation Learning (IL), which has been actively studied in machine learning for learning complex behaviors from expert demonstrations. The key idea behind the model generation is to leverage IL to train a model that imitates the interactions between the CPS controller and its real environment as recorded in (possibly very small) FOT logs. We then statistically verify the goal achievement of the CPS by simulating it with the generated model. We empirically evaluate ENVI by applying it to the verification of two popular autonomous driving assistant systems. The results show that ENVI can reduce the cost of CPS goal verification while maintaining its accuracy by generating accurate environment models from only a few FOT logs. The use of IL in virtual environment model generation opens new research directions, which are further discussed at the end of the paper.
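The core idea, learn an environment model from logged controller-environment interactions and then close the loop against the learned model, can be sketched with a linear toy environment and a least-squares fit in place of ENVI's learned models. Everything here (the dynamics, the log, the controller) is synthetic, chosen only so the sketch is self-contained and deterministic.

```python
import numpy as np

# Hypothetical FOT log. The *real* environment (unknown to the learner)
# responds with o' = 0.8*o + 0.5*a; it is used only to synthesize the log.
rng = np.random.default_rng(0)
obs, acts, nxt = [], [], []
o = 1.0
for _ in range(200):
    a = float(rng.uniform(-1, 1))          # logged controller action
    obs.append(o); acts.append(a)
    o = 0.8 * o + 0.5 * a                  # logged environment response
    nxt.append(o)

# Imitate the environment: least-squares fit of
# next_observation = w0*observation + w1*action from the logged transitions.
X = np.column_stack([obs, acts])
w, *_ = np.linalg.lstsq(X, np.array(nxt), rcond=None)   # w ~ [0.8, 0.5]

# Closed-loop verification: the learned model replaces the real environment
# while a toy controller (action = -observation) runs against it.
o = 1.0
for _ in range(50):
    o = w[0] * o + w[1] * (-o)
print(abs(o) < 1e-6)  # True: the simulated loop converges to the goal state
```

In ENVI the imitated model is far richer than a linear map, but the loop structure is the same: controller and learned environment alternate, and goal achievement is then checked statistically over simulated runs.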

Citations: 0
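The core idea described in the ENVI abstract — training a model that imitates the controller–environment interactions recorded in FOT logs, then simulating the CPS against it — can be sketched with the simplest form of imitation learning, behavioral cloning. Everything below (the toy vehicle-speed dynamics, the log format, all function names) is an illustrative assumption, not code from the paper:

```python
# Hypothetical sketch of environment imitation via behavioral cloning:
# fit next_state = f(state, action) from (possibly very small) FOT logs,
# then roll the learned model out in a closed loop with the controller.
import numpy as np

def generate_fot_log(n_steps=200, seed=0):
    """Toy FOT log: controller throttle vs. environment-reported speed."""
    rng = np.random.default_rng(seed)
    speed = 0.0
    log = []  # (state, action, next_state) transitions
    for _ in range(n_steps):
        action = rng.uniform(0.0, 1.0)           # controller output
        next_speed = 0.9 * speed + 2.0 * action  # real environment (unknown to the learner)
        log.append((speed, action, next_speed))
        speed = next_speed
    return log

def fit_environment_model(log):
    """Behavioral cloning: least-squares fit of next_state = f(state, action)."""
    X = np.array([[s, a] for s, a, _ in log])
    y = np.array([ns for _, _, ns in log])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # imitated transition dynamics

def simulate(coef, actions, speed=0.0):
    """Closed-loop rollout with the learned virtual environment model."""
    for a in actions:
        speed = coef[0] * speed + coef[1] * a
    return speed

log = generate_fot_log()
coef = fit_environment_model(log)
print(np.allclose(coef, [0.9, 2.0], atol=1e-6))  # → True (exact recovery on noise-free data)
```

On noise-free linear dynamics the fit is exact; the paper's setting of course involves richer dynamics and more expressive IL models, but the train-on-logs / verify-in-simulation loop is the same shape.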
IoV-Fog-Assisted Framework for Accident Detection and Classification
IF 2 CAS Tier 3, Computer Science Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2023-11-24 DOI: 10.1145/3633805
Navin Kumar, Sandeep Kumar Sood, Munish Saini

The evolution of vehicular research into an effectuating area like the Internet of Vehicles (IoV) was verified by technical developments in hardware. The integration of the Internet of Things (IoT) and Vehicular Ad-hoc Networks (VANET) has significantly impacted addressing various problems, from dangerous situations to finding practical solutions. During a catastrophic collision, the vehicle experiences extreme turbulence, which may be captured using Micro-Electromechanical systems (MEMS) to yield signatures characterizing the severity of the accident. This study presents a three-layer design, with the data collecting layer relying on a low-power IoT configuration that includes GPS and an MPU 6050 placed on an Arduino Mega. The fog layer oversees data pre-processing and other low-level computing operations. With its extensive computing capabilities, the farthest cloud layer carries out Multidimensional Dynamic Time Warping (MDTW) to identify accidents and maintains the information repository by updating it. The experimentation compared the state-of-the-art algorithms such as Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest Tree (RFT) using threshold-based detection with the proposed MDTW clustering approach. Data collection involves simulating accidents via VirtualCrash for training and testing, whereas the IoV circuitry would be utilized in actual real-life scenarios. The proposed approach achieved an F1-Score of 0.8921 and 0.8184 for rear and head-on collisions.

{"title":"IoV-Fog-Assisted Framework for Accident Detection and Classification","authors":"Navin Kumar, Sandeep Kumar Sood, Munish Saini","doi":"10.1145/3633805","DOIUrl":"https://doi.org/10.1145/3633805","url":null,"abstract":"<p>The evolution of vehicular research into an effectuating area like the Internet of Vehicles (IoV) was verified by technical developments in hardware. The integration of the Internet of Things (IoT) and Vehicular Ad-hoc Networks (VANET) has significantly impacted addressing various problems, from dangerous situations to finding practical solutions. During a catastrophic collision, the vehicle experiences extreme turbulence, which may be captured using Micro-Electromechanical systems (MEMS) to yield signatures characterizing the severity of the accident. This study presents a three-layer design, with the data collecting layer relying on a low-power IoT configuration that includes GPS and an MPU 6050 placed on an Arduino Mega. The fog layer oversees data pre-processing and other low-level computing operations. With its extensive computing capabilities, the farthest cloud layer carries out Multidimensional Dynamic Time Warping (MDTW) to identify accidents and maintains the information repository by updating it. The experimentation compared the state-of-the-art algorithms such as Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest Tree (RFT) using threshold-based detection with the proposed MDTW clustering approach. Data collection involves simulating accidents via VirtualCrash for training and testing, whereas the IoV circuitry would be utilized in actual real-life scenarios. 
The proposed approach achieved an F1-Score of 0.8921 and 0.8184 for rear and head-on collisions.</p>","PeriodicalId":50914,"journal":{"name":"ACM Transactions on Embedded Computing Systems","volume":"50 5","pages":""},"PeriodicalIF":2.0,"publicationDate":"2023-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138509175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
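The MDTW distance at the heart of the clustering step above can be sketched as a dependent multidimensional DTW: one warping path over all sensor axes with a per-frame Euclidean cost. The two prototype accelerometer signatures and the nearest-prototype classification below are invented for illustration; the paper's real signatures come from an MPU 6050 and a VirtualCrash-trained pipeline:

```python
# Minimal dependent multidimensional DTW, assuming multivariate sequences of
# equal-dimension frames (here: 2-axis acceleration samples). Illustrative only.
from math import dist, inf

def mdtw(seq_a, seq_b):
    """Dependent MDTW: per-frame Euclidean cost, O(len(seq_a) * len(seq_b))."""
    n, m = len(seq_a), len(seq_b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(seq_a[i - 1], seq_b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# Toy 2-axis acceleration prototypes (time x [ax, ay]); purely hypothetical.
rear_proto = [(0, 0), (0, 1), (0, 5), (0, 2), (0, 0)]     # jolt along y
head_on_proto = [(0, 0), (4, 0), (8, 0), (3, 0), (0, 0)]  # jolt along x

def classify(sample):
    """Nearest-prototype classification under the MDTW distance."""
    d_rear = mdtw(sample, rear_proto)
    d_head = mdtw(sample, head_on_proto)
    return "rear" if d_rear < d_head else "head-on"

print(classify([(0, 0), (0, 2), (0, 4), (0, 1)]))  # → rear
print(classify([(1, 0), (5, 0), (7, 0), (2, 0)]))  # → head-on
```

DTW's appeal here is that two crash signatures of different durations or sampling phases can still be matched, which threshold-based detectors handle poorly.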
COBRRA: COntention aware cache Bypass with Request-Response Arbitration
IF 2 CAS Tier 3, Computer Science Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2023-11-17 DOI: 10.1145/3632748
Aritra Bagchi, Dinesh Joshi, Preeti Ranjan Panda

In modern multi-processor systems-on-chip (MPSoCs), requests from different processor cores, accelerators, and their responses from the lower level memory contend for the shared cache bandwidth, making it a critical performance bottleneck. Prior research on shared cache management has considered requests from cores, but has ignored crucial contributions from their responses. Prior cache bypass techniques focused on data reuse and neglected the system-level implications of shared cache contention. We propose COBRRA, a novel shared cache controller policy that mitigates the contention by aggressively bypassing selected responses from the lower level memory, and scheduling the remaining requests and responses to the cache efficiently. COBRRA is able to improve the average performance of a set of 15 SPEC workloads by 49% and 33% compared to the no-bypass baseline and the best performing state-of-the-art bypass solution, respectively. Furthermore, COBRRA reduces the overall cache energy consumption by 38% and 31% compared to the no-bypass baseline and the most energy-efficient state-of-the-art bypass solution, respectively.

{"title":"COBRRA: COntention aware cache Bypass with Request-Response Arbitration","authors":"Aritra Bagchi, Dinesh Joshi, Preeti Ranjan Panda","doi":"10.1145/3632748","DOIUrl":"https://doi.org/10.1145/3632748","url":null,"abstract":"<p>In modern multi-processor systems-on-chip (MPSoCs), requests from different processor cores, accelerators, and their responses from the lower level memory contend for the shared cache bandwidth, making it a critical performance bottleneck. Prior research on shared cache management has considered requests from cores, but has ignored crucial contributions from their responses. Prior cache bypass techniques focused on data reuse and neglected the system-level implications of shared cache contention. We propose COBRRA, a novel shared cache controller policy that mitigates the contention by aggressively bypassing selected responses from the lower level memory, and scheduling the remaining requests and responses to the cache efficiently. COBRRA is able to improve the average performance of a set of 15 SPEC workloads by (49% ) and (33% ) compared to the no-bypass baseline and the best performing state-of-the-art bypass solution, respectively. Furthermore, COBRRA reduces the overall cache energy consumption by (38% ) and (31% ) compared to the no-bypass baseline and the most energy-efficient state-of-the-art bypass solution, respectively.</p>","PeriodicalId":50914,"journal":{"name":"ACM Transactions on Embedded Computing Systems","volume":"49 8","pages":""},"PeriodicalIF":2.0,"publicationDate":"2023-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138509181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
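Contention-aware bypassing of responses, the mechanism the COBRRA abstract describes, can be illustrated with a toy LRU cache that refuses to fill low-reuse blocks while many accesses are in flight. This is a minimal sketch of the general idea under invented parameters, not the paper's actual policy or its request-response arbitration logic:

```python
# Toy shared cache: fills from lower-level memory bypass the cache when
# contention is high and the block has shown little reuse, so streaming
# responses cannot evict lines that are still being reused. Illustrative only.
from collections import OrderedDict

class BypassingCache:
    def __init__(self, capacity, contention_threshold=4):
        self.capacity = capacity
        self.contention_threshold = contention_threshold
        self.lines = OrderedDict()   # recency order: least recently used first
        self.reuse = {}              # accesses observed per address so far
        self.bypassed = 0            # fills served without inserting into the cache

    def access(self, addr, in_flight=0):
        """One access; `in_flight` is a proxy for current request/response contention."""
        hit = addr in self.lines
        if hit:
            self.lines.move_to_end(addr)        # refresh LRU position
        else:
            self._maybe_fill(addr, in_flight)
        self.reuse[addr] = self.reuse.get(addr, 0) + 1
        return hit

    def _maybe_fill(self, addr, in_flight):
        # Bypass decision: under high contention, do not cache a low-reuse block.
        if in_flight >= self.contention_threshold and self.reuse.get(addr, 0) < 2:
            self.bypassed += 1
            return
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)      # evict the LRU line
        self.lines[addr] = True

cache = BypassingCache(capacity=2, contention_threshold=4)
cache.access("hot")                      # cold miss, cached under low contention
for blk in ("a", "b", "c"):              # streaming misses while 8 accesses contend
    cache.access(blk, in_flight=8)       # all three bypass; "hot" is not evicted
print(cache.access("hot", in_flight=8))  # → True (still a hit)
print(cache.bypassed)                    # → 3
```

Without the bypass predicate, the three streaming blocks would have evicted "hot" from the 2-entry cache; that protection of reused lines under pressure is the system-level effect the paper targets.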
Energy-Aware Adaptive Mixed-Criticality Scheduling with Semi-Clairvoyance and Graceful Degradation
CAS Tier 3, Computer Science Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2023-11-13 DOI: 10.1145/3632749
Yi-Wen Zhang, Hui Zheng, Zonghua Gu
The classic Mixed-Criticality System (MCS) task model is a non-clairvoyance model in which the change of the system behavior is based on the completion of high-criticality tasks while dropping low-criticality tasks in high-criticality mode. In this paper, we simultaneously consider graceful degradation and semi-clairvoyance in MCS. We first propose the analysis for adaptive mixed-criticality with semi-clairvoyance denoted as C-AMC-sem. The so-called semi-clairvoyance refers to the system’s behavior change being revealed at the time that jobs are released. Moreover, we propose a new algorithm based on C-AMC-sem to reduce energy consumption. Finally, we verify the performance of the proposed algorithms via experiments upon synthetically generated tasksets. The experimental results indicate that the proposed algorithms significantly outperform the existing algorithms.
{"title":"Energy-Aware Adaptive Mixed-Criticality Scheduling with Semi-Clairvoyance and Graceful Degradation","authors":"Yi-Wen Zhang, Hui Zheng, Zonghua Gu","doi":"10.1145/3632749","DOIUrl":"https://doi.org/10.1145/3632749","url":null,"abstract":"The classic Mixed-Criticality System (MCS) task model is a non-clairvoyance model in which the change of the system behavior is based on the completion of high-criticality tasks while dropping low-criticality tasks in high-criticality mode. In this paper, we simultaneously consider graceful degradation and semi-clairvoyance in MCS. We first propose the analysis for adaptive mixed-criticality with semi-clairvoyance denoted as C-AMC-sem. The so-called semi-clairvoyance refers to the system’s behavior change being revealed at the time that jobs are released. Moreover, we propose a new algorithm based on C-AMC-sem to reduce energy consumption. Finally, we verify the performance of the proposed algorithms via experiments upon synthetically generated tasksets. The experimental results indicate that the proposed algorithms significantly outperform the existing algorithms.","PeriodicalId":50914,"journal":{"name":"ACM Transactions on Embedded Computing Systems","volume":"46 20","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136347911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
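The mode-switch behavior this abstract builds on — a high-criticality overrun triggers HI mode, and graceful degradation keeps LO tasks at a reduced service level instead of dropping them — can be sketched as follows. All task parameters and class names are invented, and this toy detects the overrun at runtime; the paper's semi-clairvoyant model instead assumes the behavior change is revealed when the job is released:

```python
# Toy mixed-criticality mode switch with graceful degradation. Illustrative
# sketch only; it does not implement the paper's C-AMC-sem analysis.
LO, HI = "LO", "HI"

class Task:
    def __init__(self, name, criticality, wcet_lo, wcet_hi=None, degraded_share=0.0):
        self.name = name
        self.criticality = criticality
        self.wcet_lo = wcet_lo                       # optimistic (LO-mode) budget
        self.wcet_hi = wcet_hi if wcet_hi is not None else wcet_lo
        self.degraded_share = degraded_share         # LO-task service kept in HI mode

class MixedCriticalitySystem:
    def __init__(self, tasks):
        self.tasks = tasks
        self.mode = LO

    def record_execution(self, task, observed_time):
        # Classic (non-clairvoyant) rule: switch to HI mode the moment a
        # HI-criticality job exceeds its LO-mode budget.
        if task.criticality == HI and observed_time > task.wcet_lo:
            self.mode = HI

    def service_level(self, task):
        if self.mode == LO or task.criticality == HI:
            return 1.0
        return task.degraded_share  # graceful degradation instead of dropping

tasks = [
    Task("control", HI, wcet_lo=2, wcet_hi=5),
    Task("logging", LO, wcet_lo=1, degraded_share=0.25),
]
mcs = MixedCriticalitySystem(tasks)
mcs.record_execution(tasks[0], observed_time=3)  # HI task overruns its LO budget
print(mcs.mode)                                  # → HI
print(mcs.service_level(tasks[1]))               # → 0.25
```

A degraded_share of 0 recovers the classic drop-all-LO-tasks model, which is the baseline the paper improves on; energy awareness would additionally scale processor speed, which this sketch omits.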