
Latest Publications in ACM Transactions on Autonomous and Adaptive Systems

IBAQ: Frequency-Domain Backdoor Attack Threatening Autonomous Driving via Quadratic Phase
IF 2.7, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-06-19. DOI: 10.1145/3673904
Jinghan Qiu, Honglong Chen, Junjian Li, Yudong Gao, Junwei Li, Xingang Wang

The rapid evolution of backdoor attacks has emerged as a significant threat to the security of autonomous driving models. An attacker injects a backdoor into the model by adding triggers to the samples, which can later be activated to manipulate the model's inference. Backdoor attacks can lead to severe consequences, such as misidentifying traffic signs during autonomous driving, posing a risk of causing traffic accidents. Recently, frequency-domain backdoor attacks have gradually evolved. However, since changing both the amplitude and its corresponding phase significantly alters image appearance, most existing frequency-domain backdoor attacks change only the amplitude, which results in suboptimal attack efficacy. In this work, we propose an attack called IBAQ, which addresses this problem by blurring the semantic information of the trigger image through a quadratic phase. Initially, we convert the trigger and the benign sample to the YCrCb color space. Then, we perform the fast Fourier transform on the Y channel, blending the trigger image's amplitude and quadratic phase linearly with the benign sample's amplitude and phase. IBAQ achieves covert injection of trigger information within both amplitude and phase, enhancing the attack effect. We validate the effectiveness and stealthiness of IBAQ through comprehensive experiments.
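No code accompanies the abstract; as a rough illustration of the frequency-domain blending step it describes, the NumPy sketch below mixes the amplitude and a quadratic phase of a trigger image's Y channel into a benign image's Y channel. The function name, the blending weights alpha/beta, and the particular quadratic-phase construction are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def ibaq_style_blend(benign_y, trigger_y, alpha=0.1, beta=0.1):
    """Frequency-domain trigger blending sketch (illustrative, not the authors' code).

    benign_y, trigger_y: 2-D float arrays holding the Y (luminance) channel of the
    benign and trigger images, assumed to have the same shape.
    alpha, beta: linear blending weights for amplitude and phase (assumed values).
    """
    # Fast Fourier transform of both luminance channels.
    fb = np.fft.fft2(benign_y)
    ft = np.fft.fft2(trigger_y)

    # Decompose each spectrum into amplitude and phase.
    amp_b, phase_b = np.abs(fb), np.angle(fb)
    amp_t, phase_t = np.abs(ft), np.angle(ft)

    # "Quadratic phase": squaring the trigger phase (then re-wrapping) scrambles its
    # semantic content -- one plausible reading of the abstract, not the exact method.
    quad_phase_t = np.angle(np.exp(1j * phase_t ** 2))

    # Linear blending of amplitudes and phases, as described in the abstract.
    amp_mix = (1 - alpha) * amp_b + alpha * amp_t
    phase_mix = (1 - beta) * phase_b + beta * quad_phase_t

    # Back to the spatial domain; keep the real part and a valid pixel range.
    poisoned = np.fft.ifft2(amp_mix * np.exp(1j * phase_mix)).real
    return np.clip(poisoned, 0.0, 255.0)
```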

Citations: 0
Adaptive Scheduling of High-Availability Drone Swarms for Congestion Alleviation in Connected Automated Vehicles
IF 2.7, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-06-19. DOI: 10.1145/3673905
Shengye Pang, Yi Li, Zhen Qin, Xinkui Zhao, Jintao Chen, Fan Wang, Jianwei Yin

The Intelligent Transportation System (ITS) serves as a pivotal element within urban networks, offering decision support to users and connected automated vehicles (CAVs) through comprehensive information gathering, sensing, device control, and data processing. Presently, ITS predominantly relies on sensors embedded in fixed infrastructure, notably Roadside Units (RSUs). However, RSUs are confined by coverage limitations and may struggle to respond promptly to emergencies. On-demand resources, such as drones, present a viable option to supplement these deficiencies effectively. This paper introduces an approach that integrates Software-Defined Networking (SDN) and Mobile Edge Computing (MEC) technologies to formulate a high-availability drone swarm control and communication infrastructure framework, comprising the cloud layer, edge layer, and device layer. Drones face limited flight duration due to battery constraints, which makes sustained monitoring of road conditions over extended periods challenging. Effective drone scheduling is a promising way to overcome these constraints. To tackle this issue, we first used Graph WaveNet, a graph neural network architecture tailored for spatial-temporal graph modeling, to train a congestion prediction model on real-world data. Building upon this, we further propose an algorithm for drone scheduling based on congestion prediction. Our simulation experiments using real-world data demonstrate that, compared to the baseline method, the proposed scheduling algorithm not only yielded superior scheduling gains but also reduced drone idle rates.
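As a hedged sketch of how a congestion forecast might drive drone dispatch, the snippet below greedily assigns the drones with the most remaining battery to the road segments with the highest predicted congestion. The Drone structure, battery threshold, and greedy rule are illustrative assumptions; the paper's algorithm builds on a trained Graph WaveNet predictor rather than the stubbed forecast used here.

```python
from dataclasses import dataclass

@dataclass
class Drone:
    drone_id: str
    battery: float  # remaining battery in [0, 1]

def schedule_drones(drones, predicted_congestion, min_battery=0.3):
    """Greedy congestion-driven dispatch (illustrative, not the paper's algorithm).

    predicted_congestion: mapping road_segment -> predicted congestion score,
    e.g. produced by a trained Graph WaveNet model.
    Returns a mapping road_segment -> drone_id.
    """
    # Monitor the most congested segments first.
    segments = sorted(predicted_congestion, key=predicted_congestion.get, reverse=True)
    # Prefer drones with the most remaining battery above the safety threshold.
    available = sorted((d for d in drones if d.battery >= min_battery),
                       key=lambda d: d.battery, reverse=True)
    assignment = {}
    for segment, drone in zip(segments, available):
        assignment[segment] = drone.drone_id
    return assignment

# Example usage with a stubbed congestion forecast.
forecast = {"segment_A": 0.9, "segment_B": 0.4, "segment_C": 0.7}
fleet = [Drone("d1", 0.8), Drone("d2", 0.2), Drone("d3", 0.6)]
print(schedule_drones(fleet, forecast))  # {'segment_A': 'd1', 'segment_C': 'd3'}
```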

Citations: 0
Self-Supervised Machine Learning Framework for Online Container Security Attack Detection
IF 2.7, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-05-28. DOI: 10.1145/3665795
Olufogorehan Tunde-Onadele, Yuhang Lin, Xiaohui Gu, Jingzhu He, Hugo Latapie

Container security has received much research attention recently. Previous work has proposed applying various machine learning techniques to detect security attacks in containerized applications. On one hand, supervised machine learning schemes require sufficient labeled training data to achieve good attack detection accuracy. On the other hand, unsupervised machine learning methods are more practical because they avoid the need for labeled training data, but they often suffer from high false alarm rates. In this paper, we present a generic self-supervised hybrid learning (SHIL) framework for achieving efficient online security attack detection in containerized systems. SHIL can effectively combine both unsupervised and supervised learning algorithms but does not require any manual data labeling. We have implemented a prototype of SHIL and conducted experiments over 46 real-world security attacks in 29 commonly used server applications. Our experimental results show that SHIL can reduce false alarms by 33-93% compared to existing supervised, unsupervised, or semi-supervised machine learning schemes while achieving a higher or similar detection rate.
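The abstract's hybrid idea can be illustrated with a minimal scikit-learn sketch, assuming the common pattern in which an unsupervised detector pseudo-labels unlabeled traces and those pseudo-labels then train a supervised classifier, with no manual labeling. This is an assumed reconstruction of the general self-supervised hybrid pattern, not the SHIL implementation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

def hybrid_detector(traces):
    """Self-supervised hybrid detection sketch (assumed, not the SHIL code).

    traces: (n_samples, n_features) array of unlabeled system-call / metric features.
    Returns a supervised classifier trained on pseudo-labels.
    """
    # Stage 1: unsupervised detector produces pseudo-labels (1 = anomalous).
    unsup = IsolationForest(contamination=0.05, random_state=0).fit(traces)
    pseudo_labels = (unsup.predict(traces) == -1).astype(int)

    # Stage 2: supervised model trained on the pseudo-labels; its sharper decision
    # boundary is what reduces false alarms in the hybrid scheme.
    sup = RandomForestClassifier(n_estimators=100, random_state=0)
    sup.fit(traces, pseudo_labels)
    return sup

# Example with synthetic container metrics.
rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(950, 8))
attacks = rng.normal(4, 1, size=(50, 8))
model = hybrid_detector(np.vstack([normal, attacks]))
print(model.predict(rng.normal(4, 1, size=(3, 8))))  # likely flagged as anomalous
```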

Citations: 0
A Framework for Simultaneous Task Allocation and Planning under Uncertainty
IF 2.7, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-05-28. DOI: 10.1145/3665499
Fatma Faruq, Bruno Lacerda, Nick Hawes, David Parker

We present novel techniques for simultaneous task allocation and planning in multi-robot systems operating under uncertainty. By performing task allocation and planning simultaneously, allocations are informed by individual robot behaviour, creating more efficient team behaviour. We go beyond existing work by planning for task reallocation across the team given a model of partial task satisfaction under potential robot failures and uncertain action outcomes. We model the problem using Markov decision processes, with tasks encoded in co-safe linear temporal logic, and optimise for the expected number of tasks completed by the team. To avoid the inherent complexity of joint models, we propose an alternative model that simultaneously considers task allocation and planning, but in a sequential fashion. We then build a joint policy from the sequential policy obtained from our model, thus allowing for concurrent policy execution. Furthermore, to enable adaptation in the case of robot failures, we consider replanning from failure states and propose an approach to preemptively replan in an anytime fashion, replanning for more probable failure states first. Our method also allows us to quantify the performance of the team by providing an analysis of properties such as the expected number of completed tasks under concurrent policy execution. We implement and extensively evaluate our approach on a range of scenarios. We compare its performance to a state-of-the-art baseline in decoupled task allocation and planning: sequential single-item auctions. Our approach outperforms the baseline in terms of computation time and the number of times replanning is required on robot failure.
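A full reconstruction of the co-safe LTL product MDP is beyond an abstract, but the core computation — value iteration that maximises the expected number of completed tasks under uncertain action outcomes — can be sketched as follows. The toy two-task model, the failure probability, and the reward definition are assumptions for illustration only.

```python
def value_iteration(states, actions, transition, reward, gamma=1.0, eps=1e-6):
    """Generic value iteration; transition(s, a) -> list of (prob, next_state)."""
    values = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if not actions(s):           # terminal state: no actions, value stays fixed
                continue
            best = max(
                sum(p * (reward(s, a, s2) + gamma * values[s2])
                    for p, s2 in transition(s, a))
                for a in actions(s)
            )
            delta = max(delta, abs(best - values[s]))
            values[s] = best
        if delta < eps:
            return values

# Toy model: a robot attempts task t1 then t2; each attempt fails with probability 0.2,
# and a failed robot stops. Reward 1 per completed task (the "expected tasks" metric).
states = ["start", "t1_done", "all_done", "failed"]
def actions(s):
    return {"start": ["do_t1"], "t1_done": ["do_t2"]}.get(s, [])
def transition(s, a):
    nxt = {"do_t1": "t1_done", "do_t2": "all_done"}[a]
    return [(0.8, nxt), (0.2, "failed")]
def reward(s, a, s2):
    return 1.0 if s2 in ("t1_done", "all_done") else 0.0

print(value_iteration(states, actions, transition, reward))
# Expected completed tasks from "start" is 0.8 + 0.8 * 0.8 = 1.44
```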

Citations: 0
Adaptation in Edge Computing: A review on design principles and research challenges
IF 2.7, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-05-09. DOI: 10.1145/3664200
Fatemeh Golpayegani, Nanxi Chen, Nima Afraz, Eric Gyamfi, Abdollah Malekjafarian, Dominik Schäfer, Christian Krupitzer

Edge Computing places computational services and resources closer to the user to reduce latency and ensure quality of service and experience. Low latency, context awareness, and mobility support are the major contributors to edge-enabled smart systems. Such systems must handle new situations and changes on the fly and ensure quality of service while having access only to constrained computation and communication resources and operating in mobile, dynamic, and ever-changing environments. Hence, adaptation and self-organisation are crucial for such systems to maintain their performance and operability while accommodating new changes in their environment.

This paper reviews the current literature in the field of adaptive Edge Computing systems. We use a widely accepted taxonomy, which describes the important aspects of implementing adaptive behaviour in computing systems. This taxonomy covers aspects such as the reasons for adaptation, the various levels at which an adaptation strategy can be implemented, the time of reaction to a change, categories of adaptation techniques, and control of the adaptive behaviour. In this paper, we discuss how these aspects are addressed in the literature and identify the open research challenges and future directions in adaptive Edge Computing systems.
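To make the taxonomy concrete, the sketch below records how a hypothetical surveyed approach could be classified along some of the aspects listed above; the enumeration values and the example entry are assumptions, not entries from the review.

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    APPLICATION = "application"
    MIDDLEWARE = "middleware"
    COMMUNICATION = "communication infrastructure"
    CONTEXT = "context"

class Timing(Enum):
    REACTIVE = "reactive"
    PROACTIVE = "proactive"

class Control(Enum):
    CENTRALISED = "centralised"
    DECENTRALISED = "decentralised"

@dataclass
class SurveyedApproach:
    """One row of a survey table: how an approach maps onto the taxonomy."""
    name: str
    adaptation_reason: str   # e.g. "workload change", "node failure"
    level: Level
    timing: Timing
    control: Control

# Hypothetical entry illustrating the dominant pattern reported by the survey:
# application-level, reactive, centrally controlled adaptation.
example = SurveyedApproach(
    name="hypothetical offloading scheme",
    adaptation_reason="latency violation",
    level=Level.APPLICATION,
    timing=Timing.REACTIVE,
    control=Control.CENTRALISED,
)
print(example)
```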

The results of our analysis show that most of the identified approaches target adaptation at the application level, and only a few focus on middleware, communication infrastructure, and context. Adaptations required to address changes in the context, changes caused by users, or changes in the system itself are also less explored. Furthermore, most of the literature has opted for reactive adaptation, although proactive adaptation is essential to maintain the edge computing systems' performance and interoperability by anticipating the required adaptations on the fly. Additionally, most approaches apply centralised adaptation control, which does not perfectly fit the mostly decentralised/distributed Edge Computing settings.

Citations: 0
OptimML: Joint Control of Inference Latency and Server Power Consumption for ML Performance Optimization
IF 2.7, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-05-07. DOI: 10.1145/3661825
Guoyu Chen, Xiaorui Wang

Power capping is an important technique for high-density servers to safely oversubscribe the power infrastructure in a data center. However, power capping is commonly accomplished by dynamically lowering the server processors' frequency levels, which can result in degraded application performance. For servers that run important machine learning (ML) applications with Service-Level Objective (SLO) requirements, inference performance such as recognition accuracy must be optimized within a certain latency constraint, which demands high server performance. In order to achieve the best inference accuracy under the desired latency and server power constraints, this paper proposes OptimML, a multi-input-multi-output (MIMO) control framework that jointly controls both inference latency and server power consumption, by flexibly adjusting the machine learning model size (and so its required computing resources) when server frequency needs to be lowered for power capping. Our results on a hardware testbed with widely adopted ML frameworks (including PyTorch, TensorFlow, and MXNet) show that OptimML achieves higher inference accuracy compared with several well-designed baselines, while respecting both latency and power constraints. Furthermore, an adaptive control scheme with online model switching and estimation is designed to achieve analytic assurance of control accuracy and system stability, even in the face of significant workload/hardware variations.
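The joint-control idea can be caricatured as a simple feedback step: pick a processor frequency that respects the power cap, then pick the most accurate model whose predicted latency still meets the SLO at that frequency. The thresholds, the model table, and the switching rule below are illustrative assumptions, not OptimML's actual MIMO controller.

```python
# Candidate models: (name, relative accuracy, latency multiplier at full frequency).
MODELS = [("large", 0.95, 1.0), ("medium", 0.92, 0.6), ("small", 0.88, 0.35)]

def control_step(measured_power_w, power_cap_w, base_latency_ms, latency_slo_ms,
                 freq_levels=(2.4, 2.0, 1.6, 1.2)):
    """One illustrative control step: pick a frequency under the power cap, then the
    most accurate model that still meets the latency SLO at that frequency."""
    # Crude frequency selection: step down one level per 10% of power overshoot.
    overshoot = max(0.0, measured_power_w / power_cap_w - 1.0)
    idx = min(len(freq_levels) - 1, int(overshoot / 0.1))
    freq = freq_levels[idx]
    slowdown = freq_levels[0] / freq     # lower frequency -> proportionally slower

    # Model selection: most accurate model whose predicted latency fits the SLO.
    for name, accuracy, lat_mult in MODELS:
        predicted = base_latency_ms * lat_mult * slowdown
        if predicted <= latency_slo_ms:
            return freq, name, accuracy, predicted
    # Fall back to the smallest model if nothing fits.
    name, accuracy, lat_mult = MODELS[-1]
    return freq, name, accuracy, base_latency_ms * lat_mult * slowdown

# Example: 15% power overshoot forces a lower frequency; the large model still fits.
print(control_step(measured_power_w=230, power_cap_w=200,
                   base_latency_ms=40, latency_slo_ms=60))
```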

Citations: 0
Applying Trust for Operational States of ICT-Enabled Power Grid Services
IF 2.7, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-04-03. DOI: 10.1145/3654672
Michael Brand, Anand Narayan, Sebastian Lehnhoff

Digitalization enables the automation required to operate modern cyber-physical energy systems (CPESs), leading to a shift from hierarchical to organic systems. However, digitalization increases the number of factors affecting the state of a CPES (e.g., software bugs and cyber threats). In addition to established factors like functional correctness, others like security become relevant but are yet to be integrated into an operational viewpoint, i.e., a holistic perspective on the system state. Trust in organic computing is an approach to gaining a holistic view of the state of systems. It consists of several facets (e.g., functional correctness, security, and reliability), which can be used to assess the state of a CPES. Therefore, a trust assessment on all levels can contribute to a coherent state assessment. This paper focuses on trust in ICT-enabled grid services in a CPES. These services are essential for operating the CPES, and their performance relies on various data aspects like availability, timeliness, and correctness. This paper proposes assessing the trust in the involved components and data to estimate data correctness, which is crucial for grid services. The assessment is presented for two exemplary grid services, namely state estimation and coordinated voltage control. Furthermore, the interpretation of different trust facets is also discussed.
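The sketch below shows one way component-level trust could be aggregated from facet scores (e.g., functional correctness, security, reliability, timeliness) and then propagated to a grid service such as state estimation. The facet names, weights, and the weakest-link propagation rule are assumptions used for illustration; the paper defines its own assessment.

```python
def component_trust(facets, weights=None):
    """Weighted aggregation of trust facets, each scored in [0, 1] (illustrative)."""
    weights = weights or {name: 1.0 for name in facets}
    total = sum(weights[name] for name in facets)
    return sum(facets[name] * weights[name] for name in facets) / total

def service_trust(component_trusts):
    """A grid service is only as trustworthy as its weakest input (one possible rule)."""
    return min(component_trusts.values())

# Hypothetical components feeding a state-estimation service.
rtu = component_trust({"functional_correctness": 0.9, "security": 0.7, "reliability": 0.95})
scada_link = component_trust({"functional_correctness": 0.85, "security": 0.6, "timeliness": 0.9})
print(service_trust({"rtu": rtu, "scada_link": scada_link}))
```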

Citations: 0
A Game-Theoretical Self-Adaptation Framework for Securing Software-Intensive Systems
IF 2.7, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-03-22. DOI: 10.1145/3652949
Nianyu Li, Mingyue Zhang, Jialong Li, Sridhar Adepu, Eunsuk Kang, Zhi Jin

Security attacks present unique challenges to the design of self-adaptation mechanisms for software-intensive systems due to the adversarial nature of the environment. Game-theoretical approaches have been explored in security to model malicious behaviors and design reliable defenses for the system in a mathematically grounded manner. However, modeling the system as a single player, as done in prior works, is insufficient for a system under partial compromise and for the design of fine-grained defensive policies in which the rest of the system, with its autonomy, can cooperate to mitigate the impact of attacks. To address such issues, we propose a new self-adaptation framework incorporating Bayesian game theory and model the defender (i.e., the system) at the granularity of components. Under security attacks, the architecture model of the system is automatically translated, by the proposed translation process with designed algorithms, into a multi-player Bayesian game. This representation allows each component to be modelled as an independent player, while security attacks are encoded as variant types for the components. By solving for a pure-strategy equilibrium (i.e., the adaptation response), the system's optimal defensive strategy is dynamically computed, enhancing system resilience against security attacks by maximizing system utility. We validate the effectiveness of our framework through two sets of experiments using generic benchmark tasks tailored for the security domain. Additionally, we exemplify the practical application of our approach through a real-world implementation in the Secure Water Treatment System to demonstrate its applicability and potency in mitigating security risks.
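As a much-simplified illustration of component-granular defense under attacker uncertainty, the sketch below enumerates joint actions of two defending components and picks the one maximising expected system utility over a belief about the attacker's target. The component values, patch cost, and belief are invented numbers, and the brute-force enumeration is a stand-in for the paper's Bayesian-game equilibrium computation.

```python
from itertools import product

# Two components, each of which can "patch" (costly) or "run" normally.
COMPONENT_ACTIONS = ["run", "patch"]
PATCH_COST = 2.0
COMPONENT_VALUE = {"web": 10.0, "db": 15.0}

# Belief over attacker types (which component the attacker targets).
ATTACKER_BELIEF = {"targets_web": 0.4, "targets_db": 0.6}

def system_utility(joint_action, attacker_type):
    """Utility of the defender (the system) for one joint action and attacker type."""
    utility = 0.0
    for comp, action in joint_action.items():
        utility += COMPONENT_VALUE[comp]
        if action == "patch":
            utility -= PATCH_COST
        elif attacker_type == f"targets_{comp}":
            utility -= COMPONENT_VALUE[comp]   # an unpatched target is compromised
    return utility

def best_joint_defense():
    """Enumerate joint component actions and pick the one maximising expected utility."""
    best, best_value = None, float("-inf")
    for actions in product(COMPONENT_ACTIONS, repeat=len(COMPONENT_VALUE)):
        joint = dict(zip(COMPONENT_VALUE, actions))
        value = sum(p * system_utility(joint, t) for t, p in ATTACKER_BELIEF.items())
        if value > best_value:
            best, best_value = joint, value
    return best, best_value

print(best_joint_defense())  # patching both components maximises expected utility here
```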

Citations: 0
Self-Adapting Machine Learning-based Systems via a Probabilistic Model Checking Framework
IF 2.7, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-03-07. DOI: 10.1145/3648682
Maria Casimiro, Diogo Soares, David Garlan, Luís Rodrigues, Paolo Romano

This paper focuses on the problem of optimizing the system utility of Machine-Learning (ML)-based systems in the presence of ML mispredictions. This is achieved via the use of self-adaptive systems and through the execution of adaptation tactics, such as model retraining, which operate at the level of individual ML components.

To address this problem, we propose a probabilistic modeling framework that reasons about the cost/benefit trade-offs associated with adapting ML components. The key idea of the proposed approach is to decouple the problems of estimating (i) the expected performance improvement after adaptation and (ii) the impact of ML adaptation on overall system utility.
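As a minimal numeric sketch of this decoupling, one can separately estimate (i) the expected quality gain from retraining and (ii) how that gain maps to system utility, and trigger retraining only when the expected utility gain outweighs the tactic's cost. The linear utility mapping and all numbers below are assumptions, not the paper's probabilistic model.

```python
def expected_utility_gain(current_accuracy, expected_accuracy_after_retrain,
                          utility_per_accuracy_point, retrain_cost):
    """Decoupled cost/benefit estimate for the 'retrain' adaptation tactic."""
    # (i) expected improvement of the ML component itself
    accuracy_gain = expected_accuracy_after_retrain - current_accuracy
    # (ii) impact of that improvement on overall system utility (assumed linear here)
    utility_gain = accuracy_gain * utility_per_accuracy_point
    return utility_gain - retrain_cost

def choose_tactic(current_accuracy, expected_accuracy_after_retrain,
                  utility_per_accuracy_point=1000.0, retrain_cost=50.0):
    gain = expected_utility_gain(current_accuracy, expected_accuracy_after_retrain,
                                 utility_per_accuracy_point, retrain_cost)
    return ("retrain" if gain > 0 else "nop"), gain

# Example: retraining is expected to lift fraud-detection accuracy from 0.90 to 0.97.
print(choose_tactic(0.90, 0.97))   # ('retrain', 20.0)
```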

We apply the proposed framework to engineer a self-adaptive ML-based fraud-detection system, which we evaluate using a publicly available, real fraud-detection dataset. We initially consider a scenario in which information on the model's quality is immediately available. Next, we relax this assumption by integrating (and extending) state-of-the-art techniques for estimating a model's quality in the proposed framework. We show that by predicting the system utility stemming from retraining an ML component, the probabilistic model checker can generate adaptation strategies that are significantly closer to the optimal, compared against baselines such as periodic or reactive retraining.

Citations: 0
Anunnaki: A Modular Framework for Developing Trusted Artificial Intelligence
IF 2.7, CAS Tier 4 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-03-06. DOI: 10.1145/3649453
Michael Austin Langford, Sol Zilberman, Betty H.C. Cheng

Trustworthy artificial intelligence (Trusted AI) is of utmost importance when learning-enabled components (LECs) are used in autonomous, safety-critical systems. When reliant on deep learning, these systems need to address the reliability, robustness, and interpretability of learning models. In addition to developing strategies to address these concerns, appropriate software architectures are needed to coordinate LECs and ensure they deliver acceptable behavior even under uncertain conditions. This work describes Anunnaki, a model-driven framework comprising loosely-coupled modular services designed to monitor and manage LECs with respect to Trusted AI assurance concerns when faced with different sources of uncertainty. More specifically, the Anunnaki framework supports the composition of independent, modular services to assess and improve the resilience and robustness of AI systems. The design of Anunnaki was guided by several key software engineering principles (e.g., modularity, composability, and reusability) in order to facilitate its use and maintenance to support different aggregate monitoring and assurance analysis tools for LESs and their respective data sets. We demonstrate Anunnaki on two autonomous platforms, a terrestrial rover and an unmanned aerial vehicle. Our studies show how Anunnaki can be used to manage the operations of different autonomous learning-enabled systems with vision-based LECs while exposed to uncertain environmental conditions.
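The modular-service pattern described above can be sketched as independent monitor functions composed into one aggregate assurance check over an LEC's observations. The service interface, the example monitors, and their thresholds are assumptions illustrating the pattern, not Anunnaki's actual API.

```python
from typing import Callable, Dict, List

# A monitor service maps an LEC's inputs/outputs to named assurance verdicts.
MonitorService = Callable[[Dict], Dict[str, bool]]

def odd_monitor(observation: Dict) -> Dict[str, bool]:
    """Checks that the input stays within an assumed operating design domain."""
    return {"within_odd": observation.get("fog_density", 0.0) < 0.5}

def confidence_monitor(observation: Dict) -> Dict[str, bool]:
    """Flags low-confidence LEC outputs (threshold is an illustrative assumption)."""
    return {"confident": observation.get("confidence", 0.0) >= 0.8}

def compose(monitors: List[MonitorService]) -> MonitorService:
    """Compose independent monitor services into one aggregate assurance check."""
    def aggregate(observation: Dict) -> Dict[str, bool]:
        verdicts: Dict[str, bool] = {}
        for monitor in monitors:
            verdicts.update(monitor(observation))
        verdicts["trusted"] = all(verdicts.values())
        return verdicts
    return aggregate

pipeline = compose([odd_monitor, confidence_monitor])
print(pipeline({"fog_density": 0.7, "confidence": 0.92}))
# {'within_odd': False, 'confident': True, 'trusted': False}
```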

Citations: 0