
Latest publications: 2021 IEEE 27th International Conference on Parallel and Distributed Systems (ICPADS)

Stable Cuckoo Filter for Data Streams
Pub Date : 2021-12-01 DOI: 10.1109/ICPADS53394.2021.00023
Shangsen Li, Lailong Luo, Deke Guo, Yawei Zhao
Cuckoo filter (CF), Bloom filter (BF), and their variants are space-efficient probabilistic data structures for approximate set membership queries. However, their data synopses inevitably become unusable after a number of member updates to the set, and updates are common in real-world data-streaming applications such as duplicate-item detection, malicious-URL checking, and caching. It has been shown that some variants of BF can adapt to streaming applications; however, current extensions of BF structures generally incur unstable performance or intolerable membership-testing errors. In this paper, we aim to design a data synopsis for membership testing on data streams with stable performance and tolerable query errors. To this end, we propose Stable Cuckoo Filters (SCF), which evict stale elements and store more recent ones in a fine-grained manner. SCF absorbs the design philosophy of several unsuccessful designs. Specifically, SCFs use elegant update operations to embed time information during insertion and carefully evict stale elements. We show that a tight upper bound on the expected false positive rate (FPR) remains asymptotically constant as new members are inserted. The query error for recent elements (the false negative rate, FNR) depends on the characteristics of the input data stream and the query workload. Extensive experiments on real-world and synthetic datasets show that our designs are more stable than existing BF variants and achieve 7× fewer false errors and up to 3× higher throughput.
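The core idea of embedding time information in filter slots so stale entries can be evicted per-slot can be illustrated with a minimal sketch. This is not the paper's SCF: the class name, the fixed TTL policy, and the hash choices are all assumptions made for illustration; it simply shows a partial-key cuckoo filter whose entries carry insertion timestamps.

```python
import hashlib
import random
import time

class TimestampedCuckooFilter:
    """Illustrative sketch (not the paper's SCF): a cuckoo filter whose
    slots store (fingerprint, insertion_time) so stale entries can be
    evicted in a fine-grained, per-slot manner."""

    def __init__(self, num_buckets=1024, bucket_size=4, max_kicks=500, ttl=60.0):
        self.num_buckets = num_buckets
        self.bucket_size = bucket_size
        self.max_kicks = max_kicks
        self.ttl = ttl  # entries older than ttl seconds are considered stale
        self.buckets = [[] for _ in range(num_buckets)]

    def _fingerprint(self, item):
        return hashlib.sha1(item.encode()).digest()[0]  # 1-byte fingerprint

    def _index(self, item):
        return int.from_bytes(hashlib.md5(item.encode()).digest()[:4], "big") % self.num_buckets

    def _alt_index(self, index, fp):
        # partial-key cuckoo hashing: the alternate bucket depends only on
        # the current bucket and the fingerprint
        h = int.from_bytes(hashlib.md5(bytes([fp])).digest()[:4], "big")
        return (index ^ h) % self.num_buckets

    def _evict_stale(self, bucket, now):
        bucket[:] = [(f, t) for f, t in bucket if now - t <= self.ttl]

    def insert(self, item, now=None):
        now = time.time() if now is None else now
        fp, ts = self._fingerprint(item), now
        i1 = self._index(item)
        i2 = self._alt_index(i1, fp)
        for i in (i1, i2):
            self._evict_stale(self.buckets[i], now)  # reclaim stale slots first
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append((fp, ts))
                return True
        # both buckets full: displace a random resident entry, cuckoo-style
        i = random.choice((i1, i2))
        for _ in range(self.max_kicks):
            j = random.randrange(len(self.buckets[i]))
            (fp, ts), self.buckets[i][j] = self.buckets[i][j], (fp, ts)
            i = self._alt_index(i, fp)
            self._evict_stale(self.buckets[i], now)
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append((fp, ts))
                return True
        return False  # table is effectively full

    def contains(self, item, now=None):
        now = time.time() if now is None else now
        fp = self._fingerprint(item)
        i1 = self._index(item)
        i2 = self._alt_index(i1, fp)
        return any(f == fp and now - t <= self.ttl
                   for i in (i1, i2) for f, t in self.buckets[i])
```

Passing explicit `now` timestamps makes the staleness behavior testable: an item inserted at time 0 with a 10-second TTL answers positively at time 5 and negatively at time 20.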
Citations: 0
ATO-EDGE: Adaptive Task Offloading for Deep Learning in Resource-Constrained Edge Computing Systems
Pub Date : 2021-12-01 DOI: 10.1109/ICPADS53394.2021.00025
Yihao Wang, Ling Gao, J. Ren, Rui Cao, Hai Wang, Jie Zheng, Quanli Gao
On-device deep learning enables mobile devices to perform complex tasks, such as object detection and voice translation, regardless of network conditions. Advanced deep learning models deliver excellent performance but also place a heavy burden on resource-limited devices (i.e., mobile devices). To speed up on-device deep learning, prior studies focus on developing lightweight network architectures for real-time inference at the cost of model accuracy. This paper presents ATO-EDGE, an adaptive task-offloading framework for deep learning based on edge computing. Considering three optimization goals (energy consumption, accuracy, and latency), ATO-EDGE leverages an offline pre-trained model to select a suitable deep learning model on a specific device for each given task. We apply our approach to object detection and evaluate it on the Jetson TX2, Xilinx ZYNQ 7020, and Raspberry Pi 3B+. The candidate pool contains ten typical object detection models trained on the Microsoft COCO 2017 dataset. Compared to the state-of-the-art DETR model on the Raspberry Pi, we obtain average improvements of 28.25% in latency, 35.44% in energy consumption, and 0.9 in mAP (mean average precision).
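The selection step ATO-EDGE performs (choosing one model from a candidate pool under latency, energy, and accuracy goals) can be sketched with a simple scoring rule. This is a hypothetical stand-in, not ATO-EDGE's trained offloading policy: the `ModelProfile` fields, the latency-budget filter, and the weighted score are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    latency_ms: float   # profiled on the target device
    energy_mj: float    # profiled energy per inference
    map_score: float    # accuracy (mAP) on a validation set

def select_model(candidates, max_latency_ms, w_acc=1.0, w_energy=0.0001):
    """Illustrative selection rule (hypothetical, not ATO-EDGE's policy):
    among models meeting the latency budget, maximize accuracy minus a
    small energy penalty; if none fit, degrade to the fastest model."""
    feasible = [m for m in candidates if m.latency_ms <= max_latency_ms]
    if not feasible:
        return min(candidates, key=lambda m: m.latency_ms)
    return max(feasible, key=lambda m: w_acc * m.map_score - w_energy * m.energy_mj)
```

With a generous latency budget the rule prefers a more accurate model; with a tight budget it falls back to the fastest one.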
Citations: 1
On Consensus Number 1 Objects
Pub Date : 2021-12-01 DOI: 10.1109/ICPADS53394.2021.00115
P. Khanchandani, Jan Schäppi, Ye Wang, Roger Wattenhofer
The consensus number concept is used to determine the power of synchronization primitives in distributed systems. Recent work in the blockchain domain motivates shifting attention to consensus number 1 objects, as it has been shown that transaction-based blockchains need only consensus number 1. In this paper we aim to better understand such consensus number 1 objects. In particular, we study necessary and sufficient conditions for an object to have consensus number 1. If an object has consensus number 1, then its operations must be either commutative or associative (a necessary condition). Conversely, if the operations are consistently commutative or overwriting, i.e., independent of the current state of the object, then the consensus number of the object is 1 (a sufficient condition). We give an algorithm that implements such generic consensus number 1 objects using only read/write registers. This implies that read/write registers are universal enough to solve tasks, such as asset transfer in a cryptocurrency, among many others, in wait-free distributed systems for any number of processes.
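The role of commutativity can be seen in a classic construction (a generic textbook example, not the paper's algorithm): a shared counter built only from single-writer read/write registers. Because increments commute, the counter's value is independent of the interleaving order, so no stronger synchronization primitive is needed.

```python
class WaitFreeCounter:
    """Sketch of a wait-free shared counter built only from single-writer
    read/write registers. Increments commute, so reads simply sum every
    process's private register; no consensus primitive is required."""

    def __init__(self, num_processes):
        # one single-writer register per process
        self.registers = [0] * num_processes

    def increment(self, pid, amount=1):
        # only process `pid` ever writes register `pid`
        self.registers[pid] += amount

    def read(self):
        # commutativity: the sum is the same for every interleaving
        return sum(self.registers)
```

Applying the same increments in two different orders yields identical reads, which is exactly why such commutative objects sit at consensus number 1.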
Citations: 0
Effective Anomaly Detection Based on Reinforcement Learning in Network Traffic Data
Pub Date : 2021-12-01 DOI: 10.1109/ICPADS53394.2021.00043
Zhongyang Wang, Yijie Wang, Hongzuo Xu, Yongjun Wang
Mixed-type data with both categorical and numerical features are ubiquitous in network security, but few existing methods handle them well. Existing methods usually process mixed-type data through feature conversion, and their performance is degraded by the information loss and noise the transformation introduces. Meanwhile, existing methods usually superimpose domain knowledge on machine learning using fixed thresholds; they cannot dynamically adjust the anomaly threshold to the actual scenario, so the anomalies obtained are inaccurate and performance suffers. To address these issues, this paper proposes a novel anomaly detection method based on reinforcement learning, termed ADRL, which uses reinforcement learning to dynamically search for thresholds and accurately obtain anomaly candidate sets, fully fusing domain knowledge and machine learning so that each promotes the other. Specifically, ADRL uses prior domain knowledge to label known anomalies, and applies entropy and a deep autoencoder in the categorical and numerical feature spaces, respectively, to obtain anomaly scores that incorporate known-anomaly information; these are combined into overall anomaly scores via a dynamic integration strategy. To obtain accurate anomaly candidate sets, ADRL uses reinforcement learning to search for the best threshold. In detail, it initializes the anomaly threshold to get an initial anomaly candidate set and performs frequent-rule mining on that set to form new knowledge. ADRL then uses the obtained knowledge to adjust the anomaly scores and computes the score-modification rate. According to the modification rate, different threshold-modification strategies are executed; the best threshold, i.e., the threshold under the maximum modification rate, is finally obtained, along with the modified anomaly scores. The scores are used to re-run machine learning to improve the algorithm's accuracy on anomalous data. This process repeats until the method stabilizes. We experiment on ten real network-traffic datasets. Experiments show that ADRL improves ROC-AUC and PR-AUC over eight state-of-the-art competitors by an average of 89.6% and 286.0%, respectively.
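The threshold-search loop (move the threshold, measure the score-modification rate, keep the best) can be sketched with a plain hill-climb. This is an illustrative stand-in, not ADRL's reinforcement-learning policy: the function names, the fixed step size, and the hill-climbing strategy are all assumptions for the sketch.

```python
def search_threshold(scores, modification_rate_fn, init_threshold=0.5,
                     step=0.05, max_iters=50):
    """Illustrative threshold search (a plain hill-climb, not ADRL's RL
    agent): move the anomaly threshold in whichever direction increases
    the score-modification rate, stopping at a local optimum."""
    t = init_threshold
    best_rate = modification_rate_fn(scores, t)
    for _ in range(max_iters):
        improved = False
        for cand in (t - step, t + step):
            if 0.0 < cand < 1.0:
                rate = modification_rate_fn(scores, cand)
                if rate > best_rate:
                    t, best_rate = cand, rate
                    improved = True
        if not improved:
            break  # local optimum of the modification-rate landscape
    return t
```

With a synthetic modification-rate function peaking at 0.7, the search climbs from 0.5 to (approximately) 0.7.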
Citations: 1
Dynamic Path Based DNN Synergistic Inference Acceleration in Edge Computing Environment
Pub Date : 2021-12-01 DOI: 10.1109/ICPADS53394.2021.00076
Mengpu Zhou, Bowen Zhou, Huitian Wang, Fang Dong, Wei Zhao
Deep Neural Networks (DNNs) have achieved excellent performance in intelligent applications. Nevertheless, devices with limited resources struggle to support computationally intensive DNNs, while offloading to the cloud may incur prohibitive latency. Better solutions exploit edge computing and reduce unnecessary computation. Multi-exit DNNs based on the early-exit mechanism are effective at the latter, and in the edge computing paradigm, model partitioning of multi-exit chain DNNs has been shown to accelerate inference effectively. However, despite reducing computation to some extent, multiple exits may cause unstable performance due to variable sample quality, with results inferior to the original model, especially in the worst case. Furthermore, modern DNNs are commonly structured as directed acyclic graphs (DAGs), which greatly complicates partitioning a multi-exit DNN. To solve these issues, considering online exit prediction and model-execution optimization for multi-exit DNNs, we propose a Dynamic Path based DNN Synergistic inference acceleration framework (DPDS), in which exit designators are designed to avoid iterative entry at exits; to further promote computational synergy at the edge, the multi-exit DNN is dynamically partitioned according to the network environment to achieve fine-grained computation offloading. Experimental results show that DPDS significantly accelerates DNN inference, by 1.87× to 6.78×.
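The generic early-exit mechanism the abstract builds on can be sketched as a simple inference loop: after each backbone stage, an exit head classifies the intermediate features, and inference stops as soon as the head is confident enough. This is the standard multi-exit pattern, not DPDS itself; the stage/head callables and the confidence threshold are illustrative assumptions.

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of raw scores
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def multi_exit_infer(x, stages, exit_heads, threshold=0.9):
    """Generic early-exit loop (illustrative, not the paper's DPDS):
    after each backbone stage, an exit head scores the intermediate
    features; if its top softmax probability clears `threshold`, we stop
    and skip the remaining, more expensive stages.
    Returns (predicted_class, exit_depth)."""
    h = x
    for depth, (stage, head) in enumerate(zip(stages, exit_heads)):
        h = stage(h)
        probs = softmax(head(h))
        confidence = max(probs)
        if confidence >= threshold:
            return probs.index(confidence), depth  # early exit taken
    # fell through: answer from the final exit
    return probs.index(confidence), depth
```

A toy run with an unconfident first head and a confident second head exits at depth 1.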
Citations: 0
Performance Analysis of Open-Source Hypervisors for Automotive Systems
Pub Date : 2021-12-01 DOI: 10.1109/ICPADS53394.2021.00072
Zhengjun Zhang, Yanqiang Liu, Jiangtao Chen, Zhengwei Qi, Yifeng Zhang, Huai Liu
Nowadays, automotive products are intelligence-intensive and thus inevitably handle multiple functionalities in the current high-speed networking environment. Embedded virtualization has high potential in the automotive industry, thanks to its advantages in function integration, resource utilization, and security. The introduction of ARM virtualization extensions has made it possible to run open-source hypervisors, such as Xen and KVM, for embedded applications. Nevertheless, little work has investigated the performance of these hypervisors on automotive platforms. This paper presents a detailed analysis of different types of open-source hypervisors that can be applied on the ARM platform. We carry out virtualization performance experiments on Xen and Jailhouse from the perspectives of CPU, memory, file I/O, and several OS-level operations. A series of microbenchmark programs has been designed specifically to evaluate the real-time performance of the hypervisors and the relevant overhead. Compared with Xen, Jailhouse shows better and more stable latency with little interference jitter. The experimental results help us summarize the advantages and disadvantages of these hypervisors in automotive applications.
Citations: 0
Deep Reinforcement Agent for Failure-aware Job scheduling in High-Performance Computing
Pub Date : 2021-12-01 DOI: 10.1109/ICPADS53394.2021.00061
K. Yang, Rongyu Cao, Yueyuan Zhou, Jiawei Zhang, En Shao, Guangming Tan
Job scheduling is crucial in high-performance computing (HPC): it decides when and which jobs are allocated to the system and on which resources they are placed, while balancing multiple scheduling goals. With the growth of heterogeneous resources and demanding deep learning training (DLT) workloads, job failure has become a common issue in HPC that affects user satisfaction and cluster utilization. To mitigate the influence of hardware and software errors as much as possible, in this paper we tackle failure-aware job scheduling in HPC clusters. Inspired by previous successes of deep reinforcement learning-driven job scheduling, we propose a novel HPC scheduling agent named FARS (Failure-Aware RL-based Scheduler) that accounts for the effects of job failures. On the one hand, a neural network maps raw cluster and job states to job-placement decisions. On the other hand, to capture the influence of job failure on user satisfaction and cluster utilization, FARS uses the make-span of the entire workload as the training objective. Additionally, effective exploration and experience-replay techniques are applied to obtain a well-converged agent. To evaluate FARS, we design extensive trace-based simulation experiments with popular DLT workloads. The experimental results show that, compared with the best baseline model, FARS improves average make-span by 5.69% under various device error rates. Together, these results make FARS an ideal candidate for a failure-aware job scheduler in HPC clusters.
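The make-span objective FARS optimizes can be made concrete with a toy baseline scheduler. This is not FARS's learned policy: the greedy earliest-available placement and the naive retry-inflation model for failures are illustrative assumptions, shown only to pin down what "make-span under device error rates" measures.

```python
import heapq

def greedy_makespan(jobs, num_nodes, failure_rate=0.0):
    """Toy baseline (not FARS's learned policy): place each job on the
    earliest-available node; an optional failure_rate inflates expected
    runtime to model re-execution after failures. Returns the make-span,
    i.e., the time the last node finishes."""
    # min-heap of (time node becomes free, node id)
    nodes = [(0.0, n) for n in range(num_nodes)]
    heapq.heapify(nodes)
    for runtime in jobs:
        free_at, n = heapq.heappop(nodes)
        expected = runtime / (1.0 - failure_rate)  # naive geometric-retry model
        heapq.heappush(nodes, (free_at + expected, n))
    return max(t for t, _ in nodes)
```

On two nodes, jobs of length 3, 2, 2, and 1 finish at make-span 4 with no failures; a 50% failure rate doubles every expected runtime and hence the make-span.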
Citations: 2
A Forecasting Method of Dual Traffic Condition Indicators Based on Ensemble Learning
Pub Date : 2021-12-01 DOI: 10.1109/ICPADS53394.2021.00047
Chuanhao Dong, Zhiqiang Lv, Jianbo Li
Predicting traffic conditions makes it possible to warn of congestion before it occurs, so that traffic managers can intervene in time and reduce the risk of congestion. Targeting this problem, a forecasting method for dual traffic condition indicators is proposed. A technique that captures spatial dependence from the road-network topology and the driving direction of roads supplies more flexible and targeted spatial features for predicting traffic conditions. In addition, to meet the real-time and accuracy requirements of traffic condition forecasting, a novel dual-channel convolution block is designed to capture the temporal dependence of traffic conditions. Borrowing the idea of ensemble learning, $K$ independent base models are trained to predict traffic conditions simultaneously, and a model fusion mechanism driven by real-time traffic conditions combines their predictions, giving the model stronger generalization ability against the various noise present in real traffic data. Validated on real traffic data sets, the proposed method reduces the MAPE of speed prediction by 12.1% and of TTI prediction by 10.4% compared with the best of the existing models.
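The abstract describes fusing the $K$ base models' forecasts according to real-time traffic conditions. A minimal sketch of one plausible realization (inverse-recent-error weighting, which is an assumption rather than the paper's exact mechanism) looks like:

```python
def fuse_predictions(predictions, recent_errors, eps=1e-6):
    """Fuse K base-model forecasts with weights inversely
    proportional to each model's recent error on live traffic data.

    predictions: list of K forecasts for the same indicator
    recent_errors: list of K recent absolute errors (e.g. MAPE values)
    eps: guards against division by zero for a perfect model
    """
    weights = [1.0 / (e + eps) for e in recent_errors]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, predictions)) / total

# Three base models; the second has been most accurate recently,
# so its forecast dominates the fused value.
print(fuse_predictions([52.0, 48.0, 55.0], [2.0, 0.5, 3.0]))
```

Because the weights track errors measured on current traffic, the fusion automatically shifts trust toward whichever base model handles the present noise regime best.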
Citations: 0
Trusted Sliding-Window Aggregation over Blockchains
Pub Date : 2021-12-01 DOI: 10.1109/ICPADS53394.2021.00038
Qifeng Shao, Zhao Zhang, Cheqing Jin, Aoying Zhou
Blockchains, which continuously generate unbounded sequences of transactions, underpin many decentralized applications. These applications generally focus on the most recent transaction data to discover trends and make predictions, so there is growing demand for sliding-window aggregation over blockchains (e.g., a continuous query for the moving average of Bitcoin transaction volume over the last 24 hours). Because a blockchain commits transactions block by block at regular intervals, it fits sliding-window aggregation naturally. However, the mutual distrust between blockchain nodes forces users to consider both query efficiency and query authentication (e.g., simple payment verification (SPV) in Bitcoin). An aggregate B-tree can process sliding-window aggregation efficiently in a multi-query setting. To make such aggregation authenticated, a naive scheme could embed a Merkle tree into the aggregate B-tree, but that complicates the index structure and couples query logic with verification logic. In this paper, we propose a novel authenticated sliding-window aggregation scheme that separates query authentication from query processing. By designing a separate encoded Merkle tree, the verification logic can authenticate the query results of the aggregate B-tree on its own, without affecting the query logic. We also develop an optimized scheme based on FiBA and Software Guard Extensions (SGX), which further reduces aggregate and digest update costs. Security analysis and an empirical study validate the robustness and practicality of the proposed scheme.
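SPV-style query authentication ultimately reduces to checking a Merkle proof against a trusted root. A self-contained sketch of that primitive (a plain binary Merkle tree, not the paper's separate encoded Merkle tree) follows:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree; an odd node is promoted as-is."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = [h(level[i] + level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:          # odd node carried up unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes (with left/right flags) proving leaves[index]."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        sib = index ^ 1
        if sib < len(level):
            proof.append((level[sib], sib < index))  # flag: sibling on left
        nxt = [h(level[i] + level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])
        level, index = nxt, index // 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from a leaf and its proof; True iff they match."""
    acc = h(leaf)
    for sibling, is_left in proof:
        acc = h(sibling + acc) if is_left else h(acc + sibling)
    return acc == root

txs = [b"tx0", b"tx1", b"tx2", b"tx3", b"tx4"]
root = merkle_root(txs)
print(verify(b"tx2", merkle_proof(txs, 2), root))  # True
```

A verifier holding only the root can thus check any returned transaction with a logarithmic-size proof; the paper's contribution is keeping this authentication path separate from the aggregate B-tree's query path.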
Citations: 1
ShadowDroid: Practical Black-box Attack against ML-based Android Malware Detection
Pub Date : 2021-12-01 DOI: 10.1109/ICPADS53394.2021.00084
Jin Zhang, Chennan Zhang, Xiangyu Liu, Yuncheng Wang, Wenrui Diao, Shanqing Guo
Machine learning (ML) techniques have been widely deployed for Android malware detection. At the same time, ML-based malware detection faces the threat of adversarial attacks. Recent research has demonstrated the feasibility of such attacks in white-box or grey-box settings, but a more practical threat model, the black-box adversarial attack, has not been well validated and evaluated. In this paper, we bridge this research gap and propose a black-box adversarial attack approach, ShadowDroid, against ML-based Android malware detection. At a high level, ShadowDroid constructs a substitute model of the target malware detection system; using this substitute, it identifies and modifies the key features of a malicious app to generate an adversarial sample. In our experiments, we evaluated ShadowDroid against nine ML-based Android malware detection frameworks and achieved successful malware evasion on five of them. Based on these results, we also discuss how to design a malware detection system that is robust against adversarial attacks.
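The substitute-model idea can be sketched end to end: query the black-box target for labels, fit a local model, then flip the features the substitute weighs most heavily until the target's verdict changes. Everything below (the toy target rule, the perceptron substitute, the three binary features) is an illustrative assumption, not ShadowDroid's actual design:

```python
# Toy black-box target: flags an app only if it both requests SMS
# sending and uses dynamic code loading (features 0 and 1).
def target_detects(x):
    return x[0] == 1 and x[1] == 1

def train_substitute(queries, labels, epochs=50, lr=0.1):
    """Perceptron substitute trained on the target's observed outputs."""
    w = [0.0] * len(queries[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(queries, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def evade(x, w):
    """Greedily drop the feature the substitute weighs highest
    until the (black-box) target no longer flags the sample."""
    x = list(x)
    for i in sorted(range(len(w)), key=lambda i: -w[i]):
        if not target_detects(x):
            break
        if x[i] == 1:
            x[i] = 0
    return x

# Query the target on all 3-bit feature vectors to label training data.
queries = [[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)]
labels = [1 if target_detects(x) else 0 for x in queries]
w, bias = train_substitute(queries, labels)
adv = evade([1, 1, 0], w)
print(target_detects(adv))  # False: the modified sample evades
```

Real attacks must also keep the app functional, so feature removal is not always viable; the sketch only shows why a locally trained substitute is enough to pick effective perturbations without any access to the target's internals.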
Citations: 3