
Latest Publications in IEEE Transactions on Multi-Scale Computing Systems

Smart, Secure, Yet Energy-Efficient, Internet-of-Things Sensors
Pub Date: 2018-08-10 DOI: 10.1109/TMSCS.2018.2864297
Ayten Ozge Akmandor;Hongxu YIN;Niraj K. Jha
The proliferation of the Internet-of-Things (IoT) has led to the generation of zettabytes of sensitive data each year. The generated data are usually raw, requiring cloud resources for processing and decision-making operations to extract valuable information (i.e., distill smartness). Use of cloud resources raises serious design issues: limited bandwidth, insufficient energy, and security concerns. Edge-side computing and cryptographic techniques have been proposed to get around these problems. However, as a result of increased computational load and energy consumption, it is difficult to simultaneously achieve smartness, security, and energy efficiency. We propose a novel way out of this predicament by employing signal compression and machine learning inference on the IoT sensor node. An important sensor operation scenario is for the sensor to transmit data to the base station immediately when an event of interest occurs, e.g., arrhythmia is detected by a smart electrocardiogram sensor or seizure is detected by a smart electroencephalogram sensor, and to transmit data on a less urgent basis otherwise. Since on-sensor compression and inference drastically reduce the amount of data that needs to be transmitted, we actually end up with a dramatic energy bonus relative to the traditional sense-and-transmit IoT sensor. We use a part of this energy bonus to carry out encryption and hashing to ensure data confidentiality and integrity. We analyze the effectiveness of this approach on six different IoT applications with two data transmission scenarios: alert notification and continuous notification. The experimental results indicate that relative to the traditional sense-and-transmit sensor, IoT sensor energy is reduced by 57.1× for electrocardiogram (ECG) sensor-based arrhythmia detection, 379.8× for freezing-of-gait detection in the context of Parkinson's disease, 139.7× for electroencephalogram (EEG) sensor-based seizure detection, 216.6× for human activity classification, 162.8× for neural prosthesis spike sorting, and 912.6× for chemical gas classification. Our approach not only enables the IoT system to push signal processing and decision-making to the extreme of the edge side (i.e., the sensor node), but also solves the data security and energy efficiency problems simultaneously.
{"title":"Smart, Secure, Yet Energy-Efficient, Internet-of-Things Sensors","authors":"Ayten Ozge Akmandor;Hongxu YIN;Niraj K. Jha","doi":"10.1109/TMSCS.2018.2864297","DOIUrl":"https://doi.org/10.1109/TMSCS.2018.2864297","url":null,"abstract":"The proliferation of Internet-of-Things (IoT) has led to the generation of zettabytes of sensitive data each year. The generated data are usually raw, requiring cloud resources for processing and decision-making operations to extract valuable information (i.e., distill smartness). Use of cloud resources raises serious design issues: limited bandwidth, insufficient energy, and security concerns. Edge-side computing and cryptographic techniques have been proposed to get around these problems. However, as a result of increased computational load and energy consumption, it is difficult to simultaneously achieve smartness, security, and energy efficiency. We propose a novel way out of this predicament by employing signal compression and machine learning inference on the IoT sensor node. An important sensor operation scenario is for the sensor to transmit data to the base station immediately when an event of interest occurs, e.g., arrhythmia is detected by a smart electrocardiogram sensor or seizure is detected by a smart electroencephalogram sensor, and transmit data on a less urgent basis otherwise. Since on-sensor compression and inference drastically reduce the amount of data that need to be transmitted, we actually end up with a dramatic energy bonus relative to the traditional sense-and-transmit IoT sensor. We use a part of this energy bonus to carry out encryption and hashing to ensure data confidentiality and integrity. We analyze the effectiveness of this approach on six different IoT applications with two data transmission scenarios: alert notification and continuous notification. The experimental results indicate that relative to the traditional sense-and-transmit sensor, IoT sensor energy is reduced by \u0000<inline-formula><tex-math>$57.1times$</tex-math></inline-formula>\u0000 for electrocardiogram (ECG) sensor based arrhythmia detection, \u0000<inline-formula><tex-math>$379.8times$</tex-math></inline-formula>\u0000 for freezing of gait detection in the context of Parkinson's disease, \u0000<inline-formula><tex-math>$139.7times$</tex-math></inline-formula>\u0000 for electroencephalogram (EEG) sensor based seizure detection, \u0000<inline-formula><tex-math>$216.6times$</tex-math></inline-formula>\u0000 for human activity classification, \u0000<inline-formula><tex-math>$162.8times$</tex-math></inline-formula>\u0000 for neural prosthesis spike sorting, and \u0000<inline-formula><tex-math>$912.6times$</tex-math></inline-formula>\u0000 for chemical gas classification. 
Our approach not only enables the IoT system to push signal processing and decision-making to the extreme of the edge-side (i.e., the sensor node), but also solves data security and energy efficiency problems simultaneously.","PeriodicalId":100643,"journal":{"name":"IEEE Transactions on Multi-Scale Computing Systems","volume":"4 4","pages":"914-930"},"PeriodicalIF":0.0,"publicationDate":"2018-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TMSCS.2018.2864297","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68024189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
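The energy argument in this abstract can be made concrete with a little arithmetic. The sketch below is a hypothetical back-of-the-envelope model (every constant is an illustrative assumption, not a value from the paper) showing how on-sensor compression plus event-driven transmission can shrink the radio energy bill enough to fund encryption and hashing.

```python
# Hypothetical energy model; all constants are illustrative assumptions,
# not values reported in the paper.
RAW_BYTES_PER_SEC = 360        # assumed raw ECG stream rate
E_TX_PER_BYTE_UJ = 2.0         # assumed radio energy per transmitted byte
E_CPU_PER_BYTE_UJ = 0.05       # assumed MCU energy to compress/infer per raw byte
E_CRYPTO_PER_BYTE_UJ = 0.10    # assumed AES + SHA cost per transmitted byte
COMPRESSION_RATIO = 20         # assumed on-sensor compression factor
ALERT_DUTY_CYCLE = 0.02        # assumed fraction of time an event forces transmission

def sense_and_transmit_uj_per_s() -> float:
    """Baseline sensor: ship every raw byte to the base station."""
    return RAW_BYTES_PER_SEC * E_TX_PER_BYTE_UJ

def smart_secure_sensor_uj_per_s() -> float:
    """Compress and infer on-sensor; transmit (encrypted + hashed) only on alerts."""
    tx_bytes = RAW_BYTES_PER_SEC / COMPRESSION_RATIO * ALERT_DUTY_CYCLE
    compute = RAW_BYTES_PER_SEC * E_CPU_PER_BYTE_UJ
    crypto = tx_bytes * E_CRYPTO_PER_BYTE_UJ
    return compute + crypto + tx_bytes * E_TX_PER_BYTE_UJ

base = sense_and_transmit_uj_per_s()
smart = smart_secure_sensor_uj_per_s()
print(f"energy reduction: {base / smart:.1f}x")  # security paid for out of the savings
```

With these assumed numbers the transmit-everything baseline costs 720 µJ/s while the smart sensor costs roughly 19 µJ/s, so the cryptography term is a rounding error relative to the savings, which is the paper's central point.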
Citations: 40
Hardware Accelerated Mappers for Hadoop MapReduce Streaming
Pub Date: 2018-07-12 DOI: 10.1109/TMSCS.2018.2854787
Katayoun Neshatpour;Maria Malik;Avesta Sasan;Setareh Rafatirad;Houman Homayoun
Heterogeneous architectures have emerged as an effective solution to the energy-efficiency challenge. This is particularly true in data centers, where the integration of FPGA hardware accelerators with general-purpose processors such as big Xeon or little Atom cores introduces enormous opportunities to address the power, scalability, and energy-efficiency challenges of processing emerging applications, particularly in the domain of big data. The rise of hardware accelerators in data centers therefore raises several important research questions: What is the potential for hardware acceleration in MapReduce, a de facto standard for big data analytics? What is the role of the processor after acceleration, and is a big or a little core better suited to run big data applications post hardware acceleration? This paper answers these questions through methodical real-system experiments on state-of-the-art hardware acceleration platforms. We first present the implementation of four widely used big data applications on a heterogeneous CPU+FPGA architecture. We develop MapReduce implementations of K-means, K-nearest neighbor, support vector machine, and naive Bayes in a Hadoop Streaming environment, which allows mapper functions to be developed in a non-Java language suited for interfacing with an FPGA-based hardware acceleration environment. We present a full implementation of the HW+SW mappers on an existing FPGA+core platform and evaluate how a cluster of CPUs equipped with FPGAs uses the accelerated mapper to enhance the overall performance of MapReduce. Moreover, we study how various parameters at the application, system, and architecture levels affect the performance and power-efficiency benefits of Hadoop Streaming hardware acceleration. This analysis helps to better understand how the presence of HW accelerators for Hadoop MapReduce changes the choice of CPU, the tuning of optimization parameters, and scheduling decisions for performance and energy-efficiency improvement. The results show promising speedups and energy-efficiency gains of up to 5.7× and 16×, respectively, in an end-to-end Hadoop implementation using a semi-automated HLS framework. The results suggest that HW+SW acceleration yields significantly higher speedup on little cores, reducing the performance gap between little and big cores after acceleration. On the other hand, the energy-efficiency benefit of HW+SW acceleration is higher on the big cores, which reduces the energy-efficiency gap between little and big cores. Overall, the experimental results show that a low-cost embedded FPGA platform, programmed using a semi-automated HW+SW co-design methodology, brings significant performance and energy-efficiency gains for Hadoop MapReduce computing in cloud-based architectures and significantly reduces the reliance on a large number of big high-performance cores.
{"title":"Hardware Accelerated Mappers for Hadoop MapReduce Streaming","authors":"Katayoun Neshatpour;Maria Malik;Avesta Sasan;Setareh Rafatirad;Houman Homayoun","doi":"10.1109/TMSCS.2018.2854787","DOIUrl":"https://doi.org/10.1109/TMSCS.2018.2854787","url":null,"abstract":"Heterogeneous architectures have emerged as an effective solution to address the energy-efficiency challenges. This is particularly happening in data centers where the integration of FPGA hardware accelerators with general purpose processors such as big Xeon or little Atom cores introduces enormous opportunities to address the power, scalability, and energy-efficiency challenges of processing emerging applications, in particular in domain of big data. Therefore, the rise of hardware accelerators in data centers, raises several important research questions: What is the potential for hardware acceleration in MapReduce, a defacto standard for big data analytics? What is the role of processor after acceleration; whether big or little core is most suited to run big data applications post hardware acceleration? This paper answers these questions through methodical real-system experiments on state-of-the-art hardware acceleration platforms. We first present the implementation of four highly used big data applications in a heterogeneous CPU+FPGA architecture. We develop the MapReduce implementation of K-means, K nearest neighbor, support vector machine, and naive Bayes in a Hadoop Streaming environment that allows developing mapper functions in a non-Java based language suited for interfacing with FPGA based hardware accelerating environment. We present a full implementation of the HW+SW mappers on existing FPGA+core platform and evaluate how a cluster of CPUs equipped with FPGAs uses the accelerated mapper to enhance the overall performance of MapReduce. Moreover, we study how various parameters at the application, system, and architecture levels affect the performance and power-efficiency benefits of Hadoop streaming hardware acceleration. This analysis helps to better understand how presence of HW accelerators for Hadoop MapReduce, changes the choice of CPU, tuning optimization parameters, and scheduling decisions for performance and energy-efficiency improvement. The results show a promising speedup as well as energy-efficiency gains of upto 5.7× and 16× is achieved, respectively, in an end-to-end Hadoop implementation using a semi-automated HLS framework. Results suggest that HW+SW acceleration yields significantly higher speedup on little cores, reducing the performance gap between little and big cores after the acceleration. On the other hand, the energy-efficiency benefit of HW+SW acceleration is higher on the big cores, which reduces the energy-efficiency gap between little and big cores. 
Overall, the experimental results show that a low cost embedded FPGA platform, programmed using a semi-automated HW+SW co-design methodology, brings significant performance and energy-efficiency gains for Hadoop MapReduce computing in cloud-based architectures and significantly reduces the reliance on large number of big high-performance cores.","PeriodicalId":100643,"journal":{"name":"IEEE Transactions on Multi-Scale Computing Systems","volume":"4 4","pages":"734-748"},"PeriodicalIF":0.0,"publicationDate":"2018-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TMSCS.2018.2854787","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68023998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
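Hadoop Streaming is the piece that makes non-Java mappers possible: the framework pipes input records to the mapper's stdin and reads tab-separated key-value pairs from its stdout. Below is a minimal Python sketch of a K-means assignment mapper in that protocol; the paper's FPGA-accelerated distance kernel is stood in for by a pure-software function, and the hard-coded centroids are an assumption for illustration (in practice they would come from a distributed-cache side file).

```python
#!/usr/bin/env python3
# Minimal Hadoop Streaming mapper for the K-means assignment step. Hadoop
# Streaming launches this executable and feeds input splits on stdin; each
# emitted "key<TAB>value" line is shuffled to the reducers, which recompute
# centroids. compute_nearest() is a software stand-in for the distance
# kernel the paper offloads to an FPGA (assumption for illustration).
import sys

CENTROIDS = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]  # assumed; normally a side file

def compute_nearest(point):
    """Return the index of the nearest centroid (the offloadable kernel)."""
    return min(range(len(CENTROIDS)),
               key=lambda i: sum((p - c) ** 2 for p, c in zip(point, CENTROIDS[i])))

for line in sys.stdin:
    fields = line.strip().split(',')
    if not fields or not fields[0]:
        continue                                   # skip blank records
    point = tuple(float(x) for x in fields)
    print(f"{compute_nearest(point)}\t{line.strip()}")  # key = cluster id
```

Because the streaming contract is just stdin/stdout, the same slot can be filled by a C/C++ binary that talks to the FPGA driver, which is what makes this environment convenient for HW+SW co-designed mappers.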
Citations: 3
Placement of Virtual Network Functions in Hybrid Data Center Networks
Pub Date: 2018-06-25 DOI: 10.1109/TMSCS.2018.2848949
Zhenhua Li;Yuanyuan Yang
Hybrid data center networks (HDCNs), in which each ToR switch is equipped with a directional antenna, have emerged as a candidate for alleviating the over-subscription problem in traditional data centers. Meanwhile, as virtualization techniques develop rapidly, there is a trend toward virtualizing traditional network functions implemented in hardware into virtual machines. However, how to place virtual network functions (VNFs) in data centers to meet customer requirements in a hybrid data center network environment is a challenging problem. In this paper, we study VNF placement in hybrid data center networks and provide a joint VNF placement and antenna scheduling model. We further simplify it to a mixed integer programming (MIP) problem. Due to the hardness of MIP, we develop a heuristic algorithm to solve it, and we also give an online algorithm to meet the requirements of real-time scenarios. To the best of our knowledge, this is the first work concerning VNF placement in the context of HDCNs. Our extensive simulations demonstrate the effectiveness of the proposed algorithms, making them a promising solution for VNF placement in HDCN environments.
{"title":"Placement of Virtual Network Functions in Hybrid Data Center Networks","authors":"Zhenhua Li;Yuanyuan Yang","doi":"10.1109/TMSCS.2018.2848949","DOIUrl":"https://doi.org/10.1109/TMSCS.2018.2848949","url":null,"abstract":"Hybrid data center networks (HDCNs), where each ToR switch is installed with a directional antenna, emerge as a candidate helping alleviate the over-subscription problem in traditional data centers. Meanwhile, as virtualization techniques develop rapidly, there is a trend that traditional network functions that are implemented in hardware will also be virtualized into virtual machines. However, how to place virtual network functions (VNFs) into data centers to meet the customer requirements in a hybrid data center network environment is a challenging problem. In this paper, we study the VNF placement in hybrid data center networks, and provide a joint VNF placement and antenna scheduling model. We further simplify it to a mixed integer programming (MIP) problem. Due to the hardness of a MIP problem, we develop a heuristic algorithm to solve it, and also give an on-line algorithm to meet the requirements from real-time scenarios. To the best of our knowledge, this is the first work concerning VNF placement in the context of HDCNs. Our extensive simulations demonstrate the effectiveness of the proposed algorithms, which make them a promising solution for VNF placement in HDCN environment.","PeriodicalId":100643,"journal":{"name":"IEEE Transactions on Multi-Scale Computing Systems","volume":"4 4","pages":"861-873"},"PeriodicalIF":0.0,"publicationDate":"2018-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TMSCS.2018.2848949","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68023992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Guest Editorial: Emerging Technologies and Architectures for Manycore Computing, Part 1: Hardware Techniques
Pub Date: 2018-06-18 DOI: 10.1109/TMSCS.2018.2826758
Sébastien Le Beux;Paul V. Gratz;Ian O'Connor
The papers included in this special section focus on emerging technologies and architectures for manycore computing, with particular emphasis on hardware techniques. The pursuit of Moore's Law is slowing, and the exploration of alternative devices is underway to replace the CMOS transistor and the traditional architectures at the heart of data processing. Moreover, the emergence of stringent application constraints, particularly those linked to energy consumption, requires new system architectural strategies (e.g., manycore) and real-time operational adaptability approaches. Such complex systems require new and powerful design and programming methods to ensure optimal and reliable operation. Thus, this special issue aims at collating new research along all the dimensions of emerging technologies and architectures for manycore computing.
{"title":"Guest Editorial: Emerging Technologies and Architectures for Many core Computing Part 1: Hardware Techniques","authors":"Sébastien Le Beux;Paul V. Gratz;Ian O'Connor","doi":"10.1109/TMSCS.2018.2826758","DOIUrl":"https://doi.org/10.1109/TMSCS.2018.2826758","url":null,"abstract":"The papers included in this special section focus on emerging technologies and architectures for manycore computing, with particular emphasis on hardware techniques. THE pursuit of Moore’s Law is slowing and the exploration of alternative devices is underway to replace the CMOS transistor and traditional architectures at the heart of data processing. Moreover, the emergence of stringent application constraints, particularly those linked to energy consumption, require new system architectural strategies (e.g. manycore) and real-time operational adaptability approaches. Such complex systems require new and powerful design and programming methods to ensure optimal and reliable operation. Thus, this special issue aims at collating new research along all the dimensions of emerging technologies and architectures for computing in manycores.","PeriodicalId":100643,"journal":{"name":"IEEE Transactions on Multi-Scale Computing Systems","volume":"4 2","pages":"97-98"},"PeriodicalIF":0.0,"publicationDate":"2018-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TMSCS.2018.2826758","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67858194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Fast TCAM-Based Multi-Match Packet Classification Using Discriminators
Pub Date: 2018-06-15 DOI: 10.1109/TMSCS.2018.2847677
Hsin-Tsung Lin;Pi-Chung Wang
Ternary content addressable memory (TCAM) is a widely used technology for performing packet classification in network devices. TCAM compares a search key with all ternary entries in parallel to yield the first matching entry. To generate all matching entries, either a storage or a speed penalty is inevitable. Because of the inherent disadvantages of TCAM, including high power consumption and limited capacity, the feasibility of TCAM-based multi-match packet classification (TMPC) is thus debatable. Discriminators appended to each TCAM entry have been used to avoid the storage penalty for TMPC. We are motivated to minimize the speed penalty for TMPC with discriminators. In this paper, a novel scheme that utilizes unused TCAM entries to accelerate the search is presented. It selectively generates TCAM entries to merge overlapping match conditions so that the number of accessed TCAM entries can be significantly reduced. By limiting the number of generated TCAM entries, the storage penalty is minimized, since our scheme does not need extra TCAM chips. We further present several refinements to the search procedure. The experimental results show that our scheme can drastically improve the search performance with 10-20 percent extra TCAM entries. As a result, the power consumption, which correlates with the number of accessed TCAM entries per classification, can be reduced.
{"title":"Fast TCAM-Based Multi-Match Packet Classification Using Discriminators","authors":"Hsin-Tsung Lin;Pi-Chung Wang","doi":"10.1109/TMSCS.2018.2847677","DOIUrl":"https://doi.org/10.1109/TMSCS.2018.2847677","url":null,"abstract":"Ternary content addressable memory (TCAM) is a widely used technology for network devices to perform packet classification. TCAM compares a search key with all ternary entries in parallel to yield the first matching entry. To generate all matching entries, either storage or speed penalty is inevitable. Because of the inherit disadvantages of TCAM, including power hungry and limited capacity, the feasibility of TCAM-based multi-match packet classification (TMPC) is thus debatable. Discriminators appended to each TCAM entry have been used to avoid storage penalty for TMPC. We are motivated to minimize speed penalty for TMPC with discriminators. In this paper, a novel scheme, which utilizes unused TCAM entries to accelerate the search performance, is presented. It selectively generates TCAM entries to merge overlapping match conditions so that the number of accessed TCAM entries can be significantly reduced. By limiting the number of generated TCAM entries, the storage penalty is minimized since our scheme does not need extra TCAM chips. We further present several refinements to the search procedure. The experimental results show that our scheme can drastically improve the search performance with extra 10-20 percent TCAM entries. As a result, the power consumption, which correlates to the number of accessed TCAM entries per classification, can be reduced.","PeriodicalId":100643,"journal":{"name":"IEEE Transactions on Multi-Scale Computing Systems","volume":"4 4","pages":"686-697"},"PeriodicalIF":0.0,"publicationDate":"2018-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TMSCS.2018.2847677","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68024196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
LIBRA: Thermal and Process Variation Aware Reliability Management in Photonic Networks-on-Chip
Pub Date: 2018-06-12 DOI: 10.1109/TMSCS.2018.2846274
Sai Vineel Reddy Chittamuru;Ishan G. Thakkar;Sudeep Pasricha
Silicon nanophotonics technology is being considered for future networks-on-chip (NoCs), as it can enable high bandwidth density and lower latency with data traversal at the speed of light. However, the operation of photonic NoCs (PNoCs) is very sensitive to on-chip temperature and process variations. These variations can create significant reliability issues for PNoCs. For example, a microring resonator (MR) may resonate at another wavelength instead of its designated wavelength due to thermal and/or process variations, which can lead to bandwidth wastage and data corruption in PNoCs. This paper proposes a novel run-time framework called LIBRA to overcome temperature- and process-variation-induced reliability issues in PNoCs. The framework consists of (i) a device-level reactive MR assignment mechanism that dynamically assigns a group of MRs to reliably modulate/receive data in a waveguide based on the chip's thermal and process variation characteristics; and (ii) a system-level proactive thread migration technique that avoids on-chip thermal threshold violations and reduces MR tuning/trimming power by dynamically migrating threads between cores. Our simulation results indicate that LIBRA can reliably satisfy on-chip thermal thresholds and maintain high network bandwidth while reducing total power by up to 61.3 percent, and thermal tuning/trimming power by up to 76.2 percent, over state-of-the-art thermal and process variation aware solutions.
{"title":"LIBRA: Thermal and Process Variation Aware Reliability Management in Photonic Networks-on-Chip","authors":"sai vineel reddy chittamuru;Ishan G. Thakkar;Sudeep Pasricha","doi":"10.1109/TMSCS.2018.2846274","DOIUrl":"https://doi.org/10.1109/TMSCS.2018.2846274","url":null,"abstract":"Silicon nanophotonics technology is being considered for future networks-on-chip (NoCs) as it can enable high bandwidth density and lower latency with traversal of data at the speed of light. But, the operation of photonic NoCs (PNoCs) is very sensitive to on-chip temperature and process variations. These variations can create significant reliability issues for PNoCs. For example, a microring resonator (MR) may resonate at another wavelength instead of its designated wavelength due to thermal and/or process variations, which can lead to bandwidth wastage and data corruption in PNoCs. This paper proposes a novel run-time framework called \u0000<italic>LIBRA</i>\u0000 to overcome temperature- and process variation- induced reliability issues in PNoCs. The framework consists of (i) a device-level reactive MR assignment mechanism that dynamically assigns a group of MRs to reliably modulate/receive data in a waveguide based on the chip thermal and process variation characteristics; and (ii) a system-level proactive thread migration technique to avoid on-chip thermal threshold violations and reduce MR tuning/ trimming power by dynamically migrating threads between cores. Our simulation results indicate that \u0000<italic>LIBRA</i>\u0000 can reliably satisfy on-chip thermal thresholds and maintain high network bandwidth while reducing total power by up to 61.3 percent, and thermal tuning/trimming power by up to 76.2 percent over state-of-the-art thermal and process variation aware solutions.","PeriodicalId":100643,"journal":{"name":"IEEE Transactions on Multi-Scale Computing Systems","volume":"4 4","pages":"758-772"},"PeriodicalIF":0.0,"publicationDate":"2018-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TMSCS.2018.2846274","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68023999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
DLoBD: A Comprehensive Study of Deep Learning over Big Data Stacks on HPC Clusters
Pub Date: 2018-06-11 DOI: 10.1109/TMSCS.2018.2845886
Xiaoyi Lu;Haiyang Shi;Rajarshi Biswas;M. Haseeb Javed;Dhabaleswar K. Panda
Deep Learning over Big Data (DLoBD) is an emerging paradigm for mining value from the massive amount of gathered data. Many deep learning frameworks, like Caffe, TensorFlow, etc., now run over Big Data stacks such as Apache Hadoop and Spark. Even though a lot of activity is happening in the field, there is a lack of comprehensive studies analyzing the impact of RDMA-capable networks and CPUs/GPUs on DLoBD stacks. To fill this gap, we propose a systematic characterization methodology and conduct extensive performance evaluations on four representative DLoBD stacks (i.e., CaffeOnSpark, TensorFlowOnSpark, MMLSpark/CNTKOnSpark, and BigDL) to expose the interesting trends regarding performance, scalability, accuracy, and resource utilization. Our observations show that an RDMA-based design for DLoBD stacks can achieve up to 2.7× speedup compared to the IPoIB-based scheme. The RDMA scheme also scales better and utilizes resources more efficiently than IPoIB. For most cases, GPU-based schemes can outperform CPU-based designs, but we see that for LeNet on MNIST, CPU + MKL can achieve better performance than GPU and GPU + cuDNN on 16 nodes. Through our evaluation and an in-depth analysis of TensorFlowOnSpark, we find that there is large room to improve the designs of current-generation DLoBD stacks.
{"title":"DLoBD: A Comprehensive Study of Deep Learning over Big Data Stacks on HPC Clusters","authors":"Xiaoyi Lu;Haiyang Shi;Rajarshi Biswas;M. Haseeb Javed;Dhabaleswar K. Panda","doi":"10.1109/TMSCS.2018.2845886","DOIUrl":"https://doi.org/10.1109/TMSCS.2018.2845886","url":null,"abstract":"<underline>D</u>\u0000eep \u0000<underline>L</u>\u0000earning \u0000<underline>o</u>\u0000ver \u0000<underline>B</u>\u0000ig \u0000<underline>D</u>\u0000ata (DLoBD) is an emerging paradigm to mine value from the massive amount of gathered data. Many Deep Learning frameworks, like Caffe, TensorFlow, etc., start running over Big Data stacks, such as Apache Hadoop and Spark. Even though a lot of activities are happening in the field, there is a lack of comprehensive studies on analyzing the impact of RDMA-capable networks and CPUs/GPUs on DLoBD stacks. To fill this gap, we propose a systematical characterization methodology and conduct extensive performance evaluations on four representative DLoBD stacks (i.e., CaffeOnSpark, TensorFlowOnSpark, MMLSpark/CNTKOnSpark, and BigDL) to expose the interesting trends regarding performance, scalability, accuracy, and resource utilization. Our observations show that RDMA-based design for DLoBD stacks can achieve up to 2.7x speedup compared to the IPoIB-based scheme. The RDMA scheme also scales better and utilizes resources more efficiently than IPoIB. For most cases, GPU-based schemes can outperform CPU-based designs, but we see that for LeNet on MNIST, CPU + MKL can achieve better performance than GPU and GPU + cuDNN on 16 nodes. Through our evaluation and an in-depth analysis on TensorFlowOnSpark, we find that there are large rooms to improve the designs of current-generation DLoBD stacks.","PeriodicalId":100643,"journal":{"name":"IEEE Transactions on Multi-Scale Computing Systems","volume":"4 4","pages":"635-648"},"PeriodicalIF":0.0,"publicationDate":"2018-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TMSCS.2018.2845886","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67861364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
A Fast Hill Climbing Algorithm for Defect and Variation Tolerant Logic Mapping of Nano-Crossbar Arrays
Pub Date: 2018-04-23 DOI: 10.1109/TMSCS.2018.2829518
Furkan Peker;Mustafa Altun
Nano-crossbar arrays are area- and power-efficient structures, generally realized with self-assembly-based bottom-up fabrication methods as opposed to relatively costly traditional top-down lithography techniques. This advantage comes at a price: very high process variations. In this work, we focus on the worst-case delay optimization problem in the presence of high process variations. As a variation-tolerant logic mapping scheme, a fast hill climbing algorithm is proposed; it offers similar or better delay improvements with much smaller runtimes compared to the methods in the literature. Our algorithm first performs a reducing operation on the crossbar, motivated by the fact that the whole crossbar is not necessarily needed for the problem. This decreases the computational load by up to 72 percent for benchmark functions. Next, initial column mapping is applied. After these first two preparatory steps, the algorithm proceeds to the final step of hill climbing row search with column reordering, where the optimization for variation tolerance is performed. As an extension of this work, we directly apply our hill climbing algorithm to defective arrays to achieve both defect and variation tolerance. Again, simulation results confirm the speed of our algorithm, up to 600 times faster than related algorithms in the literature, without sacrificing defect and variation tolerance performance.
{"title":"A Fast Hill Climbing Algorithm for Defect and Variation Tolerant Logic Mapping of Nano-Crossbar Arrays","authors":"Furkan Peker;Mustafa Altun","doi":"10.1109/TMSCS.2018.2829518","DOIUrl":"https://doi.org/10.1109/TMSCS.2018.2829518","url":null,"abstract":"Nano-crossbar arrays are area and power efficient structures, generally realized with self-assembly based bottom-up fabrication methods as opposed to relatively costly traditional top-down lithography techniques. This advantage comes with a price: very high process variations. In this work, we focus on the worst-case delay optimization problem in the presence of high process variations. As a variation tolerant logic mapping scheme, a fast hill climbing algorithm is proposed; it offers similar or better delay improvements with much smaller runtimes compared to the methods in the literature. Our algorithm first performs a reducing operation for the crossbar motivated by the fact that the whole crossbar is not necessarily needed for the problem. This significantly decreases the computational load up to 72 percent for benchmark functions. Next, initial column mapping is applied. After the first two steps that can be considered as preparatory, the algorithm proceeds to the last step of hill climbing row search with column reordering where optimization for variation tolerance is performed. As an extension to this work, we directly apply our hill climbing algorithm on defective arrays to perform both defect and variation tolerance. Again, simulation results approve the speed of our algorithm, up to 600 times higher compared to the related algorithms in the literature without sacrificing defect and variation tolerance performance.","PeriodicalId":100643,"journal":{"name":"IEEE Transactions on Multi-Scale Computing Systems","volume":"4 4","pages":"522-532"},"PeriodicalIF":0.0,"publicationDate":"2018-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TMSCS.2018.2829518","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68023989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Low-Power Multi-Sensor System with Power Management and Nonvolatile Memory Access Control for IoT Applications
Pub Date: 2018-04-20 DOI: 10.1109/TMSCS.2018.2827388
Masanori Hayashikoshi;Hideyuki Noda;Hiroyuki Kawai;Yasumitsu Murai;Sugako Otani;Koji Nii;Yoshio Matsuda;Hiroyuki Kondo
A low-power multi-sensor system with power management and nonvolatile memory access control for IoT applications is proposed that achieves almost zero standby power in no-operation modes. A power management scheme with activity localization can reduce the number of transitions between power-on and power-off modes by rescheduling and bundling task procedures. In addition, autonomous standby mode transition control selects the optimum standby mode of the microcontrollers, reducing total power consumption. We demonstrate the system on an evaluation board as a use case of IoT applications, observing a 91 percent power reduction when task scheduling is combined with autonomous standby mode transition control. Furthermore, we propose a new nonvolatile memory access control technology and estimate its potential for future low-power operation.
{"title":"Low-Power Multi-Sensor System with Power Management and Nonvolatile Memory Access Control for IoT Applications","authors":"Masanori Hayashikoshi;Hideyuki Noda;Hiroyuki Kawai;Yasumitsu Murai;Sugako Otani;Koji Nii;Yoshio Matsuda;Hiroyuki Kondo","doi":"10.1109/TMSCS.2018.2827388","DOIUrl":"https://doi.org/10.1109/TMSCS.2018.2827388","url":null,"abstract":"The low-power multi-sensor system with power management and nonvolatile memory access control for IoT applications are proposed, which achieves almost zero standby power at the no-operation modes. A power management scheme with activity localization can reduce the number of transitions between power-on and power-off modes with rescheduling and bundling task procedures. In addition, autonomously standby mode transition control selects the optimum standby mode of microcontrollers, reducing total power consumption. We demonstrate with evaluation board as a use case of IoT applications, observing 91 percent power reductions by adopting task scheduling and autonomously standby mode transition control combination. Furthermore, we propose a new nonvolatile memory access control technology, and estimate the possibility for future low-power effect.","PeriodicalId":100643,"journal":{"name":"IEEE Transactions on Multi-Scale Computing Systems","volume":"4 4","pages":"784-792"},"PeriodicalIF":0.0,"publicationDate":"2018-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TMSCS.2018.2827388","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68024165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Adaptive and Roll-Forward Error Recovery in MEDA Biochips Based on Droplet-Aliquot Operations and Predictive Analysis
Pub Date: 2018-04-17 DOI: 10.1109/TMSCS.2018.2827030
Zhanwei Zhong;Zipeng Li;Krishnendu Chakrabarty
Digital microfluidic biochips (DMFBs) are being increasingly used in biochemistry labs for automating bioassays. However, traditional DMFBs suffer from some key shortcomings: 1) inability to vary droplet volume in a flexible manner; 2) difficulty of integrating on-chip sensors; and 3) the need for special fabrication processes. To overcome these problems, DMFBs based on a micro-electrode-dot-array (MEDA) have recently been proposed. However, errors are likely to occur on a MEDA DMFB due to chip defects and the unpredictability inherent to biochemical experiments. We present fine-grained error-recovery solutions for MEDA by exploiting real-time sensing and advanced MEDA-specific droplet operations. The proposed methods rely on adaptive droplet-aliquot operations and predictive analysis of mixing. In addition, a roll-forward error-recovery method is proposed to efficiently utilize the unused part of the biochip and reduce the time required for error recovery. Experimental results on three representative benchmarks demonstrate the efficiency of the proposed error-recovery strategy.
{"title":"Adaptive and Roll-Forward Error Recovery in MEDA Biochips Based on Droplet-Aliquot Operations and Predictive Analysis","authors":"Zhanwei Zhong;Zipeng Li;Krishnendu Chakrabarty","doi":"10.1109/TMSCS.2018.2827030","DOIUrl":"https://doi.org/10.1109/TMSCS.2018.2827030","url":null,"abstract":"Digital microfluidic biochips (DMFBs) are being increasingly used in biochemistry labs for automating bioassays. However, traditional DMFBs suffer from some key shortcomings: 1) inability to vary droplet volume in a flexible manner; 2) difficulty of integrating on-chip sensors; and 3) the need for special fabrication processes. To overcome these problems, DMFBs based on micro-electrode-dot -array (MEDA) have recently been proposed. However, errors are likely to occur on a MEDA DMFB due to chip defects and the unpredictability inherent to biochemical experiments. We present fine-grained error-recovery solutions for MEDA by exploiting real-time sensing and advanced MEDA-specific droplet operations. The proposed methods rely on adaptive droplet-aliquot operations and predictive analysis of mixing. In addition, a roll-forward error-recovery method is proposed to efficiently utilize the unused part of the biochip and reduce the time required for error recovery. Experimental results on three representative benchmarks demonstrate the efficiency of the proposed error-recovery strategy.","PeriodicalId":100643,"journal":{"name":"IEEE Transactions on Multi-Scale Computing Systems","volume":"4 4","pages":"577-592"},"PeriodicalIF":0.0,"publicationDate":"2018-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TMSCS.2018.2827030","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68025493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 21