
Latest publications: ACM Transactions on Computer Systems

Charlotte: Reformulating Blockchains into a Web of Composable Attested Data Structures for Cross-Domain Applications
IF 1.5 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science, Theory & Methods) | Pub Date: 2023-07-22 | DOI: 10.1145/3607534
Isaac Sheff, Xinwen Wang, Kushal Babel, Haobin Ni, Robbert van Renesse, Andrew C. Myers

Cross-domain applications are rapidly adopting blockchain techniques for immutability, availability, integrity, and interoperability. However, for most applications, global consensus is unnecessary and may not even provide sufficient guarantees.

We propose a new distributed data structure: Attested Data Structures (ADS), which generalize not only blockchains, but also many other structures used by distributed applications. As in blockchains, data in ADSs is immutable and self-authenticating. ADSs go further by supporting application-defined proofs (attestations). Attestations enable applications to plug in their own mechanisms to ensure availability and integrity.

We present Charlotte, a framework for composable ADSs. Charlotte deconstructs conventional blockchains into more primitive mechanisms. Charlotte can be used to construct blockchains, but does not impose the usual global-ordering overhead. Charlotte offers a flexible foundation for interacting applications that define their own policies for availability and integrity. Unlike traditional distributed systems, Charlotte supports heterogeneous trust: different observers have their own beliefs about who might fail, and how. Nevertheless, each observer has a consistent, available view of data.

Charlotte’s data structures are interoperable and composable: applications and data structures can operate fully independently, or can share data when desired. Charlotte defines a language-independent format for data blocks and a network API for servers.

To demonstrate Charlotte’s flexibility, we implement several integrity mechanisms, including consensus and proof of work. We explore the power of disentangling availability and integrity mechanisms in prototype applications. The results suggest that Charlotte can be used to build flexible, fast, composable applications with strong guarantees.
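As an illustration of the ADS idea described above (hash-linked, self-authenticating blocks plus application-defined attestations), here is a minimal Python sketch. The block layout, attestation fields, and function names are invented for illustration; they are not Charlotte's actual wire format or API.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Content hash over a canonical JSON encoding, making blocks self-authenticating."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(payload, parent_hashes=()):
    """A block references its parents by hash, so the structure is immutable."""
    return {"payload": payload, "parents": list(parent_hashes)}

def attest_availability(server_id: str, block: dict) -> dict:
    """Application-defined attestation: a server vouches for a block's availability.
    (A real attestation would carry a cryptographic signature; this is a sketch.)"""
    return {"type": "availability", "server": server_id, "block": block_hash(block)}

genesis = make_block("genesis")
child = make_block("tx: A->B", [block_hash(genesis)])
att = attest_availability("server-1", child)

# Integrity check: the child's parent pointer must match the genesis hash.
assert child["parents"][0] == block_hash(genesis)
```

Because attestations refer to blocks by content hash, applications can plug in their own availability and integrity mechanisms without a global ordering over all blocks.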

Citations: 0
Partial Network Partitioning
IF 1.5 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science, Theory & Methods) | Pub Date: 2022-12-19 | DOI: 10.1145/3576192
Basil Alkhatib, Sreeharsha Udayashankar, Sara Qunaibi, Ahmed Alquraan, Mohammed Alfatafta, Wael Al-Manasrah, Alex Depoutovitch, S. Al-Kiswany
We present an extensive study focused on partial network partitioning. Partial network partitions disrupt the communication between some but not all nodes in a cluster. First, we conduct a comprehensive study of system failures caused by this fault in 13 popular systems. Our study reveals that the studied failures are catastrophic (e.g., lead to data loss), easily manifest, and are mainly due to design flaws. Our analysis identifies vulnerabilities in core systems mechanisms including scheduling, membership management, and ZooKeeper-based configuration management. Second, we dissect the design of nine popular systems and identify four principled approaches for tolerating partial partitions. Unfortunately, our analysis shows that implemented fault tolerance techniques are inadequate for modern systems; they either patch a particular mechanism or lead to a complete cluster shutdown, even when alternative network paths exist. Finally, our findings motivate us to build Nifty, a transparent communication layer that masks partial network partitions. Nifty builds an overlay between nodes to detour packets around partial partitions. Nifty provides an approach for applications to optimize their operation during a partial partition. We demonstrate the benefit of this approach through integrating Nifty with VoltDB, HDFS, and Kafka.
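The overlay idea behind Nifty — when a direct link fails but alternative network paths exist, detour packets through intermediate nodes — can be sketched as a breadth-first search over the current link map. This is an illustrative sketch, not Nifty's implementation:

```python
from collections import deque

def reachable(links, src, dst):
    """BFS over the currently working direct links; returns a detour path or None."""
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Partial partition: the a<->b link is down, but both nodes still reach c.
links = {"a": ["c"], "b": ["c"], "c": ["a", "b"]}
print(reachable(links, "a", "b"))  # ['a', 'c', 'b'] — packets detour via c
```

A transparent layer that forwards traffic along such detour paths masks the partial partition from the application, rather than forcing a full cluster shutdown.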
Citations: 1
Efficient Instruction Scheduling Using Real-time Load Delay Tracking
IF 1.5 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science, Theory & Methods) | Pub Date: 2022-11-24 | DOI: 10.1145/3548681
Andreas Diavastos, Trevor E. Carlson

Issue time prediction processors use dataflow dependencies and predefined instruction latencies to predict issue times of repeated instructions. In this work, we make two key observations: (1) memory accesses often take longer to arrive than the static, predefined access latency used to describe these systems, due to contention in the memory hierarchy and variability in DRAM access times; and (2) these memory access delays often repeat across iterations of the same code. We propose a new processor microarchitecture that replaces a complex reservation-station-based scheduler with an efficient, scalable alternative. Our scheduling technique tracks real-time delays of loads to accurately predict instruction issue times and uses a reordering mechanism to prioritize instructions based on that prediction. To accomplish this in an energy-efficient manner we introduce (1) an instruction delay learning mechanism that monitors repeated load instructions and learns their latest delay, (2) an issue time predictor that uses learned delays and dataflow dependencies to predict instruction issue times, and (3) priority queues that reorder instructions based on their issue time prediction. Our processor achieves 86.2% of the performance of a traditional out-of-order processor, higher than previous efficient scheduler proposals, while consuming 30% less power.
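The three mechanisms above — delay learning, issue time prediction from dataflow dependencies, and priority-queue reordering — can be sketched in software as follows. Class and method names are illustrative inventions, not the paper's microarchitecture:

```python
import heapq

class IssueTimePredictor:
    """Sketch: track real-time load delays to predict instruction issue times."""

    def __init__(self, default_latency=1):
        self.learned_delay = {}          # load PC -> latest observed delay (cycles)
        self.default_latency = default_latency

    def record(self, pc, observed_delay):
        # Delay learning: remember the most recent delay of this repeated load.
        self.learned_delay[pc] = observed_delay

    def predict_issue(self, operand_ready_times):
        # An instruction can issue once all of its producers have completed.
        return max(operand_ready_times, default=0)

    def predict_complete(self, pc, operand_ready_times):
        latency = self.learned_delay.get(pc, self.default_latency)
        return self.predict_issue(operand_ready_times) + latency

predictor = IssueTimePredictor()
predictor.record(0x40, 12)               # this load previously missed in cache

# Priority queue reorders instructions by predicted issue time.
ready = []
heapq.heappush(ready, (predictor.predict_issue([3]), 0x40))  # waits on a producer
heapq.heappush(ready, (predictor.predict_issue([0]), 0x44))  # operands ready now
first = heapq.heappop(ready)             # instruction predicted to issue earliest
```

The point of the sketch is the data flow: learned delays feed completion-time predictions, which in turn order the queue, replacing an associative wakeup of a reservation station.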

Citations: 0
Using Pattern of On-Off Routers and Links and Router Delays to Protect Network-on-Chip Intellectual Property
IF 1.5 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science, Theory & Methods) | Pub Date: 2022-11-24 | DOI: 10.1145/3548680
Arnab Kumar Biswas

Intellectual Property (IP) reuse is a well known practice in chip design processes. Nowadays, network-on-chips (NoCs) are increasingly used as IP and sold by various vendors to be integrated in a multiprocessor system-on-chip (MPSoC). However, IP reuse exposes the design to IP theft, and an attacker can launch IP stealing attacks against NoC IPs. With the growing adoption of MPSoC, such attacks can result in huge financial losses. In this article, we propose four NoC IP protection techniques using fingerprint embedding: ON-OFF router-based fingerprinting (ORF), ON-OFF link-based fingerprinting (OLF), Router delay-based fingerprinting (RTDF), and Row delay-based fingerprinting (RWDF). ORF and OLF techniques use patterns of ON-OFF routers and links, respectively, while RTDF and RWDF techniques use router delays to embed fingerprints. We show that all of our proposed techniques require much less hardware overhead compared to an existing NoC IP security solution (square spiral routing) and also provide better security from removal and masking attacks. In particular, our proposed techniques require between 40.75% and 48.43% less router area compared to the existing solution. We also show that our solutions do not affect the normal packet latency and hence do not degrade the NoC performance.
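As a toy illustration of the fingerprint-embedding idea (the actual ORF/OLF schemes operate on hardware routers and links, not software), one can map fingerprint bits onto an ON/OFF configuration and recover them later. This sketch is purely illustrative:

```python
def embed_fingerprint(num_routers: int, fingerprint: int):
    """Map fingerprint bits onto an ON/OFF pattern of routers (bit i -> router i)."""
    return [bool((fingerprint >> i) & 1) for i in range(num_routers)]

def extract_fingerprint(pattern) -> int:
    """Recover the embedded fingerprint by reading the ON/OFF pattern back."""
    value = 0
    for i, on in enumerate(pattern):
        value |= int(on) << i
    return value

# An 8-bit fingerprint embedded across 8 routers and verified on extraction.
pattern = embed_fingerprint(8, 0b10110010)
assert extract_fingerprint(pattern) == 0b10110010
```

The delay-based variants (RTDF/RWDF) follow the same embed/extract structure but encode bits in router delays rather than in an ON/OFF pattern.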

Citations: 0
ROME: All Overlays Lead to Aggregation, but Some Are Faster than Others
IF 1.5 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science, Theory & Methods) | Pub Date: 2022-07-05 | DOI: 10.1145/3516430
Marcel Blöcher, Emilio Coppa, Pascal Kleber, Patrick Eugster, William Culhane, Masoud Saeida Ardekani

Aggregation is common in data analytics and crucial to distilling information from large datasets, but current data analytics frameworks do not fully exploit the potential for optimization in such phases. The lack of optimization is particularly notable in current “online” approaches that store data in main memory across nodes, shifting the bottleneck away from disk I/O toward network and compute resources, thus increasing the relative performance impact of distributed aggregation phases.

We present ROME, an aggregation system for use within data analytics frameworks or in isolation. ROME uses a set of novel heuristics based primarily on basic knowledge of aggregation functions combined with deployment constraints to efficiently aggregate results from computations performed on individual data subsets across nodes (e.g., merging sorted lists resulting from top-k). The user can either provide minimal information that allows our heuristics to be applied directly, or ROME can autodetect the relevant information at little cost. We integrated ROME as a subsystem into the Spark and Flink data analytics frameworks. We use real-world data to experimentally demonstrate speedups up to 3× over single-level aggregation overlays, up to 21% over other multi-level overlays, and 50% for iterative algorithms like gradient descent at 100 iterations.
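The core aggregation step ROME optimizes — combining sorted partial results computed on per-node data subsets, e.g. for a top-k query — can be sketched in a few lines. The function name and data here are illustrative, not ROME's API:

```python
import heapq

def aggregate_topk(partials, k):
    """Merge per-node, descending-sorted partial results into a global top-k."""
    merged = heapq.merge(*partials, reverse=True)   # streaming k-way merge
    return [x for _, x in zip(range(k), merged)]    # take only the first k

# Each node computed a locally sorted (descending) top-k over its own subset.
node_results = [[9, 7, 2], [8, 5, 1], [6, 6, 3]]
print(aggregate_topk(node_results, 4))  # [9, 8, 7, 6]
```

ROME's contribution lies in choosing the overlay shape along which such merges run (how many levels, which nodes merge whose results), guided by properties of the aggregation function and deployment constraints.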

Citations: 0
An OpenMP Runtime for Transparent Work Sharing across Cache-Incoherent Heterogeneous Nodes
IF 1.5 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science, Theory & Methods) | Pub Date: 2022-07-05 | DOI: 10.1145/3505224
Robert Lyerly, Carlos Bilbao, Changwoo Min, Christopher J. Rossbach, Binoy Ravindran

In this work, we present libHetMP, an OpenMP runtime for automatically and transparently distributing parallel computation across heterogeneous nodes. libHetMP targets platforms comprising CPUs with different instruction set architectures (ISAs) coupled by a high-speed memory interconnect, where cross-ISA binary incompatibility and non-coherent caches require application data to be marshaled to be shared across CPUs. Because of this, work distribution decisions must take into account both the relative compute performance of asymmetric CPUs and communication overheads. libHetMP drives workload distribution decisions without programmer intervention by measuring performance characteristics during cross-node execution. A novel HetProbe loop iteration scheduler decides whether cross-node execution is beneficial, and either distributes work according to the relative performance of the CPUs when it is or places all work on the homogeneous CPU set providing the best performance when it is not. We evaluate libHetMP using compute kernels from several OpenMP benchmark suites and show a geometric-mean speedup of 41% in execution time across asymmetric CPUs. Because some workloads may exhibit irregular behavior across iterations, we extend libHetMP with a second scheduler called HetProbe-I. The evaluation of HetProbe-I shows it can further improve speedup for irregular computation, in some cases by up to 24%, by triggering periodic distribution decisions.
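The scheduling decision the abstract describes — distribute iterations by relative CPU performance when cross-node execution pays off, otherwise keep everything on the fastest homogeneous set — can be sketched roughly like this. This is a hypothetical Python model, not libHetMP's runtime; the throughput and marshaling-cost inputs stand in for the measurements the HetProbe scheduler collects during cross-node execution.

```python
def split_iterations(n_iters, throughputs):
    """Partition loop iterations proportionally to each CPU set's
    measured throughput (iterations per second)."""
    total = sum(throughputs)
    shares = [int(n_iters * t / total) for t in throughputs]
    shares[-1] += n_iters - sum(shares)  # hand the rounding remainder to the last set
    return shares

def choose_placement(n_iters, throughputs, marshal_cost):
    """Distribute across all CPU sets only if the predicted heterogeneous
    time (including the data-marshaling overhead that non-coherent caches
    impose) beats running on the single fastest set alone."""
    hetero_time = n_iters / sum(throughputs) + marshal_cost
    homo_time = n_iters / max(throughputs)
    if hetero_time < homo_time:
        return split_iterations(n_iters, throughputs)
    best = throughputs.index(max(throughputs))
    return [n_iters if i == best else 0 for i in range(len(throughputs))]
```

For example, with two CPU sets at 3:1 relative throughput, a small marshaling cost yields a 75/25 split, while a large one keeps all iterations on the faster set.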

Citations: 0
Unified Holistic Memory Management Supporting Multiple Big Data Processing Frameworks over Hybrid Memories
IF 1.5 Zone 4, Computer Science Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2022-07-05 DOI: https://dl.acm.org/doi/full/10.1145/3511211
Lei Chen, Jiacheng Zhao, Chenxi Wang, Ting Cao, John Zigman, Haris Volos, Onur Mutlu, Fang Lv, Xiaobing Feng, Guoqing Harry Xu, Huimin Cui

To process real-world datasets, modern data-parallel systems often require extremely large amounts of memory, which are both costly and energy inefficient. Emerging non-volatile memory (NVM) technologies offer high capacity compared to DRAM and low energy compared to SSDs. Hence, NVMs have the potential to fundamentally change the dichotomy between DRAM and durable storage in Big Data processing. However, most Big Data applications are written in managed languages and executed on top of a managed runtime that already performs various dimensions of memory management. Supporting hybrid physical memories adds a new dimension, creating unique challenges in data replacement. This article proposes Panthera, a semantics-aware, fully automated memory management technique for Big Data processing over hybrid memories. Panthera analyzes user programs on a Big Data system to infer their coarse-grained access patterns, which are then passed to the Panthera runtime for efficient data placement and migration. For Big Data applications, the coarse-grained data division information is accurate enough to guide the GC for data layout, which hardly incurs overhead in data monitoring and moving. We implemented Panthera in OpenJDK and Apache Spark. Based on Big Data applications’ memory access pattern, we also implemented a new profiling-guided optimization strategy, which is transparent to applications. With this optimization, our extensive evaluation demonstrates that Panthera reduces energy by 32–53% at less than 1% time overhead on average. To show Panthera’s applicability, we extend it to QuickCached, a pure Java implementation of Memcached. Our evaluation results show that Panthera reduces energy by 28.7% at 5.2% time overhead on average.
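The coarse-grained placement decision described above — frequently accessed data kept in DRAM, the rest spilled to NVM — can be sketched as a greedy policy. This is an illustrative model only: Panthera operates inside the GC on access patterns inferred from user programs, not on explicit (name, size, frequency) tuples as assumed here.

```python
def place_data(objects, dram_budget):
    """Greedy coarse-grained placement: put the most frequently accessed
    objects in DRAM until the capacity budget is exhausted, and spill the
    rest to NVM. `objects` is a list of (name, size, access_freq) tuples."""
    placement = {}
    free = dram_budget
    # consider hottest data first
    for name, size, _freq in sorted(objects, key=lambda o: o[2], reverse=True):
        if size <= free:
            placement[name] = "DRAM"
            free -= size
        else:
            placement[name] = "NVM"
    return placement
```

Because the division is coarse-grained (whole objects or data sets rather than individual fields), the placement decision itself is cheap — which is the abstract's point about avoiding monitoring and moving overhead.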

Citations: 0
Efficient Instruction Scheduling Using Real-time Load Delay Tracking
IF 1.5 Zone 4, Computer Science Q2 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2021-09-07 DOI: 10.1145/3548681
Andreas Diavastos, Trevor E. Carlson
Issue time prediction processors use dataflow dependencies and predefined instruction latencies to predict the issue times of repeated instructions. In this work, we make two key observations: (1) memory accesses often take longer to complete than the static, predefined access latency used to describe these systems, due to contention in the memory hierarchy and variability in DRAM access times, and (2) these memory access delays often repeat across iterations of the same code. We propose a new processor microarchitecture that replaces a complex reservation-station-based scheduler with an efficient, scalable alternative. Our scheduling technique tracks real-time delays of loads to accurately predict instruction issue times and uses a reordering mechanism to prioritize instructions based on that prediction. To accomplish this in an energy-efficient manner, we introduce (1) an instruction delay learning mechanism that monitors repeated load instructions and learns their latest delay, (2) an issue time predictor that uses learned delays and dataflow dependencies to predict instruction issue times, and (3) priority queues that reorder instructions based on their issue time prediction. Our processor achieves 86.2% of the performance of a traditional out-of-order processor, higher than previous efficient scheduler proposals, while consuming 30% less power.
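The three mechanisms in the abstract — learned per-load delays, dataflow-based issue-time prediction, and priority-queue reordering — can be sketched together in a small model. The instruction encoding and the single-cycle default latency below are assumptions for illustration, not the paper's microarchitecture.

```python
import heapq

def predict_issue_times(instrs, load_delays, default_latency=1):
    """Predict issue cycles from dataflow dependencies, using learned
    per-load delays in place of one static load latency, then reorder
    instructions by predicted issue time with a priority queue.

    instrs: dict name -> (deps, is_load), given in program order.
    load_delays: name -> delay learned from earlier iterations of the code."""
    ready = {}  # cycle at which each instruction's result becomes available
    for name, (deps, is_load) in instrs.items():
        issue = max((ready[d] for d in deps), default=0)
        latency = load_delays.get(name, default_latency) if is_load else default_latency
        ready[name] = issue + latency
    # priority queue keyed by each instruction's predicted issue cycle
    heap = [(max((ready[d] for d in instrs[n][0]), default=0), n) for n in instrs]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

In this model, an instruction waiting on a slow load is pushed behind independent instructions whose inputs are predicted to be ready sooner, which is the effect the reordering mechanism aims for.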
Citations: 2
Journal
ACM Transactions on Computer Systems