
Latest publications in IEEE Transactions on Computers

A Numerical Variability Approach to Results Stability Tests and Its Application to Neuroimaging
IF 3.6 | CAS Zone 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-08 | DOI: 10.1109/TC.2024.3475586
Yohan Chatelain;Loïc Tetrel;Christopher J. Markiewicz;Mathias Goncalves;Gregory Kiar;Oscar Esteban;Pierre Bellec;Tristan Glatard
Ensuring the long-term reproducibility of data analyses requires results stability tests to verify that analysis results remain within acceptable variation bounds despite inevitable software updates and hardware evolutions. This paper introduces a numerical variability approach for results stability tests, which determines acceptable variation bounds using random rounding of floating-point calculations. By applying the resulting stability test to fMRIPrep, a widely-used neuroimaging tool, we show that the test is sensitive enough to detect subtle updates in image processing methods while remaining specific enough to accept numerical variations within a reference version of the application. This result contributes to enhancing the reliability and reproducibility of data analyses by providing a robust and flexible method for stability testing.
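As a rough illustration of the idea, the sketch below simulates random rounding by perturbing each floating-point step within one unit of a chosen working precision, uses repeated noisy runs of a reference version to derive acceptable variation bounds, and then checks results against those bounds. The toy pipeline, the precision `eps`, and all names are hypothetical, not the paper's actual implementation:

```python
import random

def rr(x, eps=1e-7):
    # random rounding: perturb an operation's result up or down within one
    # unit of relative precision eps (a simplification of Monte Carlo
    # arithmetic; eps is a hypothetical working precision)
    return x * (1.0 + random.uniform(-eps, eps))

def pipeline(data, noisy=False):
    # stand-in for an analysis pipeline; `noisy` enables random rounding
    acc = 0.0
    for v in data:
        step = v * v + 0.5 * v
        acc += rr(step) if noisy else step
    return acc

random.seed(0)
data = [float(i) / 7.0 for i in range(1000)]
# repeated noisy runs of the reference version define the acceptable bounds
samples = [pipeline(data, noisy=True) for _ in range(30)]
lo, hi = min(samples), max(samples)

def is_stable(result):
    return lo <= result <= hi

assert is_stable(pipeline(data))             # the reference result passes
assert not is_stable(pipeline(data) * 1.01)  # a 1% shift is flagged
```

The bounds are tight (on the order of the numerical noise itself), so the test accepts numerical variation but rejects changes that alter the computed result beyond it.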
IEEE Transactions on Computers, vol. 74, no. 1, pp. 200-209.
Citations: 0
Efficient and Fast High-Performance Library Generation for Deep Learning Accelerators
IF 3.6 | CAS Zone 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-08 | DOI: 10.1109/TC.2024.3475575
Jun Bi;Yuanbo Wen;Xiaqing Li;Yongwei Zhao;Yuxuan Guo;Enshuai Zhou;Xing Hu;Zidong Du;Ling Li;Huaping Chen;Tianshi Chen;Qi Guo
The widespread adoption of deep learning accelerators (DLAs) underscores their pivotal role in improving the performance and energy efficiency of neural networks. To fully leverage the capabilities of these accelerators, exploration-based library generation approaches have been widely used to substantially reduce software development overhead. However, these approaches have been challenged by sub-optimal optimization results and excessive optimization overheads. In this paper, we propose Heron to generate high-performance libraries for DLAs in an efficient and fast way. The key is automatically enforcing massive constraints throughout the entire program generation process and guiding the exploration with an accurate pre-trained cost model. Heron represents the search space as a constraint satisfaction problem (CSP) and explores the space by evolving the CSPs. Thus, the sophisticated constraints of the search space are strictly preserved during the entire exploration process. The exploration algorithm can engage in space exploration using either online-trained or pre-trained models. Experimental results demonstrate that Heron achieves an average 2.71× speedup over three state-of-the-art automatic generation approaches. Also, compared to vendor-provided hand-tuned libraries, Heron achieves a 2.00× speedup on average. When employing a pre-trained model, Heron achieves an 11.6× compilation-time speedup with only a minor impact on execution time.
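The constraint-preserving exploration can be pictured with a deliberately tiny example. The tile-size domain, buffer limit, and cost model below are invented for illustration and are not Heron's actual formulation; the point is that candidates are only ever drawn from the constrained domain, so every evaluated program satisfies the CSP:

```python
import random

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

N, BUF = 64, 256           # hypothetical loop size and on-chip buffer limit
DOMAIN = divisors(N)       # constraint: tile sizes must divide N

def feasible(t1, t2):
    return t1 * t2 <= BUF  # constraint: tile footprint fits the buffer

def cost(t1, t2):
    # hypothetical cost model: favor large tiles (fewer buffer refills)
    # but penalize imbalance between the two dimensions
    return (N // t1) * (N // t2) + abs(t1 - t2)

def mutate(t1, t2):
    # mutation samples only constraint-respecting candidates, so every
    # point ever evaluated satisfies the CSP (the key idea in the abstract)
    for _ in range(100):
        c = (random.choice(DOMAIN), random.choice(DOMAIN))
        if feasible(*c):
            return c
    return t1, t2

random.seed(1)
pop = [(1, 1)] * 8
for _ in range(200):
    pop = sorted(pop + [mutate(*p) for p in pop], key=lambda p: cost(*p))[:8]
best = pop[0]
assert feasible(*best)
```

A real system would replace `cost` with a learned (pre-trained or online-trained) model, which is where the reported compilation-time savings come from.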
IEEE Transactions on Computers, vol. 74, no. 1, pp. 155-169.
Citations: 0
Efficient Service Function Chain Placement Over Heterogeneous Devices in Deviceless Edge Computing Environments
IF 3.6 | CAS Zone 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-08 | DOI: 10.1109/TC.2024.3475590
Yaodong Huang;Tingting Yao;Zelin Lin;Xiaojun Shang;Yukun Yuan;Laizhong Cui;Yuanyuan Yang
Heterogeneous devices in edge computing bring challenges as well as opportunities for edge computing to utilize powerful and heterogeneous hardware for a variety of complex tasks. In this paper, we propose a service function chain placement strategy that considers the heterogeneity of devices in deviceless edge computing environments. The service function chain system utilizes lightweight virtualization technologies to manage resources, accounts for device heterogeneity to support various complex tasks, and offers low-latency service to user requests. We formulate an optimal service function chain placement problem that minimizes service delay and cast it as a quasi-convex problem. We implement different edge applications that can be served by function chains and conduct extensive experiments on real heterogeneous edge devices. Results from the experiments and simulations show that our proposed service function chain scheme is applicable in edge environments and performs well in terms of service latency, resource utilization, and the power consumption of edge devices.
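Casting placement as a quasi-convex problem matters because unimodal objectives admit simple interval-narrowing solvers. A minimal sketch with a hypothetical one-dimensional delay model (not the paper's actual formulation):

```python
def ternary_min(f, lo, hi, iters=100):
    # ternary search narrows in on the minimizer of a quasi-convex
    # (unimodal) function; this kind of cheap, provably convergent solver
    # is exactly what a quasi-convex formulation buys you
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

# hypothetical delay model: queueing delay falls as we allocate more
# capacity x to a function, while placement/transfer cost grows with x
delay = lambda x: 1.0 / x + 0.1 * x      # quasi-convex on (0, inf)
x_star = ternary_min(delay, 0.01, 100.0)
# the analytic minimizer of this toy model is sqrt(1 / 0.1) = sqrt(10)
assert abs(x_star - 10 ** 0.5) < 1e-6
```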
IEEE Transactions on Computers, vol. 74, no. 1, pp. 222-236.
Citations: 0
Rethinking Control Flow in Spatial Architectures: Insights Into Control Flow Plane Design
IF 3.6 | CAS Zone 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-08 | DOI: 10.1109/TC.2024.3475582
Jinyi Deng;Xinru Tang;Jiahao Zhang;Yuxuan Li;Linyun Zhang;Fengbin Tu;Shaojun Wei;Yang Hu;Shouyi Yin
Spatial architecture is a high-performance paradigm that employs control flow graphs and data flow graphs as its computation model and producer/consumer models as its execution model. However, existing spatial architectures struggle to handle control flow. Upon thoroughly characterizing their PE execution models, we observe that they lack autonomous, peer-to-peer, and temporally loosely-coupled control flow handling capability, which degrades their performance in control-intensive programs. To tackle these challenges, we propose Marionette, a spatial architecture with an explicitly designed control flow plane. We develop the full Marionette stack, from ISA, compiler, and simulator to RTL. Marionette's flexible Control Flow Plane enables autonomous, peer-to-peer, and temporally loosely-coupled control flow management. Its Proactive PE Configuration ensures computation-overlapped and timely configuration to improve branch-divergence handling. Besides, Marionette's Agile PE Assignment improves the pipeline performance of imperfect loops. Compared to state-of-the-art spatial architectures, experimental results demonstrate that Marionette outperforms Softbrain, TIA, REVEL, and RipTide by a geomean of 2.88×, 3.38×, 1.55×, and 2.66× across a variety of challenging control-intensive programs.
IEEE Transactions on Computers, vol. 74, no. 1, pp. 185-199.
Citations: 0
Parallel Modular Multiplication Using Variable Length Algorithms
IF 3.6 | CAS Zone 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-08 | DOI: 10.1109/TC.2024.3475574
Shahab Mirzaei-Teshnizi;Parviz Keshavarzi
This paper presents two improved modular multiplication algorithms: a variable-length interleaved modular multiplication (VLIM) algorithm and a parallel modular multiplication (P_MM) method that uses variable-length algorithms to achieve high throughput. The new interleaved modular multiplication algorithm applies a zero-counting and partitioning algorithm to the multiplier's non-adjacent form (NAF), dividing the input into variable-radix sections. Each section consists of a run of zero digits and a non-zero digit (−1 or 1) in the most significant position. Therefore, in addition to reducing the number of required clock pulses, the high-radix partial multiplication $X^{(i)} \cdot Y$ is simplified to a binary addition or subtraction, and the multiplications for consecutive zero bits are executed in one clock cycle instead of several. The proposed parallel modular multiplication algorithm divides the multiplier into two parts and uses the VLIM and variable-length Montgomery modular multiplication (VLM3) methods to compute the modular multiplications of the upper and lower portions in parallel, pairing the parts so that their multiplication times are close. The implementation results on a Xilinx Virtex-7 FPGA show that the parallel modular multiplication computes a 2048-bit modular multiplication in 0.903 µs, with a maximum clock frequency of 387 MHz and an area × time per bit value of 9.14.
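The NAF recoding at the heart of the method, and a plain interleaved modular multiplication driven by it, can be sketched as follows. Note this sketch processes one NAF digit per step; the variable-radix grouping of zero runs that gives VLIM its speedup is omitted for brevity:

```python
def naf(n):
    # non-adjacent form: digits in {-1, 0, 1} with no two adjacent non-zeros
    digits = []
    while n > 0:
        if n & 1:
            d = 2 - (n % 4)          # 1 if n % 4 == 1, else -1
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits                     # least-significant digit first

def interleaved_modmul(x, y, m):
    # interleaved modular multiplication scanning the NAF of x from the
    # most significant digit; each non-zero digit contributes +y or -y,
    # and the accumulator is reduced into [0, m) at every step
    acc = 0
    for d in reversed(naf(x)):
        acc = (acc << 1) % m
        if d == 1:
            acc = (acc + y) % m
        elif d == -1:
            acc = (acc - y) % m
    return acc

x, y, m = 0b101101110, 12345, 99991
assert sum(d << i for i, d in enumerate(naf(x))) == x       # NAF is exact
assert all(a * b == 0 for a, b in zip(naf(x), naf(x)[1:]))  # non-adjacency
assert interleaved_modmul(x, y, m) == (x * y) % m
```

Because NAF guarantees every non-zero digit is followed by at least one zero, a hardware implementation can merge each zero run with its leading ±1 digit into a single variable-radix step, which is the grouping the abstract describes.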
IEEE Transactions on Computers, vol. 74, no. 1, pp. 143-154.
Citations: 0
ClusPar: A Game-Theoretic Approach for Efficient and Scalable Streaming Edge Partitioning
IF 3.6 | CAS Zone 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-08 | DOI: 10.1109/TC.2024.3475568
Zezhong Ding;Deyu Kong;Zhuoxu Zhang;Xike Xie;Jianliang Xu
Streaming edge partitioning plays a crucial role in the distributed processing of large-scale web graphs, such as PageRank. The quality of partitioning is of utmost importance and directly affects the runtime cost of distributed graph processing. However, streaming graph clustering, a key component of mainstream streaming edge partitioning, is vertex-centric. This incurs a mismatch with the edge-centric partitioning strategy, necessitating additional post-processing and several graph traversals to transition from vertex-centric clusters to edge-centric partitions. This transition not only adds extra runtime overhead but also risks a decline in partitioning quality. In this paper, we propose a novel algorithm, called ClusPar, to address the problem of streaming edge partitioning. The ClusPar framework consists of two steps: streaming edge clustering and edge cluster partitioning. Different from prior studies, the first step traverses the input graph in a single pass to generate edge-centric clusters, while the second step applies game theory over these edge-centric clusters to produce partitions. Extensive experiments show that ClusPar outperforms the state-of-the-art streaming edge partitioning methods in terms of partitioning quality, efficiency, and scalability.
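For intuition, a minimal greedy vertex-cut streaming partitioner, far simpler than ClusPar and without its edge clustering or game-theoretic refinement, shows the single-pass, edge-centric style of processing the abstract contrasts with vertex-centric clustering:

```python
from collections import defaultdict

def stream_partition(edges, k):
    # greedy vertex-cut streaming partitioning: each edge is assigned, in
    # one pass, to the least-loaded partition among those already holding
    # one of its endpoints (any partition if neither endpoint is placed)
    loads = [0] * k
    replicas = defaultdict(set)    # vertex -> partitions holding a copy
    assignment = []
    for u, v in edges:
        cands = (replicas[u] | replicas[v]) or set(range(k))
        p = min(cands, key=lambda i: loads[i])
        loads[p] += 1
        replicas[u].add(p)
        replicas[v].add(p)
        assignment.append(p)
    return assignment, replicas, loads

# two triangles joined by a bridge edge
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (0, 3)]
assignment, replicas, loads = stream_partition(edges, 2)
# replication factor: average number of partitions each vertex appears in
rf = sum(len(s) for s in replicas.values()) / len(replicas)
assert len(assignment) == len(edges)
assert 1.0 <= rf <= 2.0
```

Quality for vertex-cut partitioners is usually measured by this replication factor (lower is better) together with load balance across partitions.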
IEEE Transactions on Computers, vol. 74, no. 1, pp. 116-130.
Citations: 0
Federated Learning Based DDoS Attacks Detection in Large Scale Software-Defined Network
IF 3.6 | CAS Zone 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-10-04 | DOI: 10.1109/TC.2024.3474180
Yannis Steve Nsuloun Fotse;Vianney Kengne Tchendji;Mthulisi Velempini
Software-Defined Networking (SDN) is an innovative concept that segments the network into three planes: a control plane comprising one or multiple controllers; a data plane responsible for data transmission; and an application plane that enables the reconfiguration of network functionalities. Nevertheless, this approach exposes the controller as a prime target for attacks such as Distributed Denial of Service (DDoS). Current DDoS defense schemes often increase the controller load and resource consumption. These schemes are typically tailored to single-controller architectures, a significant limitation given the scalability requirements of large-scale SDN. To address these limitations, we introduce an efficient Federated Learning approach, named "FedLAD," designed to counter DDoS attacks in SDN-based large-scale networks, particularly in multi-controller architectures. Federated learning is a decentralized approach to machine learning in which models are trained across multiple devices, with controllers storing local data samples without exchanging them. The evaluation of the proposed scheme's performance on the InSDN, CICDDoS2019, and CICDoS2017 datasets shows an accuracy exceeding 98%, a significant improvement over related works. Furthermore, the evaluation of the FedLAD protocol with real-time traffic in an SDN context demonstrates its ability to detect DDoS attacks with high accuracy and minimal resource consumption. To the best of our knowledge, this work introduces a new technique in applying FL to DDoS attack detection in large-scale SDN.
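The aggregation step that lets controllers collaborate without exchanging traffic data can be sketched as a FedAvg-style parameter average. The toy parameter vectors below are invented for illustration; FedLAD's actual model and weighting scheme are not given in the abstract:

```python
def fed_avg(models, weights=None):
    # FedAvg-style aggregation: each controller trains locally, and only
    # model parameters (never raw traffic data) are averaged centrally;
    # weights would normally be proportional to each local dataset's size
    n = len(models)
    weights = weights or [1.0 / n] * n
    return [sum(w * m[i] for m, w in zip(models, weights))
            for i in range(len(models[0]))]

# three controllers, each with locally trained parameters (toy values)
local = [[0.2, 1.0], [0.4, 2.0], [0.6, 3.0]]
global_model = fed_avg(local)
assert all(abs(a - b) < 1e-9 for a, b in zip(global_model, [0.4, 2.0]))
```

Each round, the aggregated `global_model` would be broadcast back to the controllers for the next local training pass.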
IEEE Transactions on Computers, vol. 74, no. 1, pp. 101-115. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10705345
Citations: 0
Olive-Like Networking: A Uniformity Driven Robust Topology Generation Scheme for IoT System
IF 3.6 | CAS Zone 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-09-23 | DOI: 10.1109/TC.2024.3465934
Tie Qiu;Jingchen Sun;Ning Chen;Songwei Zhang;Weisheng Si;Xingwei Wang
With the scale of the Internet of Things (IoT) system growing constantly, node failures frequently occur due to device malfunctions or cyberattacks. Existing robust network generation methods utilize heuristic algorithms or neural network approaches to optimize the initial topology. These methods do not explore the core of topology robustness, namely how edges are allocated to each node in the topology. As a result, these methods use massive iterative processes to optimize the initial topology, leading to substantial time overhead when the scale of the topology is large. We examine various robust networks and observe that uniform degree distribution is the core of topology robustness. Consequently, we propose a novel UNIformity driven robusT topologY generation scheme (UNITY) for IoT systems to prevent the node degree from becoming excessively high or low, thereby balancing node degrees. Comprehensive experimental results demonstrate that networks generated with UNITY have an “olive-like” topology consisting of a substantial number of medium-degree nodes and possess strong robustness against both random node failures and targeted attacks. This promising result indicates that the UNITY makes a significant advancement in designing robust IoT systems.
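The link between degree uniformity and robustness can be demonstrated with a toy comparison: a ring lattice (near-uniform degrees) versus a hub-and-spoke graph (skewed degrees). Both graphs are illustrative only, not UNITY's generator; removing the highest-degree nodes barely affects the uniform topology but disconnects the skewed one:

```python
from collections import defaultdict

def largest_cc(adj, removed):
    # size of the largest connected component after deleting `removed`
    seen, best = set(), 0
    for s in adj:
        if s in removed or s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v not in removed and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best

def ring_lattice(n, k=2):
    # near-uniform degree: every node links to its k nearest neighbors
    adj = defaultdict(set)
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def hub_graph(n):
    # skewed degree: node 0 is a hub that every other node depends on
    adj = defaultdict(set)
    for i in range(1, n):
        adj[0].add(i)
        adj[i].add(0)
    return adj

n = 100
uniform, hub = ring_lattice(n), hub_graph(n)
top = lambda adj, m: set(sorted(adj, key=lambda u: -len(adj[u]))[:m])
# a targeted attack on the 5 highest-degree nodes: the uniform topology
# stays almost fully connected, the hub topology shatters
assert largest_cc(uniform, top(uniform, 5)) > 80
assert largest_cc(hub, top(hub, 5)) == 1
```

This is the intuition behind balancing node degrees: with no high-degree node to target, an attacker's best move removes no more of the network than a random failure would.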
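The core observation above — that uniform degree distribution is what makes a topology robust — can be illustrated with a toy rewiring loop that repeatedly moves an edge from the highest-degree node to the lowest-degree one. This is a sketch of the general idea only, not the UNITY scheme itself; the function name and the star-graph demo are invented for illustration.

```python
import random

def rewire_toward_uniform(edges, n_nodes, rounds=100, seed=0):
    """Rewire edges from the highest- to the lowest-degree node."""
    rng = random.Random(seed)
    edge_set = set(frozenset(e) for e in edges)

    def degrees():
        d = [0] * n_nodes
        for e in edge_set:
            for v in e:
                d[v] += 1
        return d

    for _ in range(rounds):
        d = degrees()
        hi = max(range(n_nodes), key=d.__getitem__)
        lo = min(range(n_nodes), key=d.__getitem__)
        if d[hi] - d[lo] <= 1:
            break  # degree distribution is already near-uniform
        # pick an edge at the overloaded node and try to move it to `lo`
        candidates = [e for e in edge_set if hi in e]
        e = rng.choice(candidates)
        other, = e - {hi}
        new_e = frozenset((other, lo))
        if len(new_e) == 2 and new_e not in edge_set:
            edge_set.remove(e)   # detach from the high-degree node...
            edge_set.add(new_e)  # ...and reattach at the low-degree node
    return [tuple(e) for e in edge_set]

# demo: flatten a star graph (node 0 linked to everyone else)
star = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5)]
balanced = rewire_toward_uniform(star, 6)
print(balanced)
```

Starting from a star — a single hub whose failure disconnects everything — the loop spreads edges across the leaves, trading the fragile hub for the "olive-like" middle-heavy degree profile the abstract describes.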
FiDRL: Flexible Invocation-Based Deep Reinforcement Learning for DVFS Scheduling in Embedded Systems
IF 3.6 | CAS Region 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-09-23 | DOI: 10.1109/TC.2024.3465933
Jingjin Li;Weixiong Jiang;Yuting He;Qingyu Yang;Anqi Gao;Yajun Ha;Ender Özcan;Ruibin Bai;Tianxiang Cui;Heng Yu
Deep Reinforcement Learning (DRL)-based Dynamic Voltage Frequency Scaling (DVFS) has shown great promise for energy conservation in embedded systems. While many works were devoted to validating its efficacy or improving its performance, few discuss the feasibility of the DRL agent deployment for embedded computing. State-of-the-art approaches focus on the miniaturization of agents’ inferential networks, such as pruning and quantization, to minimize their energy and resource consumption. However, this spatial-based paradigm still proves inadequate for resource-stringent systems. In this paper, we address the feasibility from a temporal perspective, where FiDRL, a flexible invocation-based DRL model is proposed to judiciously invoke itself to minimize the overall system energy consumption, given that the DRL agent incurs non-negligible energy overhead during invocations. Our approach is three-fold: (1) FiDRL that extends DRL by incorporating the agent's invocation interval into the action space to achieve invocation flexibility; (2) a FiDRL-based DVFS approach for both inter- and intra-task scheduling that minimizes the overall execution energy consumption; and (3) a FiDRL-based DVFS platform design and an on/off-chip hybrid algorithm specialized for training the DRL agent for embedded systems. Experiment results show that FiDRL achieves 55.1% agent invocation cost reduction, under 23.3% overall energy reduction, compared to state-of-the-art approaches.
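The key idea in the abstract — putting the agent's own invocation interval into the action space so that invocation overhead is traded off against control granularity — can be sketched with a toy energy model. All constants, the cubic power model, and the policies below are invented for illustration; this is not FiDRL's actual environment or agent.

```python
FREQ_LEVELS = [0.5, 1.0, 1.5]   # available frequency levels in GHz (invented)
AGENT_COST = 0.2                # energy cost of one agent invocation in J (invented)

def rollout(policy, horizon=10):
    """Run a policy returning (freq_index, invocation_interval); tally energy."""
    t, energy = 0, 0.0
    while t < horizon:
        f_idx, interval = policy(t)      # the action picks BOTH knobs
        energy += AGENT_COST             # pay the overhead of this invocation
        f = FREQ_LEVELS[f_idx]
        energy += (f ** 3) * interval    # toy dynamic-power model ~ f^3
        t += interval
    return energy

# same frequency, but invoking the agent every step vs. every 5 steps
dense = rollout(lambda t: (0, 1))
sparse = rollout(lambda t: (0, 5))
print(dense, sparse)   # sparse invocation spends less total energy here
```

Because each invocation costs energy, the sparser policy wins whenever the workload is steady — which is exactly why letting the agent choose its next invocation time, rather than running it every scheduling tick, can lower overall system energy.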
Remora: A Low-Latency DAG-Based BFT Through Optimistic Paths
IF 3.6 | CAS Region 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-09-16 | DOI: 10.1109/TC.2024.3461309
Xiaohai Dai;Wei Li;Guanxiong Wang;Jiang Xiao;Haoyang Chen;Shufei Li;Albert Y. Zomaya;Hai Jin
Standing as a foundational element within blockchain systems, the Byzantine Fault Tolerant (BFT) consensus has garnered significant attention over the past decade. The introduction of a Directed Acyclic Graph (DAG) structure into BFT consensus design, termed DAG-based BFT, has emerged to bolster throughput. However, prevalent DAG-based protocols grapple with substantial latency issues, suffering from a latency gap compared to non-DAG protocols. For instance, leading-edge DAG-based protocols named GradedDAG and BullShark exhibit a good-case latency of 4 and 6 communication rounds, respectively. In contrast, the non-DAG protocol, exemplified by PBFT, attains a latency of 3 rounds in favorable conditions. To bridge this latency gap, we propose Remora, a novel DAG-based BFT protocol. Remora achieves a reduced latency of 3 rounds by incorporating optimistic paths. At its core, Remora endeavors to commit blocks through the optimistic path initially, facilitating low latency in favorable situations. Conversely, in unfavorable scenarios, Remora seamlessly transitions to a pessimistic path to ensure liveness. Various experiments validate Remora's feasibility and efficiency, highlighting its potential as a robust solution in the realm of BFT consensus protocols.
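The optimistic/pessimistic switch described above can be sketched as a quorum check under the usual BFT assumption n = 3f + 1: enough matching votes inside the fast window lets a block commit on the short path, otherwise the protocol falls back to preserve liveness. The 3-round fast-path figure comes from the abstract; the fallback round count and the function shape are invented for illustration and are not Remora's actual message flow.

```python
def commit_path(n, fast_votes):
    """Choose the commit path for a block in an n-replica system (n = 3f + 1).

    fast_votes: matching votes collected within the optimistic window.
    A quorum of n - f lets the block commit on the 3-round optimistic
    path; otherwise the protocol switches to the pessimistic fallback
    path (round count invented here) so that progress is never lost.
    """
    f = (n - 1) // 3          # tolerated Byzantine replicas
    quorum = n - f            # votes needed for a fast commit
    if fast_votes >= quorum:
        return ("optimistic", 3)
    return ("pessimistic", 6)

print(commit_path(4, 3))   # full quorum arrives in time: fast 3-round commit
print(commit_path(4, 2))   # degraded network: fall back for liveness
```

With n = 4 and f = 1, three matching votes suffice for the fast path; with one vote missing the block still commits, just on the slower path — the "low latency when favorable, liveness always" trade the abstract describes.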