
IET Computers and Digital Techniques: Latest Publications

Event-based high throughput computing: A series of case studies on a massively parallel softcore machine
IF 1.2 · CAS Region 4 (Computer Science) · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2022-12-19 · DOI: 10.1049/cdt2.12051
Mark Vousden, Jordan Morris, Graeme McLachlan Bragg, Jonathan Beaumont, Ashur Rafiev, Wayne Luk, David Thomas, Andrew Brown

This paper introduces an event-based computing paradigm, in which workers perform computation only in response to external stimuli (events). This approach is best employed on hardware with many thousands of small compute cores joined by a fast, low-latency interconnect, as opposed to traditional computers with fewer, faster cores. Event-based computing is timely because it provides an alternative to traditional big compute, which suffers from immense infrastructure and power costs. This paper presents four case-study applications, including problems in computational chemistry and condensed matter physics, in which an event-based approach finds solutions orders of magnitude more quickly than the equivalent traditional big-compute approach.
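The paradigm the abstract describes can be sketched in a few lines of Python: workers stay idle until an event arrives, and each delivered event may trigger further events to other workers, mimicking message passing over a low-latency interconnect. All names here are illustrative, not taken from the paper or its hardware.

```python
from collections import deque

class EventWorker:
    """A worker that computes only in response to events (illustrative model)."""
    def __init__(self, worker_id, handler):
        self.worker_id = worker_id
        self.handler = handler  # called once per delivered event

def run_event_machine(workers, initial_events):
    """Drain an event queue, delivering each event to its target worker.

    Each handler may return a list of new (target_id, payload) events,
    which are appended to the queue; no worker computes unless stimulated.
    """
    queue = deque(initial_events)
    delivered = 0
    while queue:
        target_id, payload = queue.popleft()
        new_events = workers[target_id].handler(payload) or []
        queue.extend(new_events)
        delivered += 1
    return delivered

# Example: a ring of three workers forwarding a counter until it reaches zero.
workers = {
    i: EventWorker(i, lambda n, i=i: [((i + 1) % 3, n - 1)] if n > 0 else [])
    for i in range(3)
}
print(run_event_machine(workers, [(0, 5)]))  # 6 events delivered in total
```

A real softcore machine would run thousands of such workers in parallel hardware rather than draining one queue sequentially, but the programming model is the same.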

Citations: 2
Voltage over-scaling CNT-based 8-bit multiplier by high-efficient GDI-based counters
IF 1.2 · CAS Region 4 (Computer Science) · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2022-11-28 · DOI: 10.1049/cdt2.12049
Ayoub Sadeghi, Nabiollah Shiri, Mahmood Rafiee, Abdolreza Darabi, Ebrahim Abiri

A new low-power, high-speed multiplier is presented based on the voltage over-scaling (VOS) technique and new 5:3 and 7:3 counter cells. VOS reduces power consumption in digital circuits, but its different voltage levels increase the delay in different stages of a multiplier. Hence, the proposed counters are implemented with the gate-diffusion input (GDI) technique to overcome the speed limitation of VOS-based circuits. The proposed GDI-based 5:3 and 7:3 counters save power and reduce area by 2x and 2.5x, respectively. To prevent the threshold-voltage (Vth) drop in the suggested GDI-based circuits, carbon nanotube field-effect transistor (CNTFET) technology is used. In the counters, the chirality vector and the tubes of the CNTFETs are properly adjusted to attain full-swing outputs with high driving capability. Their behaviour under heat distribution over different time intervals, a major issue in CNTFET technology, is also investigated, and their very low sensitivity is confirmed. The low complexity, high stability and efficient performance of the presented counter cells make the proposed VOS-CNTFET-GDI-based multiplier an attractive alternative to previous designs.
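Functionally, a 5:3 or 7:3 counter cell compresses five or seven 1-bit inputs into a 3-bit binary count of the ones. The paper's contribution is the transistor-level GDI/CNTFET realisation; the sketch below only captures the counters' truth tables as a behavioural reference model.

```python
def counter_7_3(bits):
    """Behavioural model of a 7:3 counter: seven 1-bit inputs compressed
    into a 3-bit count (c2, c1, c0) of the ones among them."""
    assert len(bits) == 7 and all(b in (0, 1) for b in bits)
    s = sum(bits)
    return (s >> 2) & 1, (s >> 1) & 1, s & 1

def counter_5_3(bits):
    """Behavioural model of a 5:3 counter (five inputs, 3-bit count)."""
    assert len(bits) == 5 and all(b in (0, 1) for b in bits)
    s = sum(bits)  # at most 5, so it always fits in 3 bits
    return (s >> 2) & 1, (s >> 1) & 1, s & 1

print(counter_7_3([1, 1, 0, 1, 0, 1, 1]))  # five ones -> (1, 0, 1)
```

Arrays of such counters reduce the partial-product columns of the 8-bit multiplier before a final fast addition.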

Citations: 3
A four-stage yield optimization technique for analog integrated circuits using optimal computing budget allocation and evolutionary algorithms
IF 1.2 · CAS Region 4 (Computer Science) · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2022-10-09 · DOI: 10.1049/cdt2.12048
Abbas Yaseri, Mohammad Hossein Maghami, Mehdi Radmehr

High yield estimation is necessary when designing analogue integrated circuits. In the Monte-Carlo (MC) method, many transistor-level simulations must be performed to obtain the desired result. Therefore, other methods need to be combined with MC simulation to reach high yield and high speed at the same time. In this paper, a four-stage yield optimisation approach is presented, which employs computational intelligence to accelerate yield estimation without losing accuracy. Firstly, designs that meet the desired characteristics are identified using critical analysis (CA). The aim of utilising CA is to avoid unnecessary repetition of MC simulations for non-critical solutions. In the second and third stages, the shuffled frog-leaping algorithm and the Non-dominated Sorting Genetic Algorithm-III are applied to improve performance. Finally, MC simulations are performed to produce the final result. The yield value obtained from the simulation results for a two-stage class-AB Operational Transconductance Amplifier (OTA) in 180 nm Complementary Metal-Oxide-Semiconductor (CMOS) technology is 99.85%. The proposed method requires less computational effort and achieves higher accuracy than MC-based approaches. Another advantage of using CA is that the initial population of the multi-objective optimisation algorithms is no longer random. Simulation results prove the efficiency of the proposed technique.
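The baseline the paper accelerates is plain Monte-Carlo yield estimation: sample the process parameters around their nominals, simulate each sample, and count the fraction that meets spec. A minimal sketch with a toy one-spec "circuit" (all parameter names, values and the gain model are illustrative assumptions, not from the paper):

```python
import random

def mc_yield(simulate, spec_ok, nominal, sigma, n_runs=1000, seed=1):
    """Plain Monte-Carlo yield estimation.

    simulate : maps a parameter dict to a performance dict
    spec_ok  : returns True if the performances meet all specifications
    nominal  : nominal parameter values; sigma: per-parameter std deviations
    """
    rng = random.Random(seed)
    passed = 0
    for _ in range(n_runs):
        # Gaussian process variation around the nominal design point.
        sample = {k: rng.gauss(v, sigma[k]) for k, v in nominal.items()}
        if spec_ok(simulate(sample)):
            passed += 1
    return passed / n_runs

# Toy "circuit": gain = gm * rout must exceed 95 despite variation.
simulate = lambda p: {"gain": p["gm"] * p["rout"]}
spec_ok = lambda perf: perf["gain"] > 95.0
y = mc_yield(simulate, spec_ok,
             nominal={"gm": 10.0, "rout": 10.0},
             sigma={"gm": 0.3, "rout": 0.3})
print(round(y, 2))
```

In the real flow, `simulate` is a transistor-level simulation, which is exactly why the paper's CA pre-filtering and evolutionary stages are needed to keep the run count manageable.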

Citations: 2
Illegal Trojan design and detection in asynchronous NULL Convention Logic and Sleep Convention Logic circuits
IF 1.2 · CAS Region 4 (Computer Science) · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2022-09-16 · DOI: 10.1049/cdt2.12047
Kushal K. Ponugoti, Sudarshan K. Srinivasan, Scott C. Smith, Nimish Mathure

With the rise of cyber warfare, the detection of hardware Trojans (malicious digital circuit components that can leak data and degrade performance) is an urgent issue. Quasi-Delay Insensitive asynchronous digital circuits, such as NULL Convention Logic (NCL) and Sleep Convention Logic, also known as Multi-Threshold NULL Convention Logic (MTNCL), have inherent security properties and resilience to large temperature fluctuations, which make them very attractive for extreme-environment applications such as space exploration, automotive and the power industry. This paper shows how the dual-rail encoding used in NCL and MTNCL can be exploited to design Trojans that would not be detected using existing methods. Generic threat models for such Trojans are given. Formal verification methods capable of accurately detecting these Trojans at the Register-Transfer Level are also provided. The detection methods were tested by embedding Trojans in NCL and MTNCL Rivest-Shamir-Adleman (RSA) decryption circuits. The methods were applied to 25 NCL and 25 MTNCL RSA benchmarks of various data-path widths and provided a 100% detection rate.
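The dual-rail encoding the attack exploits carries each logical bit on two wires. A minimal behavioural sketch of the code space (the Trojan designs and formal checks in the paper operate on real netlists; this only illustrates the encoding and its one illegal code point):

```python
# Dual-rail encoding as used by NCL/MTNCL, modelled as (rail1, rail0) pairs:
# (0, 0) is the NULL spacer between data wavefronts, (0, 1) carries DATA0,
# (1, 0) carries DATA1, and (1, 1) is an illegal code that should never
# appear on the wires; unused code points like this are one avenue a
# Trojan can abuse, which makes rail-level checking worthwhile.
NULL, DATA0, DATA1 = (0, 0), (0, 1), (1, 0)

def encode(bit):
    """Encode a logical bit onto its two rails."""
    return DATA1 if bit else DATA0

def decode(rails):
    """Decode a rail pair; None marks the NULL spacer."""
    if rails == DATA0:
        return 0
    if rails == DATA1:
        return 1
    if rails == NULL:
        return None
    raise ValueError("illegal dual-rail code (1, 1)")

word = [encode(b) for b in (1, 0, 1, 1)]
print([decode(r) for r in word])  # [1, 0, 1, 1]
```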

Citations: 1
TLP: Towards three-level loop parallelisation
IF 1.2 · CAS Region 4 (Computer Science) · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2022-08-09 · DOI: 10.1049/cdt2.12046
Shabnam Mahjoub, Mehdi Golsorkhtabaramiri, Seyed Sadegh Salehi Amiri

Because computer systems are now designed in multi-core and/or multi-processor form, the maximum capacity of the processors can be used to run an application in the least time through parallelisation. This is the responsibility of parallel compilers, which perform parallelisation in several steps, distributing iterations between different processors and executing them simultaneously to achieve a lower runtime. The present paper focuses on the uniformisation of three-level perfect nested loops as an important step in parallelisation and proposes a method called Towards Three-Level Loop Parallelisation (TLP), which combines a Frog Leaping Algorithm with fuzzy logic to achieve optimal results. Three-level nests are of particular interest because, in recent years, many algorithms have operated on volumetric data, that is, three-dimensional spaces. Implementation results for the TLP algorithm, compared with existing methods, yield a wide variety of optimal results at the desired times, with the minimum cone size resulting from the vectors. In addition, the maximum number of input dependence vectors is decomposed by this algorithm. These results can accelerate the generation of parallel code and facilitate its development for High-Performance Computing purposes.
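The basic compiler step the abstract refers to, distributing the iterations of a three-level perfect nest across processors, can be illustrated with a naive block partitioning. This is only a stand-in for intuition: TLP's actual contribution is choosing the decomposition via frog-leaping and fuzzy logic while respecting the dependence vectors, which this sketch ignores.

```python
def partition_iterations(n_i, n_j, n_k, n_procs):
    """Block-distribute the iteration space of a three-level perfect nest
    (i, j, k) across n_procs processors in contiguous chunks."""
    points = [(i, j, k) for i in range(n_i)
                        for j in range(n_j)
                        for k in range(n_k)]
    chunk = -(-len(points) // n_procs)  # ceiling division
    return [points[p * chunk:(p + 1) * chunk] for p in range(n_procs)]

blocks = partition_iterations(4, 4, 4, 8)
print(len(blocks), len(blocks[0]))  # 8 blocks of 8 iteration points each
```

A dependence vector such as (1, 0, 0) would forbid splitting along i without synchronisation, which is exactly why uniformising the dependence set comes before mapping.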

Citations: 1
Guest Editorial: Special issue on battery-free computing
IF 1.2 · CAS Region 4 (Computer Science) · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2022-06-09 · DOI: 10.1049/cdt2.12043
Geoff V. Merrett, Bernd-Christian Renner, Brandon Lucia
In order to realise the vision and scale of the Internet of Things (IoT), we cannot rely on mains electricity or batteries to power devices, due to environmental, maintenance, cost and physical-volume implications. Considerable research has been undertaken in energy harvesting, allowing systems to extract electrical energy from their surrounding environments. However, such energy is typically highly dynamic, both spatially and temporally. In recent years, there has been an increase in research around how computing can be performed effectively from energy-harvesting supplies, moving beyond the concepts of battery-powered and energy-neutral systems and thus enabling battery-free computing.

Challenges in battery-free computing are broad and wide-ranging, cutting across the spectrum of electronics and computer science: for example, circuits, algorithms, computer architecture, communication and networking, middleware, applications, deployments, and modelling and simulation tools.

This special issue explores the challenges, issues and opportunities in the research, design and engineering of energy-harvesting, energy-neutral and intermittent sensing systems. These are enabling technologies for future applications in smart energy, transportation, environmental monitoring and smart cities. Innovative solutions are needed to enable either uninterrupted or intermittent operation.

This special issue contains two papers on different aspects of battery-free computing, as described below.

Hanschke et al.'s article, 'EmRep: Energy Management Relying on State-of-Charge Extrema Prediction', considers energy management in energy-neutral systems, particularly those with small energy storage elements (e.g. a supercapacitor). They observe that existing energy-neutral management approaches tend to operate inefficiently when exposed to extremes in the harvesting environment, for example wasting harvested power in times of abundant energy because the energy storage device saturates. To resolve this, the authors present an approach that predicts extremes in the device's state of charge (SoC) as such conditions occur, and hence switches to a less conservative, more immediate policy for device activity (and hence consumption). This decouples the energy management of high-intake harvest periods from low-intake ones and ensures, by design, that saturation of the energy storage is reduced. The approach is thoroughly evaluated experimentally in combination with a variety of prediction algorithms, time resolutions and energy storage sizes. Promising results indicate the potential for a doubling in effective utility in systems with only small energy storage elements.

The second paper in the special issue, authored by Stricker et al., continues the theme of energy prediction by considering the impact of harvesting-source prediction errors on the system scheduler and hence on the system's performance. Their article, 'Robustness of Predictive Energy Harvesting Systems: Analysis and Adaptive Prediction Scaling', defines a new robustness metric to characterise the impact of prediction errors and demonstrates the concept using datasets from indoor and outdoor harvesting scenarios. The authors then propose an adaptive prediction scaling approach that learns from the local environment and system behaviour, demonstrating performance improvements of up to 13.8x in realistic environments.

We hope this special issue will inspire researchers from industry and academia to undertake further research in this challenging area.
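The EmRep idea summarised above, switch to a less conservative activity policy when the predicted state-of-charge maximum would saturate the storage, can be reduced to a one-decision sketch. The threshold logic and duty-cycle values here are illustrative assumptions, not EmRep's actual policy.

```python
def choose_duty_cycle(predicted_soc_max, capacity, base_duty, boost_duty):
    """If the predicted SoC maximum would saturate the storage, harvested
    energy would be wasted, so spend it on extra activity instead."""
    if predicted_soc_max >= capacity:  # storage predicted to saturate
        return boost_duty              # less conservative: run more often
    return base_duty                   # conservative default policy

# Abundant harvest predicted: boost activity instead of wasting energy.
print(choose_duty_cycle(1.2, 1.0, 0.05, 0.5))  # 0.5
```

The value of the full approach lies in predicting the SoC extrema accurately enough for this switch to fire at the right times.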
Citations: 0
ASATM: Automated security assistant of threat models in intelligent transportation systems
IF 1.2 · CAS Region 4 (Computer Science) · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2022-05-30 · DOI: 10.1049/cdt2.12045
Mohammad Ali Ramazanzadeh, Behnam Barzegar, Homayun Motameni

The evolution of technology has led to the appearance of smart cities. An essential element of such cities is smart mobility, which covers subjects related to Intelligent Transportation Systems (ITS). The problem is that ITS vulnerabilities may considerably harm the quality of life and safety of the people living in smart cities. In fact, software and hardware systems are increasingly exposed to security risks and threats. To reduce threats and secure software design, threat modelling has been proposed as a preventive measure in the software design phase. On the other hand, threat modelling is often criticised for being time consuming, complex, difficult and error prone. The approach proposed in this study, the Automated Security Assistant of Threat Models (ASATM), is an automated solution capable of achieving a high level of security assurance. By defining concepts and conceptual modelling, and by implementing automated security-assistant algorithms, ASATM introduces a new approach to identifying threats, extracting security requirements and designing secure software. The proposed approach provides a quantitative classification of security at three levels (insecure, secure and threat), twelve sub-levels (nominal scale and colour scale) and a five-layer depth (human understandability and conditional probability). To evaluate the effectiveness of the approach, an example with various security parameters and scenarios was tested, and the results confirmed the superiority of the proposed approach over the latest threat modelling approaches in terms of method, learning and model understanding.
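The three top-level security levels named in the abstract can be pictured as a mapping from a quantitative score to a label. The level names come from the abstract; the 0-to-1 score and the equal-width boundaries are assumptions made purely for this sketch, not ASATM's actual scale.

```python
def classify_security(score):
    """Map an assumed 0-1 security score to one of ASATM's three
    top-level classes (boundary placement is illustrative only)."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score < 1 / 3:
        return "threat"
    if score < 2 / 3:
        return "insecure"
    return "secure"

print([classify_security(s) for s in (0.1, 0.5, 0.9)])
```

ASATM further subdivides these into twelve sub-levels and adds a five-layer depth, which a real implementation would encode on top of a classifier like this.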

Citations: 2
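The three-level, twelve-sub-level classification described in the abstract can be pictured with a minimal sketch. The function name, the 1/3 and 2/3 thresholds, and the even sub-level spacing below are all illustrative assumptions; the paper's actual scoring rules are not reproduced in this listing.

```python
def classify_security(threat_probability: float) -> tuple:
    """Map a threat probability in [0, 1] to one of three coarse security
    levels and one of twelve evenly spaced sub-levels (1..12)."""
    if not 0.0 <= threat_probability <= 1.0:
        raise ValueError("probability must lie in [0, 1]")
    if threat_probability < 1 / 3:
        level = "secure"
    elif threat_probability < 2 / 3:
        level = "insecure"
    else:
        level = "threat"
    # Twelve sub-levels spread evenly over the whole [0, 1] range.
    sublevel = min(11, int(threat_probability * 12)) + 1
    return level, sublevel
```

For example, under these assumed thresholds `classify_security(0.7)` lands in the "threat" band at sub-level 9.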
Q-scheduler: A temperature and energy-aware deep Q-learning technique to schedule tasks in real-time multiprocessor embedded systems
IF 1.2 CAS Tier 4 (Computer Science) Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2022-05-26 DOI: 10.1049/cdt2.12044
Mahsa Mohammadi, Hakem Beitollahi

Reducing energy consumption under processors' temperature constraints has recently become a pressing issue in real-time multiprocessor systems on chips (MPSoCs). The high temperature of processors affects the power and reliability of the MPSoC. Low energy consumption is necessary for real-time embedded systems, as most of them are portable devices. Efficient task mapping on processors has a significant impact on reducing energy consumption and the thermal profile of processors. Several state-of-the-art techniques have recently been proposed for this issue. This paper proposes Q-scheduler, a novel technique based on the deep Q-learning technology, to dispatch tasks between processors in a real-time MPSoC. Thousands of simulated tasks train Q-scheduler offline to reduce the system's power consumption under temperature constraints of processors. The trained Q-scheduler dispatches real tasks in a real-time MPSoC online while also being trained regularly online. Q-scheduler dispatches multiple tasks in the system simultaneously with a single process; the effectiveness of this ability is significant, especially in a harmonic real-time system. Experimental results illustrate that Q-scheduler reduces energy consumption and temperature of processors on average by 15% and 10%, respectively, compared to previous state-of-the-art techniques.

{"title":"Q-scheduler: A temperature and energy-aware deep Q-learning technique to schedule tasks in real-time multiprocessor embedded systems","authors":"Mahsa Mohammadi,&nbsp;Hakem Beitollahi","doi":"10.1049/cdt2.12044","DOIUrl":"10.1049/cdt2.12044","url":null,"abstract":"<p>Reducing energy consumption under processors' temperature constraints has recently become a pressing issue in real-time multiprocessor systems on chips (MPSoCs). The high temperature of processors affects the power and reliability of the MPSoC. Low energy consumption is necessary for real-time embedded systems, as most of them are portable devices. Efficient task mapping on processors has a significant impact on reducing energy consumption and the thermal profile of processors. Several state-of-the-art techniques have recently been proposed for this issue. This paper proposes Q-scheduler, a novel technique based on the deep Q-learning technology, to dispatch tasks between processors in a real-time MPSoC. Thousands of simulated tasks train Q-scheduler offline to reduce the system's power consumption under temperature constraints of processors. The trained Q-scheduler dispatches real tasks in a real-time MPSoC online while also being trained regularly online. Q-scheduler dispatches multiple tasks in the system simultaneously with a single process; the effectiveness of this ability is significant, especially in a harmonic real-time system. 
Experimental results illustrate that Q-scheduler reduces energy consumption and temperature of processors on average by 15% and 10%, respectively, compared to previous state-of-the-art techniques.</p>","PeriodicalId":50383,"journal":{"name":"IET Computers and Digital Techniques","volume":"16 4","pages":"125-140"},"PeriodicalIF":1.2,"publicationDate":"2022-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cdt2.12044","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79367644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
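A minimal flavour of the idea: the sketch below substitutes tabular Q-learning for the paper's deep Q-network so that it stays dependency-free, and the thermal model, energy model, bin width, and learning constants are all illustrative assumptions. States are coarsely binned processor temperatures, actions choose which processor receives the next task, and the reward penalises energy use and over-temperature operation.

```python
import random

# Toy environment: two processors, tasks of varying load, temperatures that
# cool geometrically and heat with assigned load. All constants are assumptions.
N_PROCS = 2
TEMP_BINS = 4                      # 25-degree-wide bins over a 0..100 C range
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def bin_temps(temps):
    """Discretise per-processor temperatures into TEMP_BINS coarse bins."""
    return tuple(min(TEMP_BINS - 1, int(t // 25)) for t in temps)

def step(temps, action, load):
    """Assign a task of the given load to processor `action`.
    Returns (new_temps, reward); the reward penalises energy, which grows
    with the chosen processor's temperature, plus an over-temperature fine."""
    new = []
    for p in range(N_PROCS):
        heat = load * 10.0 if p == action else 0.0
        new.append(max(20.0, min(99.0, temps[p] * 0.9 + heat)))
    energy = load * (1.0 + temps[action] / 100.0)   # leakage rises with heat
    penalty = 5.0 if new[action] > 75.0 else 0.0    # soft thermal constraint
    return new, -(energy + penalty)

def train(episodes=200, seed=0):
    """Epsilon-greedy tabular Q-learning over the toy scheduler environment."""
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        temps = [20.0] * N_PROCS
        for _ in range(30):                          # thirty tasks per episode
            load = rng.choice([0.5, 1.0, 1.5])
            s = bin_temps(temps)
            qs = q.setdefault(s, [0.0] * N_PROCS)
            a = rng.randrange(N_PROCS) if rng.random() < EPS else qs.index(max(qs))
            temps, r = step(temps, a, load)
            nqs = q.setdefault(bin_temps(temps), [0.0] * N_PROCS)
            qs[a] += ALPHA * (r + GAMMA * max(nqs) - qs[a])
    return q
```

Placing a task on the cooler processor yields a less negative reward in this toy model, a preference the Q-table tends to learn over training.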
Robustness of predictive energy harvesting systems: Analysis and adaptive prediction scaling
IF 1.2 CAS Tier 4 (Computer Science) Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2022-05-11 DOI: 10.1049/cdt2.12042
Naomi Stricker, Reto Da Forno, Lothar Thiele

Internet of Things (IoT) systems can rely on energy harvesting to extend battery lifetimes or even render batteries obsolete. Such systems employ an energy scheduler to optimise their behaviour and thus performance by adapting the system's operation. Predictive models of harvesting sources, which are inherently non-deterministic and consequently challenging to predict, are often necessary for the scheduler to optimise performance. Because the inaccurate predictions are utilised by the scheduler, the predictive model's accuracy inevitably impacts the scheduler and system performance. This fact has largely been overlooked in the vast amount of available results on energy schedulers and predictors for harvesting-based systems. The authors systematically describe the effect prediction errors have on the scheduler and thus system performance by defining a novel robustness metric. To alleviate the severe impact prediction errors can have on the system performance, the authors propose an adaptive prediction scaling method that learns from the local environment and system behaviour. The authors demonstrate the concept of robustness with datasets from both outdoor and indoor scenarios. In addition, the authors highlight the improvement and overhead of the proposed adaptive prediction scaling method for both scenarios. It improves a non-robust system's performance by up to 13.8 times in a real-world setting.

{"title":"Robustness of predictive energy harvesting systems: Analysis and adaptive prediction scaling","authors":"Naomi Stricker,&nbsp;Reto Da Forno,&nbsp;Lothar Thiele","doi":"10.1049/cdt2.12042","DOIUrl":"10.1049/cdt2.12042","url":null,"abstract":"<p>Internet of Things (IoT) systems can rely on energy harvesting to extend battery lifetimes or even render batteries obsolete. Such systems employ an energy scheduler to optimise their behaviour and thus performance by adapting the system's operation. Predictive models of harvesting sources, which are inherently non-deterministic and consequently challenging to predict, are often necessary for the scheduler to optimise performance. Because the inaccurate predictions are utilised by the scheduler, the predictive model's accuracy inevitably impacts the scheduler and system performance. This fact has largely been overlooked in the vast amount of available results on energy schedulers and predictors for harvesting-based systems. The authors systematically describe the effect prediction errors have on the scheduler and thus system performance by defining a novel robustness metric. To alleviate the severe impact prediction errors can have on the system performance, the authors propose an adaptive prediction scaling method that learns from the local environment and system behaviour. The authors demonstrate the concept of robustness with datasets from both outdoor and indoor scenarios. In addition, the authors highlight the improvement and overhead of the proposed adaptive prediction scaling method for both scenarios. 
It improves a non-robust system's performance by up to 13.8 times in a real-world setting.</p>","PeriodicalId":50383,"journal":{"name":"IET Computers and Digital Techniques","volume":"16 4","pages":"106-124"},"PeriodicalIF":1.2,"publicationDate":"2022-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cdt2.12042","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83346208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
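The adaptive prediction scaling idea can be sketched as a baseline harvest predictor corrected by a factor learnt online from observed prediction errors. The EWMA baseline, the ratio-based update, and all constants are assumptions for illustration, not the authors' exact method.

```python
class ScaledPredictor:
    """Baseline energy-harvest predictor with an adaptive correction factor."""

    def __init__(self, alpha=0.5, beta=0.2):
        self.alpha = alpha    # EWMA weight of the most recent observation
        self.beta = beta      # learning rate of the adaptive scale factor
        self.baseline = None  # EWMA estimate of harvested energy
        self.scale = 1.0      # multiplicative correction learnt online

    def predict(self):
        """Predicted harvest for the next interval (0.0 before any data)."""
        return 0.0 if self.baseline is None else self.scale * self.baseline

    def observe(self, harvested):
        """Feed back the energy actually harvested in the last interval."""
        if self.baseline is None:
            self.baseline = harvested
            return
        predicted = self.predict()
        if predicted > 0.0:
            # Nudge the scale toward the observed/predicted ratio.
            self.scale += self.beta * (harvested / predicted - self.scale)
        self.baseline = (1 - self.alpha) * self.baseline + self.alpha * harvested
```

With a steady source the scale settles at 1.0; a source that is persistently underestimated pushes the scale above 1.0, which is the self-correcting behaviour the robustness analysis motivates.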
Synchronization in graph analysis algorithms on the Partially Ordered Event-Triggered Systems many-core architecture
IF 1.2 CAS Tier 4 (Computer Science) Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2022-04-03 DOI: 10.1049/cdt2.12041
Ashur Rafiev, Alex Yakovlev, Ghaith Tarawneh, Matthew F. Naylor, Simon W. Moore, David B. Thomas, Graeme M. Bragg, Mark L. Vousden, Andrew D. Brown

One of the key problems in designing and implementing graph analysis algorithms for distributed platforms is to find an optimal way of managing communication flows in the massively parallel processing network. Message-passing and global synchronization are powerful abstractions in this regard, especially when used in combination. This paper studies the use of a hardware-implemented refutable global barrier as a design optimization technique aimed at unifying these abstractions at the API level. The paper explores the trade-offs between the related overheads and performance factors on a message-passing prototype machine with 49,152 RISC-V threads distributed over 48 FPGAs (called the Partially Ordered Event-Triggered Systems platform). Our experiments show that some graph applications favour synchronized communication, but the effect is hard to predict in general because of the interplay between multiple hardware and software factors. A classifier model is therefore proposed and implemented to perform such a prediction based on the application graph topology parameters: graph diameter, degree of connectivity, and reconvergence metric. The presented experimental results demonstrate that the correct choice of communication mode, granted by the new model-driven approach, helps to achieve 3.22 times faster computation time on average compared to the baseline platform operation.

{"title":"Synchronization in graph analysis algorithms on the Partially Ordered Event-Triggered Systems many-core architecture","authors":"Ashur Rafiev,&nbsp;Alex Yakovlev,&nbsp;Ghaith Tarawneh,&nbsp;Matthew F. Naylor,&nbsp;Simon W. Moore,&nbsp;David B. Thomas,&nbsp;Graeme M. Bragg,&nbsp;Mark L. Vousden,&nbsp;Andrew D. Brown","doi":"10.1049/cdt2.12041","DOIUrl":"10.1049/cdt2.12041","url":null,"abstract":"<p>One of the key problems in designing and implementing graph analysis algorithms for distributed platforms is to find an optimal way of managing communication flows in the massively parallel processing network. Message-passing and global synchronization are powerful abstractions in this regard, especially when used in combination. This paper studies the use of a hardware-implemented refutable global barrier as a design optimization technique aimed at unifying these abstractions at the API level. The paper explores the trade-offs between the related overheads and performance factors on a message-passing prototype machine with 49,152 RISC-V threads distributed over 48 FPGAs (called the Partially Ordered Event-Triggered Systems platform). Our experiments show that some graph applications favour synchronized communication, but the effect is hard to predict in general because of the interplay between multiple hardware and software factors. A classifier model is therefore proposed and implemented to perform such a prediction based on the application graph topology parameters: graph diameter, degree of connectivity, and reconvergence metric. 
The presented experimental results demonstrate that the correct choice of communication mode, granted by the new model-driven approach, helps to achieve 3.22 times faster computation time on average compared to the baseline platform operation.</p>","PeriodicalId":50383,"journal":{"name":"IET Computers and Digital Techniques","volume":"16 2-3","pages":"71-88"},"PeriodicalIF":1.2,"publicationDate":"2022-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/cdt2.12041","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85073025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
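The synchronised communication mode can be mimicked in software with a global barrier separating the read and write phases of each superstep. The sketch below is not the POETS API; the graph encoding, the round-robin vertex partitioning, and the toy max-propagation update are illustrative assumptions standing in for a hardware refutable barrier.

```python
import threading

def max_propagate(adj, values, n_workers=2, supersteps=4):
    """Each vertex repeatedly replaces its value with the maximum over itself
    and its neighbours; with enough supersteps a connected component converges
    to its global maximum."""
    n = len(adj)
    barrier = threading.Barrier(n_workers)
    current = list(values)

    def worker(wid):
        # Round-robin partition: worker `wid` owns vertices wid, wid+n_workers, ...
        for _ in range(supersteps):
            # Read phase: every worker sees the same snapshot of `current`.
            nxt = {v: max([current[v]] + [current[u] for u in adj[v]])
                   for v in range(wid, n, n_workers)}
            barrier.wait()   # all reads finished before anyone writes
            for v, val in nxt.items():
                current[v] = val
            barrier.wait()   # all writes visible before the next superstep

    threads = [threading.Thread(target=worker, args=(w,)) for w in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return current
```

On a path graph 0-1-2-3 the maximum reaches the far end within three supersteps; the two barriers per superstep play the role the hardware barrier plays in separating communication phases.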
Journal: IET Computers and Digital Techniques