Electrochemical random-access memory: recent advances in materials, devices, and systems towards neuromorphic computing
Hyunjeong Kwak, Nayeon Kim, Seonuk Jeon, Seyoung Kim, Jiyong Woo
Nano Convergence, vol. 11, no. 1 (2024). Published 2024-02-28. DOI: 10.1186/s40580-024-00415-8. Open-access PDF: https://nanoconvergencejournal.springeropen.com/counter/pdf/10.1186/s40580-024-00415-8
Artificial neural networks (ANNs), inspired by the human brain's network of neurons and synapses, enable computing machines and systems to execute cognitive tasks, thus embodying artificial intelligence (AI). Since the performance of ANNs generally improves as the network size grows, and most of the computation time is spent on matrix operations, AI computation has been performed not only on general-purpose central processing units (CPUs) but also on architectures that facilitate parallel computation, such as graphics processing units (GPUs) and custom-designed application-specific integrated circuits (ASICs). Nevertheless, the substantial energy consumption stemming from frequent data transfers between processing units and memory has remained a persistent challenge. In response, a novel approach has emerged: an in-memory computing architecture harnessing analog memory elements, which promises a notable advance in energy efficiency. The core of this analog AI hardware accelerator lies in expansive arrays of non-volatile memory devices, known as resistive processing units (RPUs). These RPUs perform massively parallel matrix operations, leading to significant enhancements in both performance and energy efficiency. Electrochemical random-access memory (ECRAM), which leverages ion dynamics in secondary-ion battery materials, has emerged as a promising candidate for RPUs. ECRAM achieves over 1,000 memory states through precise control of ion movement, prompting early-stage research into material stacks, including mobile ion species and electrolyte materials. Crucially, the analog states in ECRAMs update symmetrically with pulse number (or voltage polarity), contributing to high network performance. Recent strides in device engineering of planar and three-dimensional structures, together with a growing understanding of ECRAM operation physics, mark significant progress over a short research period. This paper reviews ECRAM material advancements through literature surveys, offering a systematic discussion of engineering assessments for ion control and a physical understanding of array-level demonstrations. Finally, the review outlines future directions for improvement, co-optimization, and multidisciplinary collaboration across circuits, algorithms, and applications to develop energy-efficient, next-generation AI hardware systems.
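To make the two key ideas in the abstract concrete, the minimal Python sketch below illustrates (1) how a crossbar of analog RPU conductances performs a matrix-vector multiplication in a single parallel analog step (column currents are sums of voltage-conductance products, per Ohm's and Kirchhoff's laws), and (2) what a symmetric, pulse-count-programmed weight update looks like. The conductance range, step size `DELTA_G`, and helper `apply_pulses` are illustrative assumptions for this sketch, not values or code from the paper.

```python
import numpy as np

# Sketch of an analog crossbar: G holds the weight matrix as conductances,
# and applying row voltages v yields column currents i_out = v @ G in one
# parallel analog step (Ohm's law per cell, Kirchhoff's current law per column).
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # 4x3 conductances in siemens (assumed range)
v = np.array([0.1, 0.2, 0.0, 0.3])         # input voltages applied to the rows

i_out = v @ G                               # column currents = analog matrix-vector product (A)

# Symmetric, pulse-programmed update, as ECRAM aims to provide: every
# potentiating or depressing pulse shifts conductance by the same small step,
# so +n pulses followed by -n pulses returns the device to its starting state.
G_MIN, G_MAX = 1e-6, 1e-4                   # assumed programmable conductance window
DELTA_G = 1e-7                              # assumed conductance change per pulse

def apply_pulses(g, n_pulses):
    """Shift a conductance by n_pulses equal steps (positive = potentiate, negative = depress)."""
    return np.clip(g + n_pulses * DELTA_G, G_MIN, G_MAX)

g0 = G[0, 0]
g_up = apply_pulses(g0, +10)                # 10 potentiating pulses
g_back = apply_pulses(g_up, -10)            # 10 depressing pulses

print("column currents (A):", i_out)
print("symmetric update returns to start:", np.isclose(g0, g_back))
```

In a real array the readout takes one analog step regardless of matrix size, which is the source of the parallelism and energy savings described above; the fixed per-pulse step used here simply stands in for device-specific switching behavior.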
Journal Introduction:
Nano Convergence is an internationally recognized, peer-reviewed, and interdisciplinary journal designed to foster effective communication among scientists spanning diverse research areas closely aligned with nanoscience and nanotechnology. Dedicated to encouraging the convergence of technologies across the nano- to microscopic scale, the journal aims to unveil novel scientific domains and cultivate fresh research prospects.
Nano Convergence operates a single-blind peer-review system: reviewers are aware of the authors' names and affiliations, while the reviewers themselves remain anonymous in the feedback provided to authors.