The number of studies applying machine learning (ML) to materials science has been growing at a rate of approximately 1.67 times per year over the past decade. In this review, I examine this growth in various contexts. First, I present an analysis of the tools (software, databases, materials science methods, and ML methods) most commonly used in papers that apply ML to materials science. The analysis demonstrates that, despite the growth of deep learning techniques, classical machine learning remains dominant overall. It also demonstrates how new research can effectively build upon past research, particularly in the domain of ML models trained on density functional theory calculation data. Next, I present the progression of best scores over time on the Matbench materials science benchmark for formation enthalpy prediction. In particular, a dramatic sevenfold reduction in error is obtained when progressing from feature-based methods that use conventional ML (random forest, support vector regression, etc.) to graph neural network techniques. Finally, I provide views on future challenges and opportunities, focusing on data size and complexity, extrapolation, interpretation, access, and relevance.
The ever-expanding capabilities of machine learning are powered by the exponentially growing complexity of deep neural network (DNN) models, requiring more energy- and chip-area-efficient hardware to carry out increasingly computationally expensive model-inference and training tasks. Electrochemical random-access memories (ECRAMs) are developed specifically to implement efficient analog in-memory computing for these data-intensive workloads, showing critical advantages over competing memory technologies that were mostly developed originally for digital electronics. ECRAMs possess the distinctive capability to switch among a very large number of memristive states with a high level of symmetry, small cycle-to-cycle variability, and low energy consumption; they simultaneously exhibit good endurance, long data retention, switching speeds as fast as nanoseconds, and verified scalability down to the sub-50 nm regime, and therefore hold great promise for realizing deep-learning accelerators when heterogeneously integrated with silicon-based peripheral circuits. In this review, we first examine the challenges in constructing in-memory-computing accelerators and the unique advantages of ECRAMs. We then critically assess the various ionic species, channel materials, and solid-state electrolytes employed in ECRAMs, whose different memristive modulation and ionic transport mechanisms influence device programming characteristics and performance metrics. Furthermore, ECRAM device engineering and integration schemes are discussed in the context of their implementation in high-density pseudo-crossbar array microarchitectures for performing DNN inference and training with high parallelism. Finally, we offer our insights regarding the major remaining obstacles and emerging opportunities in harnessing ECRAMs to realize deep-learning accelerators through material-device-circuit-architecture-algorithm co-design.
The application of electric current to metallic materials alters their microstructures and mechanical properties. The improved formability and accelerated microstructural evolution induced in materials by the application of electric current are referred to as electric current-induced phenomena. This review covers extensive experimental and computational studies on the deformation behavior and microstructural evolution of metallic materials, the underlying mechanisms, and practical applications in industry. We systematically introduce various electric current-induced effects across different materials and electric conditions. The discussion covers the mechanisms underlying these effects, emphasizing both the thermal and athermal effects of electric current, supported by experimental evidence, physical principles, atomic-scale simulations, and numerical methods. Furthermore, we explore the applications of electric current-induced phenomena in material processing techniques, including electrically assisted forming, treatment, joining, and machining. This review aims to deepen the understanding of how electric currents affect metallic materials and to inspire further development of advanced fabrication and processing technologies that are time- and energy-efficient.
New materials are a fundamental component of most major advancements in human history. The pivotal role materials play in the development of next-generation technologies has spurred campaigns such as the Materials Genome Initiative (MGI), with the goal of reducing the time and cost to discover, characterize, and deploy advanced materials. As goals of the MGI have been met and new capabilities have emerged, a contemporary vision has taken shape within the scientific community whereby the exploration of materials space is dramatically accelerated by artificial intelligence agents capable of performing research independently of humans, achieving a paradigm shift in the field. As this idea comes to fruition and new materials are computationally evaluated and synthesized more rapidly, nearly on demand, the rate at which each candidate material's properties can be completely characterized and understood within the context of all other potential solutions will become the next bottleneck in a materials design campaign. This work provides an overview of the technical and conceptual components of materials characterization discussed during a workshop dedicated to challenging the way materials research is conceived and performed within the emergent field of autonomous materials research and design (AMRAD). Furthermore, general considerations for developing autonomous characterization are presented, along with related works and a discussion of their progress and shortcomings toward the AMRAD vision.