Phase Composition and Phase Transformation of Additively Manufactured Nickel Alloy 718 AM Bench Artifacts
Pub Date: 2024-02-05 | DOI: 10.1007/s40192-023-00338-y
Fan Zhang, Aaron C. Johnston-Peck, Lyle E. Levine, Michael B. Katz, Kil-Won Moon, Maureen E. Williams, Sandra W. Young, Andrew J. Allen, Olaf Borkiewicz, Jan Ilavsky
Additive manufacturing (AM) technologies offer unprecedented design flexibility but are limited by a lack of understanding of the material microstructure formed under their extreme and transient processing conditions and its subsequent transformation during post-build processing. As part of the 2022 AM Bench Challenge, sponsored by the National Institute of Standards and Technology, this study focuses on the phase composition and phase evolution of AM nickel alloy 718, a nickel-based superalloy, to provide benchmark data essential for the validation of computational models for microstructural predictions. We employed high-energy synchrotron X-ray diffraction, in situ synchrotron X-ray scattering, as well as high-resolution transmission electron microscopy for our analyses. The study uncovers critical aspects of the microstructure in its as-built state, its transformation during homogenization, and its phase evolution during subsequent aging heat treatment. Specifically, we identified secondary phases, monitored the dissolution and coarsening of microstructural elements, and observed the formation and stability of γ’ and γ” phases. The results provide the rigorous benchmark data required to understand the atomic and microstructural transformations of AM nickel alloy 718, thereby enhancing the reliability and applicability of AM models for predicting phase evolution and mechanical properties.
Ability to Simulate Absorption and Melt Pool Dynamics for Laser Melting of Bare Aluminum Plate: Results and Insights from the 2022 Asynchronous AM-Bench Challenge
Pub Date: 2024-02-01 | DOI: 10.1007/s40192-023-00336-0
Brian J. Simonds, Jack Tanner, Alexandra Artusio-Glimpse, Niranjan Parab, Cang Zhao, Tao Sun, Paul A. Williams
The 2022 Asynchronous AM-Bench challenge was designed to test the ability of simulations to accurately predict laser power absorption as well as various melt pool behaviors (width, depth, and solidification) during laser melting of solid metal under stationary and scanned laser illumination. In this challenge, participants were asked to predict a series of experimental outcomes. Experimental data were obtained from a series of experiments performed at the Advanced Photon Source at Argonne National Laboratory in 2019. These experiments combined integrating sphere radiometry with high-speed X-ray imaging, allowing for the simultaneous recording of absolute laser power absorption and two-dimensional, projected images of the melt pool. All challenge problems were based on experiments using bare aluminum solid metal. Participants were provided with pertinent experimental information such as laser power, scan speed, laser spot size, and material composition. Additionally, participants were given absorptance and X-ray imaging data from stationary and scanned laser experiments on solid Ti–6Al–4V that could be used for testing their models before attempting the challenge problems. In total, this challenge received 56 submissions from eight different research groups for eight individual challenge problems. The data for this challenge, and associated information, are available for download from the NIST Public Data Repository. This paper summarizes the results from the 2022 Asynchronous AM-Bench challenge and discusses the lessons learned to help inform future challenges.
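The absolute absorption measurement in these experiments rests on a simple piece of bookkeeping: the integrating sphere collects the laser light reflected from the sample, so the absorbed fraction at each instant is one minus the ratio of reflected to incident power. The sketch below illustrates only that general principle on synthetic traces; the signal names, calibration, and pulse parameters are placeholder assumptions, not the authors' processing pipeline.

```python
import numpy as np

def absorptance_trace(p_incident_w, p_reflected_w):
    """Time-resolved absorptance A(t) = 1 - P_reflected(t) / P_incident(t).

    Inputs are calibrated power traces (W) on a common time base; samples where
    the laser is off are returned as NaN.
    """
    p_incident_w = np.asarray(p_incident_w, dtype=float)
    p_reflected_w = np.asarray(p_reflected_w, dtype=float)
    on = p_incident_w > 0.0                       # avoid dividing by zero when the laser is off
    a = np.full_like(p_incident_w, np.nan)
    a[on] = 1.0 - p_reflected_w[on] / p_incident_w[on]
    return np.clip(a, 0.0, 1.0)

# Synthetic example: a 1 ms, 500 W square pulse whose effective absorptance
# rises from ~0.3 (conduction mode) toward ~0.7 (keyhole formation).
t = np.linspace(0.0, 1.5e-3, 1500)
p_in = np.where((t > 0.1e-3) & (t < 1.1e-3), 500.0, 0.0)
true_a = 0.3 + 0.4 / (1.0 + np.exp(-(t - 0.4e-3) / 50e-6))
A = absorptance_trace(p_in, p_in * (1.0 - true_a))
print(f"mean absorptance during the pulse: {np.nanmean(A):.2f}")
```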
Heat Source Model Development for Thermal Analysis of Laser Powder Bed Fusion Using Bayesian Optimization and Machine Learning
Pub Date: 2024-01-19 | DOI: 10.1007/s40192-023-00334-2
Masahiro Kusano, Makoto Watanabe
To understand the correlation between process, structures, and properties in laser powder bed fusion (L-PBF), it is essential to use numerical analysis as well as experimental approaches. A finite element thermal analysis uses a moving heat source model, represented as a volumetric heat flux, to simulate heat input by the laser. Because of its computational efficiency, finite element thermal analysis is suitable for iterative procedures such as parametric studies and process optimization. However, to obtain valid simulated results, the heat source model must be calibrated by comparison with experimental results for each laser scanning condition. The need for re-calibration limits the applicable window of laser scanning conditions in the thermal analysis. Thus, the current study developed a novel heat source model that is valid and precise under any laser scanning condition within a wide process window. As a secondary objective of the development, we quantitatively evaluated and compared the four heat source models proposed to date and found the conical model to be the most suitable for L-PBF. A multiple linear regression analysis was then performed to represent the heat source model as a function of laser power and scanning velocity. Consequently, the thermal analysis with the novel model is valid and precise within a wide L-PBF process window.
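As a rough illustration of the two ingredients named above, a conical volumetric heat source and a multiple linear regression that maps laser power and scanning velocity to calibrated source parameters, the sketch below uses one common textbook form of the conical source and a placeholder calibration table; the exact functional form and coefficients in the paper may differ.

```python
import numpy as np

def conical_heat_flux(x, y, z, power_w, eta, r_top, r_bot, depth):
    """Volumetric heat flux q(x, y, z) [W/m^3] of a conical heat source.

    The cone has radius r_top at the surface (z = 0), tapering linearly to r_bot
    at z = -depth, with a Gaussian radial profile. The prefactor normalizes the
    volume integral (Gaussian extending radially to infinity) to eta * power_w.
    """
    z_frac = np.clip(-z / depth, 0.0, 1.0)
    r_c = r_top - (r_top - r_bot) * z_frac          # cone radius at this depth
    inside = (z <= 0.0) & (z >= -depth)
    norm = 9.0 * eta * power_w / (np.pi * depth * (r_top**2 + r_top * r_bot + r_bot**2))
    return np.where(inside, norm * np.exp(-3.0 * (x**2 + y**2) / r_c**2), 0.0)

# Placeholder calibration table: each row is one (power, speed) condition with the
# source depth and top radius obtained by fitting the thermal model to experiment.
calibrated = np.array([
    # power [W], speed [m/s], depth [m], r_top [m]
    [100.0, 0.6,  60e-6, 45e-6],
    [200.0, 0.6, 110e-6, 55e-6],
    [200.0, 1.0,  85e-6, 50e-6],
    [300.0, 1.0, 130e-6, 60e-6],
])
X = np.c_[np.ones(len(calibrated)), calibrated[:, 0], calibrated[:, 1]]
coef_depth, *_ = np.linalg.lstsq(X, calibrated[:, 2], rcond=None)
coef_rtop, *_ = np.linalg.lstsq(X, calibrated[:, 3], rcond=None)

def predict_source_parameters(power_w, speed_m_s):
    """Multiple linear regression: source parameters for an unseen (power, speed) pair."""
    x = np.array([1.0, power_w, speed_m_s])
    return float(x @ coef_depth), float(x @ coef_rtop)

d, r = predict_source_parameters(250.0, 0.8)
q0 = float(conical_heat_flux(0.0, 0.0, 0.0, 250.0, 0.4, r, 0.3 * r, d))
print(f"predicted depth = {d*1e6:.0f} um, top radius = {r*1e6:.0f} um, peak flux = {q0:.3g} W/m^3")
```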
Enhancing Reproducibility in Precipitate Analysis: A FAIR Approach with Automated Dark-Field Transmission Electron Microscope Image Processing
Pub Date: 2024-01-18 | DOI: 10.1007/s40192-023-00331-5
High-strength aluminum alloys used in aerospace and automotive applications obtain their strength through precipitation hardening. Achieving the desired mechanical properties requires precise control over the nanometer-sized precipitates. However, the microstructure of these alloys changes over time due to aging, leading to a deterioration in strength. Typically, the size, number, and distribution of precipitates for a quantitative assessment of microstructural changes are determined by manual analysis, which is subjective and time-consuming. In our work, we introduce a progressive and automatable approach that enables a more efficient, objective, and reproducible analysis of precipitates. The method involves several sequential steps using an image repository containing dark-field transmission electron microscopy (DF-TEM) images depicting various aging states of an aluminum alloy. During the process, precipitation contours are generated and quantitatively evaluated, and the results are comprehensibly transferred into semantic data structures. The use and deployment of Jupyter Notebooks, along with the beneficial implementation of Semantic Web technologies, significantly enhances the reproducibility and comparability of the findings. This work serves as an exemplar of FAIR image and research data management.
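A minimal sketch of this kind of automated precipitate quantification is given below, assuming a scikit-image workflow: threshold the bright precipitates in a dark-field image, extract the labeled regions, and tabulate their sizes in a structure ready for semantic serialization. The function and parameter names are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from skimage import filters, measure, morphology

def analyze_precipitates(image, pixel_size_nm=1.0, min_area_px=20):
    """Segment bright precipitates in a dark-field TEM image and report size statistics.

    Returns a list of dicts (one per precipitate) with area in nm^2 and equivalent
    diameter in nm, ready to be serialized into a semantic data structure.
    """
    threshold = filters.threshold_otsu(image)        # global threshold; precipitates appear bright in DF-TEM
    mask = morphology.remove_small_objects(image > threshold, min_size=min_area_px)
    labels = measure.label(mask)
    results = []
    for region in measure.regionprops(labels):
        results.append({
            "label": int(region.label),
            "area_nm2": float(region.area) * pixel_size_nm**2,
            "equivalent_diameter_nm": float(region.equivalent_diameter) * pixel_size_nm,
        })
    return results

# Synthetic demo image: a few bright blobs on a dark, noisy background.
rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.02, (256, 256))
yy, xx = np.mgrid[:256, :256]
for cy, cx, r in [(60, 60, 8), (150, 100, 12), (200, 200, 6)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < r**2] += 0.8
stats = analyze_precipitates(img, pixel_size_nm=0.5)
print(len(stats), "precipitates;",
      f"mean diameter {np.mean([s['equivalent_diameter_nm'] for s in stats]):.1f} nm")
```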
Reconstructing Microstructures From Statistical Descriptors Using Neural Cellular Automata
Pub Date: 2024-01-18 | DOI: 10.1007/s40192-023-00335-1
Paul Seibert, Alexander Raßloff, Yichi Zhang, Karl Kalina, Paul Reck, Daniel Peterseim, Markus Kästner
The problem of generating microstructures of complex materials in silico has been approached from various directions including simulation, Markov, deep learning and descriptor-based approaches. This work presents a hybrid method that is inspired by all four categories and has interesting scalability properties. A neural cellular automaton is trained to evolve microstructures based on local information. Unlike most machine learning-based approaches, it does not directly require a data set of reference micrographs, but is trained from statistical microstructure descriptors that can stem from a single reference. This means that the training cost scales only with the complexity of the structure and associated descriptors. Since the size of the reconstructed structures can be set during inference, even extremely large structures can be efficiently generated. Similarly, the method is very efficient if many structures are to be reconstructed from the same descriptor for statistical evaluations. The method is formulated and discussed in detail by means of various numerical experiments, demonstrating its utility and scalability.
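A minimal sketch of the idea, assuming PyTorch, is shown below: a small convolutional update rule is applied iteratively to a periodic grid and trained so that the evolved structure reproduces descriptors computed from a single reference. The full two-point autocorrelation plus volume fraction serve here as stand-in descriptors; the architecture and descriptor set used in the paper differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MicrostructureNCA(nn.Module):
    """Minimal neural cellular automaton: each cell updates from its 3x3 neighborhood."""
    def __init__(self, channels=8, hidden=32):
        super().__init__()
        self.perceive = nn.Conv2d(channels, hidden, 3, padding=1, padding_mode="circular")
        self.update = nn.Conv2d(hidden, channels, 1)
        nn.init.zeros_(self.update.weight)    # start as the identity dynamics
        nn.init.zeros_(self.update.bias)

    def forward(self, state, steps=20):
        for _ in range(steps):
            state = state + self.update(F.relu(self.perceive(state)))
        return state

def two_point_autocorrelation(phase):
    """Periodic two-point autocorrelation of a 2D phase field via FFT
    (a simple stand-in for radially averaged statistical descriptors)."""
    f = torch.fft.fft2(phase)
    return (torch.fft.ifft2(f * torch.conj(f)).real / phase.numel()).flatten()

# Descriptors from a single synthetic "reference" structure (in practice these
# would be computed from one reference micrograph).
torch.manual_seed(0)
ref = (torch.rand(1, 1, 64, 64) < 0.3).float()
ref = F.avg_pool2d(ref, 5, stride=1, padding=2).round()[0, 0]   # blobby two-phase field
target_corr, target_vf = two_point_autocorrelation(ref), ref.mean()

model = MicrostructureNCA()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for it in range(200):
    state = model(torch.rand(1, 8, 64, 64), steps=20)   # grid size is free at inference time
    phase = torch.sigmoid(state[:, 0])                  # channel 0 is the phase indicator
    loss = F.mse_loss(two_point_autocorrelation(phase[0]), target_corr) \
         + (phase.mean() - target_vf) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final descriptor loss: {loss.item():.4g}")
```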
PRISMS-Indentation: Multi-scale Elasto-Plastic Virtual Indentation Module
Pub Date: 2024-01-17 | DOI: 10.1007/s40192-023-00332-4
Indentation testing has played a major role in many materials design processes as a convenient and relatively cheap experiment. However, extracting the data from indentation tests requires complex post-processing or an integrated simulation and experiment framework. Accordingly, the simulation of indentation has become a post-processing routine for indentation tests. A highly efficient, computationally scalable, open-source platform for indentation simulation therefore provides invaluable machinery for the materials design process. An open-source PRISMS-Indentation module is presented here as a multi-scale elasto-plastic virtual indentation framework. The module is implemented as part of the PRISMS-Plasticity software, which covers the length scales of macroscopic plasticity and crystal plasticity. The contact problem is handled using a primal–dual active set method. The framework is first tested against the analytical solution of Hertzian contact theory using an isotropic elasticity model. The robustness of the framework is then investigated in simulations of indentation of annealed Cu microstructures. Unstructured meshes with hexahedral elements and variable mesh density are used to demonstrate the potential for speedup in indentation simulations.
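For reference, the Hertzian verification case mentioned above has a simple closed form for a spherical indenter on an isotropic elastic half-space. The helper below evaluates that textbook solution; the material values in the example are illustrative assumptions, not the parameters used in the paper.

```python
import math

def hertz_sphere_on_half_space(delta_m, radius_m, youngs_pa, poisson,
                               indenter_youngs_pa=None, indenter_poisson=0.0):
    """Closed-form Hertzian solution for a spherical indenter on an elastic half-space.

    Returns (load [N], contact radius [m], max contact pressure [Pa]) at indentation
    depth delta_m. If indenter_youngs_pa is None, the indenter is treated as rigid.
    """
    e_star_inv = (1.0 - poisson**2) / youngs_pa
    if indenter_youngs_pa is not None:
        e_star_inv += (1.0 - indenter_poisson**2) / indenter_youngs_pa
    e_star = 1.0 / e_star_inv
    load = (4.0 / 3.0) * e_star * math.sqrt(radius_m) * delta_m**1.5   # P = (4/3) E* R^0.5 d^1.5
    a = math.sqrt(radius_m * delta_m)                                  # contact radius
    p_max = 3.0 * load / (2.0 * math.pi * a**2)                        # peak contact pressure
    return load, a, p_max

# Example: 10-um-radius rigid sphere pressed 10 nm into an isotropic elastic solid
# with Cu-like constants (E ~ 110 GPa, nu ~ 0.34); purely elastic, for verification only.
P, a, p0 = hertz_sphere_on_half_space(10e-9, 10e-6, 110e9, 0.34)
print(f"P = {P*1e3:.3f} mN, a = {a*1e6:.2f} um, p_max = {p0/1e9:.2f} GPa")
```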
Phase Identification in Synchrotron X-ray Diffraction Patterns of Ti–6Al–4V Using Computer Vision and Deep Learning
Pub Date: 2024-01-16 | DOI: 10.1007/s40192-023-00328-0
Weiqi Yue, Pawan K. Tripathi, Gabriel Ponon, Zhuldyz Ualikhankyzy, Donald W. Brown, Bjorn Clausen, Maria Strantza, Darren C. Pagan, Matthew A. Willard, Frank Ernst, Erman Ayday, Vipin Chaudhary, Roger H. French
X-ray diffraction patterns contain information about the atomistic structure and microstructure (defect population) of materials, but extracting detailed information from diffraction patterns is complex, demanding, and reliant on prior knowledge. We hypothesize that deep-learning techniques can help to perform an effective and accurate analysis with high throughput rates. To demonstrate this concept, we applied a novel deep learning framework to determine the evolution of the β-phase volume fraction in a Ti–6Al–4V alloy during heat treatment from video sequences of 2D diffraction patterns recorded in transmission and with highly monochromatic radiation in a synchrotron beamline. In particular, we studied the impact of network design on prediction reliability and computational performance. Networks of different architectures were trained using 3008 experimental 2D patterns. A well-tuned model was found to reproduce the phase fractions of another experimental data set, consisting of 1100 diffraction patterns, with a mean-square error as small as 2.6 × 10⁻⁴. The average prediction error of the β-phase volume fraction was within 1.6 × 10⁻² (in each diffraction pattern) of the values obtained by conventional methods. Our work demonstrates that convolutional neural networks can evaluate high-energy X-ray diffraction patterns with a remarkable level of reliability. Furthermore, it demonstrates the significance of network design for the reliability of predictions and computational performance. The most complex models do not necessarily result in the highest accuracy and may even fail to learn from the data.
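As a schematic of the regression task described here (not one of the architectures actually benchmarked), a small convolutional model that maps a 2D diffraction pattern to a scalar β-phase fraction and is trained with a mean-square-error loss might look like the following PyTorch sketch on stand-in data:

```python
import torch
import torch.nn as nn

class PhaseFractionCNN(nn.Module):
    """Small CNN regressor mapping a single-channel 2D diffraction pattern
    to the beta-phase volume fraction (a scalar in [0, 1])."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.features(x)).squeeze(1)

# Training loop on synthetic stand-in data; real inputs would be the 2D synchrotron
# patterns, with labels from a conventional (e.g., Rietveld-type) analysis.
torch.manual_seed(0)
images = torch.rand(256, 1, 128, 128)
labels = torch.rand(256)
model = PhaseFractionCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):
    for i in range(0, len(images), 32):
        pred = model(images[i:i + 32])
        loss = loss_fn(pred, labels[i:i + 32])
        opt.zero_grad(); loss.backward(); opt.step()
print(f"final MSE on the stand-in data: {loss.item():.4f}")
```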
AM Bench 2022 Macroscale Tensile Challenge at Different Orientations (CHAL-AMB2022-04-MaTTO) and Summary of Predictions
Pub Date: 2024-01-16 | DOI: 10.1007/s40192-023-00333-3
Newell Moser, Jake Benzing, Orion L. Kafka, Jordan Weaver, Nicholas Derimow, Ross Rentz, Nikolas Hrabe
The additive manufacturing benchmarking challenge described in this work was aimed at the prediction of average stress–strain properties for tensile specimens that were excised from blocks of non-heat-treated IN625 manufactured by laser powder bed fusion. Two different laser scan strategies were considered: an X-only raster and an XY raster, which involved a 90° rotation in the scan direction between subsequent layers. To measure anisotropy, multiple tensile orientations with respect to the build direction were investigated (e.g., parallel, perpendicular, and intervals in between). Benchmark participants were provided grain structure information via electron backscatter diffraction measurements, as well as the stress–strain response for tensile specimens manufactured parallel to the build direction and produced by the XY scan strategy. Participants were then asked to predict tensile properties, such as the ultimate tensile strength, for the remaining specimens and orientations. Interestingly, the measured mechanical properties did not vary linearly as a function of tensile orientation. Moreover, specimens manufactured with the XY scan strategy exhibited greater yield strength than those corresponding to the X-only scan strategy, regardless of orientation. The benchmark data have been made publicly available for anyone who is interested [1]. For the modeling aspect of the challenge, five teams participated in this benchmark. While most of the models incorporated a crystal plasticity framework, one team chose a more semiempirical approach, to great success. However, no team excelled at all the predictions, and all teams were seemingly challenged by the predictions associated with the X-only scan strategy.
Beyond Combinatorial Materials Science: The 100 Prisoners Problem
Pub Date: 2024-01-08 | DOI: 10.1007/s40192-023-00330-6
Advancements in high-throughput data generation and physics-informed artificial intelligence and machine-learning algorithms are rapidly challenging the status quo for how materials data is collected, analyzed, and communicated to the world. Machine-learning algorithms can be executed in just a few lines of code by researchers with minimal data science expertise. This perspective addresses the reality that the ecosystems constructed to nurture new materials discovery and development are not yet well equipped to take advantage of the radically more powerful and accessible computational and algorithmic tools that have the immediate potential to enhance the pace of scientific advancement in this field. A novel architecture for managing materials data is proposed and discussed from the standpoint of how historical and emerging subfields of materials science could have significantly improved, or might still significantly improve, the impact of materials discoveries on the many human societal needs for new materials.
Automated Grain Boundary (GB) Segmentation and Microstructural Analysis in 347H Stainless Steel Using Deep Learning and Multimodal Microscopy
Pub Date: 2024-01-08 | DOI: 10.1007/s40192-023-00305-7
Shoieb Ahmed Chowdhury, M. F. N. Taufique, Jing Wang, Marissa Masden, Madison Wenzlick, Ram Devanathan, Alan L. Schemer-Kohrn, Keerti S. Kappagantula
Austenitic 347H stainless steel offers superior mechanical properties and corrosion resistance required for extreme operating conditions such as high temperature. The change in microstructure due to composition and process variations is expected to impact material properties. Identifying microstructural features such as grain boundaries thus becomes an important task in the process-microstructure-properties loop. Applying convolutional neural network (CNN)-based deep learning models is a powerful technique to detect features from material micrographs in an automated manner. In contrast to microstructural classification, supervised CNN models for segmentation tasks require pixel-wise annotation labels. However, manual labeling of the images for the segmentation task poses a major bottleneck for generating training data and labels in a reliable and reproducible way within a reasonable timeframe. Microstructural characterization especially needs to be expedited for faster material discovery by changing alloy compositions. In this study, we attempt to overcome such limitations by utilizing multimodal microscopy to generate labels directly instead of manual labeling. We combine scanning electron microscopy images of 347H stainless steel as training data and electron backscatter diffraction micrographs as pixel-wise labels for grain boundary detection as a semantic segmentation task. The viability of our method is evaluated by considering a set of deep CNN architectures. We demonstrate that despite producing instrumentation drift during data collection between two modes of microscopy, this method performs comparably to similar segmentation tasks that used manual labeling. Additionally, we find that naïve pixel-wise segmentation results in small gaps and missing boundaries in the predicted grain boundary map. By incorporating topological information during model training, the connectivity of the grain boundary network and segmentation performance is improved. Finally, our approach is validated by accurate computation on downstream tasks of predicting the underlying grain morphology distributions which are the ultimate quantities of interest for microstructural characterization.
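The distinctive step here is generating pixel-wise labels from EBSD rather than by hand. A minimal sketch of that idea, ignoring the image registration and misorientation analysis a real pipeline would need, is to mark any pixel whose EBSD grain ID differs from a neighbor's as a grain-boundary label for the paired SEM image:

```python
import numpy as np

def boundary_labels_from_grain_ids(grain_ids, dilate_px=1):
    """Derive a pixel-wise grain boundary mask from an EBSD grain-ID map.

    A pixel is labeled as boundary if its grain ID differs from the pixel to its
    right or below; the mask can optionally be thickened to tolerate small
    registration errors between the EBSD map and the SEM image it will label.
    """
    gb = np.zeros(grain_ids.shape, dtype=bool)
    gb[:, :-1] |= grain_ids[:, :-1] != grain_ids[:, 1:]   # boundaries between horizontal neighbors
    gb[:-1, :] |= grain_ids[:-1, :] != grain_ids[1:, :]   # boundaries between vertical neighbors
    for _ in range(dilate_px):                            # simple cross-shaped binary dilation
        padded = np.pad(gb, 1)
        gb = padded[1:-1, 1:-1] | padded[:-2, 1:-1] | padded[2:, 1:-1] \
             | padded[1:-1, :-2] | padded[1:-1, 2:]
    return gb

# Toy grain-ID map: four grains occupying the quadrants of an 8x8 field.
ids = np.zeros((8, 8), dtype=int)
ids[:4, 4:] = 1
ids[4:, :4] = 2
ids[4:, 4:] = 3
mask = boundary_labels_from_grain_ids(ids, dilate_px=0)
print(mask.astype(int))
```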