Automated Defect Recognition for Additive Manufactured Parts Using Machine Perception and Visual Saliency

IF 2.3 · Zone 4, Engineering & Technology · Q3 ENGINEERING, MANUFACTURING · 3D Printing and Additive Manufacturing · Pub Date: 2023-06-01 · Epub Date: 2023-06-08 · DOI: 10.1089/3dp.2021.0224
Jan Petrich, Edward W Reutzel
{"title":"利用机器感知和视觉显著性自动识别增材制造部件的缺陷。","authors":"Jan Petrich, Edward W Reutzel","doi":"10.1089/3dp.2021.0224","DOIUrl":null,"url":null,"abstract":"<p><p>Metal additive manufacturing (AM) is known to produce internal defects that can impact performance. As the technology becomes more mainstream, there is a growing need to establish nondestructive inspection technologies that can assess and quantify build quality with high confidence. This article presents a complete, three-dimensional (3D) solution for automated defect recognition in AM parts using X-ray computed tomography (CT) scans. The algorithm uses a machine perception framework to automatically separate visually salient regions, that is, anomalous voxels, from the CT background. Compared with supervised approaches, the proposed concept relies solely on visual cues in 3D similar to those used by human operators in two-dimensional (2D) assuming no <i>a priori</i> information about defect appearance, size, and/or shape. To ingest any arbitrary part geometry, a binary mask is generated using statistical measures that separate lighter, material voxels from darker, background voxels. Therefore, no additional part or scan information, such as CAD files, STL models, or laser scan vector data, is needed. Visual saliency is established using multiscale, symmetric, and separable 3D convolution kernels. Separability of the convolution kernels is paramount when processing CT scans with potentially billions of voxels because it allows for parallel processing and thus faster execution of the convolution operation in single dimensions. Based on the CT scan resolution, kernel sizes may be adjusted to identify defects of different sizes. All adjacent anomalous voxels are subsequently merged to form defect clusters, which in turn reveals additional information regarding defect size, morphology, and orientation to the user, information that may be linked to mechanical properties, such as fatigue response. The algorithm was implemented in MATLAB™ using hardware acceleration, that is, graphics processing unit support, and tested on CT scans of AM components available at the Center for Innovative Materials Processing through Direct Digital Deposition (CIMP-3D) at Penn State's Applied Research Laboratory. Initial results show adequate processing times of just a few minutes and very low false-positive rates, especially when addressing highly salient and larger defects. All developed analytic tools can be simplified to accommodate 2D images.</p>","PeriodicalId":54341,"journal":{"name":"3D Printing and Additive Manufacturing","volume":"10 3","pages":"406-419"},"PeriodicalIF":2.3000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280214/pdf/","citationCount":"0","resultStr":"{\"title\":\"Automated Defect Recognition for Additive Manufactured Parts Using Machine Perception and Visual Saliency.\",\"authors\":\"Jan Petrich, Edward W Reutzel\",\"doi\":\"10.1089/3dp.2021.0224\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Metal additive manufacturing (AM) is known to produce internal defects that can impact performance. As the technology becomes more mainstream, there is a growing need to establish nondestructive inspection technologies that can assess and quantify build quality with high confidence. 
This article presents a complete, three-dimensional (3D) solution for automated defect recognition in AM parts using X-ray computed tomography (CT) scans. The algorithm uses a machine perception framework to automatically separate visually salient regions, that is, anomalous voxels, from the CT background. Compared with supervised approaches, the proposed concept relies solely on visual cues in 3D similar to those used by human operators in two-dimensional (2D) assuming no <i>a priori</i> information about defect appearance, size, and/or shape. To ingest any arbitrary part geometry, a binary mask is generated using statistical measures that separate lighter, material voxels from darker, background voxels. Therefore, no additional part or scan information, such as CAD files, STL models, or laser scan vector data, is needed. Visual saliency is established using multiscale, symmetric, and separable 3D convolution kernels. Separability of the convolution kernels is paramount when processing CT scans with potentially billions of voxels because it allows for parallel processing and thus faster execution of the convolution operation in single dimensions. Based on the CT scan resolution, kernel sizes may be adjusted to identify defects of different sizes. All adjacent anomalous voxels are subsequently merged to form defect clusters, which in turn reveals additional information regarding defect size, morphology, and orientation to the user, information that may be linked to mechanical properties, such as fatigue response. The algorithm was implemented in MATLAB™ using hardware acceleration, that is, graphics processing unit support, and tested on CT scans of AM components available at the Center for Innovative Materials Processing through Direct Digital Deposition (CIMP-3D) at Penn State's Applied Research Laboratory. Initial results show adequate processing times of just a few minutes and very low false-positive rates, especially when addressing highly salient and larger defects. All developed analytic tools can be simplified to accommodate 2D images.</p>\",\"PeriodicalId\":54341,\"journal\":{\"name\":\"3D Printing and Additive Manufacturing\",\"volume\":\"10 3\",\"pages\":\"406-419\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2023-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280214/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"3D Printing and Additive Manufacturing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1089/3dp.2021.0224\",\"RegionNum\":4,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/6/8 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, MANUFACTURING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"3D Printing and Additive Manufacturing","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1089/3dp.2021.0224","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/6/8 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"ENGINEERING, MANUFACTURING","Score":null,"Total":0}
Citations: 0

Abstract

Metal additive manufacturing (AM) is known to produce internal defects that can impact performance. As the technology becomes more mainstream, there is a growing need to establish nondestructive inspection technologies that can assess and quantify build quality with high confidence. This article presents a complete, three-dimensional (3D) solution for automated defect recognition in AM parts using X-ray computed tomography (CT) scans. The algorithm uses a machine perception framework to automatically separate visually salient regions, that is, anomalous voxels, from the CT background. Compared with supervised approaches, the proposed concept relies solely on visual cues in 3D, similar to those used by human operators in two dimensions (2D), assuming no a priori information about defect appearance, size, and/or shape. To ingest any arbitrary part geometry, a binary mask is generated using statistical measures that separate lighter, material voxels from darker, background voxels. Therefore, no additional part or scan information, such as CAD files, STL models, or laser scan vector data, is needed. Visual saliency is established using multiscale, symmetric, and separable 3D convolution kernels. Separability of the convolution kernels is paramount when processing CT scans with potentially billions of voxels because it allows for parallel processing and thus faster execution of the convolution operation in single dimensions. Based on the CT scan resolution, kernel sizes may be adjusted to identify defects of different sizes. All adjacent anomalous voxels are subsequently merged to form defect clusters, which in turn reveals additional information regarding defect size, morphology, and orientation to the user, information that may be linked to mechanical properties, such as fatigue response. The algorithm was implemented in MATLAB™ using hardware acceleration, that is, graphics processing unit support, and tested on CT scans of AM components available at the Center for Innovative Materials Processing through Direct Digital Deposition (CIMP-3D) at Penn State's Applied Research Laboratory. Initial results show adequate processing times of just a few minutes and very low false-positive rates, especially when addressing highly salient and larger defects. All developed analytic tools can be simplified to accommodate 2D images.
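
To make the processing steps above more concrete, the sketch below re-creates the pipeline in Python with NumPy/SciPy rather than the authors' MATLAB implementation: a statistical threshold builds the material mask, separable multiscale Gaussian smoothing provides a center-surround saliency measure, and connected-component labeling merges adjacent anomalous voxels into defect clusters. All function names, scales, and thresholds (material_mask, saliency_map, z_thresh, and the assumption that defects appear darker than their surroundings) are illustrative choices, not details taken from the paper.

```python
# Minimal, illustrative sketch of the saliency-based defect screening pipeline
# described in the abstract. Parameters and helper names are assumptions.
import numpy as np
from scipy.ndimage import binary_fill_holes, gaussian_filter, label

def material_mask(ct, k=1.0):
    """Global statistical threshold separating lighter material voxels from the
    darker background, with enclosed pores filled so the whole part interior is
    analyzed. The abstract only says 'statistical measures'; mean + k*std is an
    assumed choice."""
    return binary_fill_holes(ct > ct.mean() + k * ct.std())

def saliency_map(ct, scales=(1.0, 2.0)):
    """Multiscale center-surround contrast built from separable Gaussian
    smoothing. gaussian_filter convolves one axis at a time with a 1D kernel,
    which is the separability property the abstract leans on for volumes with
    billions of voxels. This sketch scores voxels that are DARKER than their
    surroundings (typical of pores), an assumption not made in the paper."""
    ct = ct.astype(np.float32)
    sal = np.zeros_like(ct)
    for s in scales:
        center = gaussian_filter(ct, sigma=s)
        surround = gaussian_filter(ct, sigma=3.0 * s)
        sal += surround - center        # positive where the local region is dark
    return sal / len(scales)

def defect_clusters(ct, z_thresh=3.0):
    """Flag salient voxels inside the material region and merge adjacent ones
    into defect clusters (26-connectivity); returns labels and cluster sizes."""
    region = material_mask(ct)
    sal = saliency_map(ct)
    z = (sal - sal[region].mean()) / (sal[region].std() + 1e-9)
    labels, _ = label(region & (z > z_thresh), structure=np.ones((3, 3, 3)))
    sizes = np.bincount(labels.ravel())[1:]     # voxel count per cluster
    return labels, sizes

if __name__ == "__main__":
    # Synthetic stand-in for a CT scan: a bright cube of "material" on a dark
    # background, with one darker internal block acting as a seeded defect.
    rng = np.random.default_rng(0)
    vol = rng.normal(0.2, 0.02, (64, 64, 64)).astype(np.float32)
    vol[8:56, 8:56, 8:56] += 0.6
    vol[30:34, 30:34, 30:34] -= 0.5
    _, sizes = defect_clusters(vol)
    print(f"{sizes.size} defect cluster(s), voxel counts: {sizes.tolist()}")
```

The separability argument shows up in gaussian_filter, which applies a 1D kernel along each axis in turn, so a width-w kernel on an N-voxel volume costs roughly O(3wN) instead of O(w³N); the scale and z_thresh parameters stand in for the resolution-dependent kernel-size tuning mentioned in the abstract.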

Source journal
3D Printing and Additive Manufacturing
Materials Science - Materials Science (miscellaneous)
CiteScore: 6.00
Self-citation rate: 6.50%
Number of articles: 126
Journal description: 3D Printing and Additive Manufacturing is a peer-reviewed journal that provides a forum for world-class research in additive manufacturing and related technologies. The Journal explores emerging challenges and opportunities ranging from new developments of processes and materials, to new simulation and design tools, and informative applications and case studies. Novel applications in new areas, such as medicine, education, bio-printing, food printing, art and architecture, are also encouraged. The Journal addresses the important questions surrounding this powerful and growing field, including issues in policy and law, intellectual property, data standards, safety and liability, environmental impact, social, economic, and humanitarian implications, and emerging business models at the industrial and consumer scales.
Latest articles in this journal
Experimental Study on Interfacial Shear Behavior of 3D Printed Recycled Mortar.
Characterizing the Effect of Filament Moisture on Tensile Properties and Morphology of Fused Deposition Modeled Polylactic Acid/Polybutylene Succinate Parts.
On the Development of Smart Framework for Printability Maps in Additive Manufacturing of AISI 316L Stainless Steel.
Rapid Fabrication of Silica Microlens Arrays via Glass 3D Printing.
Simulation of Binder Jetting and Analysis of Magnesium Alloy Bonding Mechanism.