{"title":"利用机器感知和视觉显著性自动识别增材制造部件的缺陷。","authors":"Jan Petrich, Edward W Reutzel","doi":"10.1089/3dp.2021.0224","DOIUrl":null,"url":null,"abstract":"<p><p>Metal additive manufacturing (AM) is known to produce internal defects that can impact performance. As the technology becomes more mainstream, there is a growing need to establish nondestructive inspection technologies that can assess and quantify build quality with high confidence. This article presents a complete, three-dimensional (3D) solution for automated defect recognition in AM parts using X-ray computed tomography (CT) scans. The algorithm uses a machine perception framework to automatically separate visually salient regions, that is, anomalous voxels, from the CT background. Compared with supervised approaches, the proposed concept relies solely on visual cues in 3D similar to those used by human operators in two-dimensional (2D) assuming no <i>a priori</i> information about defect appearance, size, and/or shape. To ingest any arbitrary part geometry, a binary mask is generated using statistical measures that separate lighter, material voxels from darker, background voxels. Therefore, no additional part or scan information, such as CAD files, STL models, or laser scan vector data, is needed. Visual saliency is established using multiscale, symmetric, and separable 3D convolution kernels. Separability of the convolution kernels is paramount when processing CT scans with potentially billions of voxels because it allows for parallel processing and thus faster execution of the convolution operation in single dimensions. Based on the CT scan resolution, kernel sizes may be adjusted to identify defects of different sizes. All adjacent anomalous voxels are subsequently merged to form defect clusters, which in turn reveals additional information regarding defect size, morphology, and orientation to the user, information that may be linked to mechanical properties, such as fatigue response. The algorithm was implemented in MATLAB™ using hardware acceleration, that is, graphics processing unit support, and tested on CT scans of AM components available at the Center for Innovative Materials Processing through Direct Digital Deposition (CIMP-3D) at Penn State's Applied Research Laboratory. Initial results show adequate processing times of just a few minutes and very low false-positive rates, especially when addressing highly salient and larger defects. All developed analytic tools can be simplified to accommodate 2D images.</p>","PeriodicalId":54341,"journal":{"name":"3D Printing and Additive Manufacturing","volume":"10 3","pages":"406-419"},"PeriodicalIF":2.3000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280214/pdf/","citationCount":"0","resultStr":"{\"title\":\"Automated Defect Recognition for Additive Manufactured Parts Using Machine Perception and Visual Saliency.\",\"authors\":\"Jan Petrich, Edward W Reutzel\",\"doi\":\"10.1089/3dp.2021.0224\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Metal additive manufacturing (AM) is known to produce internal defects that can impact performance. As the technology becomes more mainstream, there is a growing need to establish nondestructive inspection technologies that can assess and quantify build quality with high confidence. 
This article presents a complete, three-dimensional (3D) solution for automated defect recognition in AM parts using X-ray computed tomography (CT) scans. The algorithm uses a machine perception framework to automatically separate visually salient regions, that is, anomalous voxels, from the CT background. Compared with supervised approaches, the proposed concept relies solely on visual cues in 3D similar to those used by human operators in two-dimensional (2D) assuming no <i>a priori</i> information about defect appearance, size, and/or shape. To ingest any arbitrary part geometry, a binary mask is generated using statistical measures that separate lighter, material voxels from darker, background voxels. Therefore, no additional part or scan information, such as CAD files, STL models, or laser scan vector data, is needed. Visual saliency is established using multiscale, symmetric, and separable 3D convolution kernels. Separability of the convolution kernels is paramount when processing CT scans with potentially billions of voxels because it allows for parallel processing and thus faster execution of the convolution operation in single dimensions. Based on the CT scan resolution, kernel sizes may be adjusted to identify defects of different sizes. All adjacent anomalous voxels are subsequently merged to form defect clusters, which in turn reveals additional information regarding defect size, morphology, and orientation to the user, information that may be linked to mechanical properties, such as fatigue response. The algorithm was implemented in MATLAB™ using hardware acceleration, that is, graphics processing unit support, and tested on CT scans of AM components available at the Center for Innovative Materials Processing through Direct Digital Deposition (CIMP-3D) at Penn State's Applied Research Laboratory. Initial results show adequate processing times of just a few minutes and very low false-positive rates, especially when addressing highly salient and larger defects. All developed analytic tools can be simplified to accommodate 2D images.</p>\",\"PeriodicalId\":54341,\"journal\":{\"name\":\"3D Printing and Additive Manufacturing\",\"volume\":\"10 3\",\"pages\":\"406-419\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2023-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280214/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"3D Printing and Additive Manufacturing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1089/3dp.2021.0224\",\"RegionNum\":4,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/6/8 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, MANUFACTURING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"3D Printing and Additive Manufacturing","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1089/3dp.2021.0224","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/6/8 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"ENGINEERING, MANUFACTURING","Score":null,"Total":0}
Citations: 0
Abstract
Metal additive manufacturing (AM) is known to produce internal defects that can impact performance. As the technology becomes more mainstream, there is a growing need to establish nondestructive inspection technologies that can assess and quantify build quality with high confidence. This article presents a complete, three-dimensional (3D) solution for automated defect recognition in AM parts using X-ray computed tomography (CT) scans. The algorithm uses a machine perception framework to automatically separate visually salient regions, that is, anomalous voxels, from the CT background. Unlike supervised approaches, the proposed concept relies solely on visual cues in 3D, similar to those used by human operators in two dimensions (2D), and assumes no a priori information about defect appearance, size, and/or shape. To ingest an arbitrary part geometry, a binary mask is generated using statistical measures that separate lighter, material voxels from darker, background voxels. Therefore, no additional part or scan information, such as CAD files, STL models, or laser scan vector data, is needed. Visual saliency is established using multiscale, symmetric, and separable 3D convolution kernels. Separability of the convolution kernels is paramount when processing CT scans with potentially billions of voxels because it allows for parallel processing and thus faster execution of the convolution operation along single dimensions. Based on the CT scan resolution, kernel sizes may be adjusted to identify defects of different sizes. All adjacent anomalous voxels are subsequently merged to form defect clusters, which in turn reveals additional information to the user regarding defect size, morphology, and orientation, information that may be linked to mechanical properties such as fatigue response. The algorithm was implemented in MATLAB™ using hardware acceleration, that is, graphics processing unit support, and tested on CT scans of AM components available at the Center for Innovative Materials Processing through Direct Digital Deposition (CIMP-3D) at Penn State's Applied Research Laboratory. Initial results show adequate processing times of just a few minutes and very low false-positive rates, especially when addressing highly salient and larger defects. All developed analytic tools can be simplified to accommodate 2D images.
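To make the processing chain in the abstract more concrete, the sketch below re-expresses it in Python with NumPy, SciPy, and scikit-image. It is not the authors' MATLAB/GPU implementation: the Otsu masking step, the (center, surround) Gaussian scale pairs, the mean + k*sigma saliency cutoff, and helper names such as material_mask, saliency_map, and defect_clusters are illustrative assumptions layered on the steps the abstract describes (statistical masking, multiscale separable convolution, anomaly thresholding, and merging of adjacent anomalous voxels into clusters).

    import numpy as np
    from scipy import ndimage
    from skimage.filters import threshold_otsu

    def material_mask(volume):
        # Statistical two-class split: lighter material voxels vs. darker background.
        # Otsu's threshold is an assumption; the abstract only says "statistical measures".
        return volume > threshold_otsu(volume)

    def saliency_map(volume, scales=((1.0, 4.0), (2.0, 8.0))):
        # Multiscale center-surround saliency from Gaussian kernels. SciPy's
        # gaussian_filter is separable (three 1D passes along the axes), echoing the
        # separability argument in the abstract. The sigma pairs are illustrative.
        vol = volume.astype(np.float32)
        sal = np.zeros_like(vol)
        for s_center, s_surround in scales:
            center = ndimage.gaussian_filter(vol, s_center)
            surround = ndimage.gaussian_filter(vol, s_surround)
            sal = np.maximum(sal, np.abs(center - surround))  # keep strongest scale response
        return sal

    def defect_clusters(volume, k=3.0):
        # Flag anomalous voxels inside the part, then merge neighbors into clusters.
        mask = material_mask(volume)
        sal = saliency_map(volume)
        inside = sal[mask]
        anomalous = mask & (sal > inside.mean() + k * inside.std())  # assumed cutoff

        # 26-connected labeling: adjacent anomalous voxels form one defect cluster.
        labels, n = ndimage.label(anomalous, structure=np.ones((3, 3, 3), dtype=bool))
        clusters = []
        for idx in range(1, n + 1):
            coords = np.argwhere(labels == idx)
            extent = coords.max(axis=0) - coords.min(axis=0) + 1  # bounding box (morphology)
            if len(coords) > 1:
                # Principal axis of the voxel cloud as a rough orientation estimate.
                _, eigvecs = np.linalg.eigh(np.cov(coords.T.astype(np.float64)))
                orientation = eigvecs[:, -1]
            else:
                orientation = np.zeros(3)
            clusters.append({"voxels": len(coords), "extent": extent, "orientation": orientation})
        return clusters

Running defect_clusters on a CT volume loaded as a 3D NumPy array returns one record per detected cluster with its voxel count, bounding-box extent, and dominant axis; as the abstract notes, the kernel scales would be tuned to the scan resolution, and a production version would need chunked or GPU-accelerated processing to handle volumes with billions of voxels.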
Journal Introduction:
3D Printing and Additive Manufacturing is a peer-reviewed journal that provides a forum for world-class research in additive manufacturing and related technologies. The Journal explores emerging challenges and opportunities, ranging from new developments in processes and materials to new simulation and design tools, informative applications, and case studies. Novel applications in new areas, such as medicine, education, bio-printing, food printing, art, and architecture, are also encouraged.
The Journal addresses the important questions surrounding this powerful and growing field, including issues in policy and law, intellectual property, data standards, safety and liability, environmental impact, social, economic, and humanitarian implications, and emerging business models at the industrial and consumer scales.