Audio-visual cross-modality knowledge transfer for machine learning-based in-situ monitoring in laser additive manufacturing

Additive Manufacturing · IF 11.1 · JCR Q1 (Engineering, Manufacturing) · CAS Tier 1 (Engineering & Technology) · Pub Date: 2025-02-05 · DOI: 10.1016/j.addma.2025.104692
Jiarui Xie , Mutahar Safdar , Lequn Chen , Seung Ki Moon , Yaoyao Fiona Zhao
Journal: Additive Manufacturing, Vol. 101, Article 104692. Full text: https://www.sciencedirect.com/science/article/pii/S2214860425000569
Citations: 0

Abstract

Various machine learning (ML)-based in-situ monitoring systems have been developed to detect anomalies and defects in laser additive manufacturing (LAM) processes. While multimodal fusion, which integrates data from visual, audio, and other modalities, can improve monitoring performance, it also increases hardware, computational, and operational costs. This paper introduces a cross-modality knowledge transfer (CMKT) methodology for LAM in-situ monitoring, which transfers knowledge from a source modality to a target modality. CMKT enhances the representativeness of the features extracted from the target modality, allowing the removal of source modality sensors during prediction. This paper proposes three CMKT methods: semantic alignment, fully supervised mapping, and semi-supervised mapping. The semantic alignment method establishes a shared encoded space between modalities to facilitate knowledge transfer. It employs a semantic alignment loss to align the distributions of identical groups (e.g., visual and audio defective groups) and a separation loss to distinguish different groups (e.g., visual defective and audio defect-free groups). The two mapping methods transfer knowledge by deriving features from one modality to another using fully supervised and semi-supervised learning approaches. In a case study for LAM in-situ defect detection, the proposed CMKT methods were compared with multimodal audio-visual fusion. The semantic alignment method achieved an accuracy of 98.6 % while removing the audio modality during the prediction phase, which is comparable to the 98.2 % accuracy obtained through multimodal fusion. Using explainable artificial intelligence, we discovered that semantic alignment CMKT can extract more representative features while reducing noise by leveraging the inherent correlations between modalities.
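To make the two losses described above concrete, the following is a minimal sketch of the semantic alignment idea: in a shared encoded space, centroids of identical groups across modalities (e.g., visual-defective and audio-defective) are pulled together, while different groups are pushed at least a margin apart. The centroid formulation, function names, and margin hinge are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

def cmkt_losses(z_vis, z_aud, y, margin=1.0):
    """Hypothetical sketch of the alignment and separation losses.

    z_vis, z_aud : (N, d) visual and audio features in the shared encoded space
    y            : (N,) class labels (e.g., 0 = defect-free, 1 = defective)
    """
    classes = np.unique(y)
    # Per-class centroids in each modality's encoded representation.
    c_vis = {c: z_vis[y == c].mean(axis=0) for c in classes}
    c_aud = {c: z_aud[y == c].mean(axis=0) for c in classes}

    # Semantic alignment loss: identical groups (e.g., visual-defective and
    # audio-defective) should coincide in the shared space.
    align = np.mean([np.sum((c_vis[c] - c_aud[c]) ** 2) for c in classes])

    # Separation loss: different groups (e.g., visual-defective and
    # audio-defect-free) should stay at least `margin` apart.
    sep = np.mean([
        max(0.0, margin - np.linalg.norm(c_vis[a] - c_aud[b]))
        for a in classes for b in classes if a != b
    ])
    return align, sep
```

In training, a weighted sum of the two terms would be minimized alongside the classification loss; at prediction time only the target-modality encoder is kept, which is what allows the audio sensors to be removed.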
Source journal: Additive Manufacturing (Materials Science: General Materials Science)
CiteScore: 19.80
Self-citation rate: 12.70%
Annual articles: 648
Review time: 35 days
Journal introduction: Additive Manufacturing stands as a peer-reviewed journal dedicated to delivering high-quality research papers and reviews in the field of additive manufacturing, serving both academia and industry leaders. The journal's objective is to recognize the innovative essence of additive manufacturing and its diverse applications, providing a comprehensive overview of current developments and future prospects. The transformative potential of additive manufacturing technologies in product design and manufacturing is poised to disrupt traditional approaches. In response to this paradigm shift, a distinctive and comprehensive publication outlet was essential. Additive Manufacturing fulfills this need, offering a platform for engineers, materials scientists, and practitioners across academia and various industries to document and share innovations in these evolving technologies.
Latest articles in this journal:
- Geometric deviations and their effects in thin-plate lattice structures fabricated via LPBF
- Partition laser assembling technique
- Competition between pore coalescence-controlled and pore growth-controlled fracture in 316L stainless steel by laser powder bed fusion: Effect of pore size and spacing
- Formation mechanisms on amorphous-enhanced interfaces of Al/steel bimetallic freeform components via wire-based friction stir additive manufacturing
- Electrohydrodynamic inkjet printing of resistive random access memory under microgravity: A pathway to in-space manufacturing of microelectronics