Automated Domain Adaptation in Tool Condition Monitoring using Generative Adversarial Networks
Benjamin Lutz, Dominik Kißkalt, Daniel Regulin, Burak Aybar, Jörg K.H. Franke
Published in: 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE)
Publication date: 2021-08-23
DOI: 10.1109/CASE49439.2021.9551632
Citations: 1
Abstract
Microscopy is commonly used in machining to study the effects of tool wear. In modern tool condition monitoring systems, the analytical capabilities are further enhanced by machine learning, allowing for automated segmentation of the various visible defects. The prevailing challenge, however, is the divergence among different use cases, as the visual properties of cutting tool images are influenced by many domain-specific factors such as the type of cutting tool, the respective machining process, and the image acquisition unit. Thus, we propose the use of automated domain adaptation so that existing training data from source domains can be used effectively to train segmentation models for novel target domains, while minimizing the need for newly labelled data. This is achieved through image-to-image translation using generative adversarial networks, which generate synthetic images with visual characteristics similar to those of the target domain, based on existing masks of the source domains. Our validation shows that with as few as ten labelled images from the target domain, a sufficient prediction performance of 0.72 mIoU can be achieved when tested on unseen images from the target domain. This corresponds to a reduction of manual labelling effort by two-thirds compared to conventional labelling and training methods. Thus, by adapting existing data, prediction performance is increased while expensive data generation is minimized.
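For readers unfamiliar with the reported metric: mean Intersection-over-Union (mIoU) is the standard score for semantic segmentation quality, and the 0.72 figure above refers to it. The following is a minimal, self-contained sketch of how mIoU is typically computed over per-pixel class labels; the function name and list-based input format are illustrative choices, not taken from the paper.

```python
def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union over the classes present in either mask.

    pred, target: flat sequences of integer class labels, one per pixel.
    Classes absent from both masks are skipped so they do not distort the mean.
    """
    ious = []
    for c in range(num_classes):
        # Pixels where both masks agree on class c (intersection) ...
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        # ... and pixels where either mask assigns class c (union).
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)


# Tiny example: a 4-pixel image with two classes.
score = mean_iou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2)
print(round(score, 4))  # class 0: IoU 1/2, class 1: IoU 2/3, mean 7/12
```

In practice this would be computed with a library routine (e.g. a confusion-matrix-based implementation) over full-resolution masks, but the per-class intersection/union logic is the same.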