Cross-center Model Adaptive Tooth segmentation
Ruizhe Chen, Jianfei Yang, Huimin Xiong, Ruiling Xu, Yang Feng, Jian Wu, Zuozhu Liu
Medical Image Analysis, vol. 101, 103443 (2024). DOI: 10.1016/j.media.2024.103443
Abstract
Automatic three-dimensional tooth segmentation on intraoral scans (IOS) plays a pivotal role in computer-aided orthodontic treatment. In practice, deploying existing well-trained models to different medical centers suffers from two main problems: (1) the data distribution shifts between the existing and new centers, causing significant performance degradation; and (2) the data in the existing center(s) usually cannot be shared, while annotating additional data in the new center(s) is time-consuming and expensive, making re-training or fine-tuning infeasible. In this paper, we propose a framework for Cross-center Model Adaptive Tooth segmentation (CMAT) to alleviate these issues. CMAT takes the trained model(s) from the source center(s) as input and adapts them to different target centers, without data transmission or additional annotations. CMAT is applicable to three cross-center scenarios: source-data-free, multi-source-data-free, and test-time. Model adaptation in CMAT is realized by a tooth-level prototype alignment module, a progressive pseudo-labeling transfer module, and a tooth-prior regularized information maximization module. Experiments under the three cross-center scenarios on two datasets show that CMAT consistently surpasses existing baselines. Its effectiveness is further verified with extensive ablation studies and statistical analysis, demonstrating its applicability to privacy-preserving model-adaptive tooth segmentation in real-world digital dentistry.
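The abstract does not spell out how the adaptation modules are formulated, so the snippet below is only a minimal PyTorch sketch of the general idea behind a prior-regularized information-maximization objective for adapting to unlabeled target scans: each point's prediction is pushed to be confident (low conditional entropy), while the batch-level marginal prediction is pulled toward an expected tooth-label prior. The function name, tensor shapes, and uniform placeholder prior are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def information_maximization_loss(logits, tooth_prior, eps=1e-8):
    """Prior-regularized information maximization on unlabeled target data.

    logits:      (N, C) raw predictions for N mesh cells / points
    tooth_prior: (C,)   assumed marginal distribution over tooth labels
    """
    probs = F.softmax(logits, dim=1)                          # (N, C)

    # Conditional entropy: push each point toward one confident label.
    cond_entropy = -(probs * torch.log(probs + eps)).sum(dim=1).mean()

    # Marginal prediction distribution over the batch.
    marginal = probs.mean(dim=0)                              # (C,)

    # KL(marginal || prior): keep batch-level label usage close to the
    # expected tooth-label prior instead of collapsing to a few classes.
    prior_kl = (marginal * (torch.log(marginal + eps)
                            - torch.log(tooth_prior + eps))).sum()

    return cond_entropy + prior_kl


# Illustrative usage with placeholder shapes: 16 teeth + gingiva = 17 classes.
logits = torch.randn(2048, 17)               # stand-in for network outputs
prior = torch.full((17,), 1.0 / 17)          # placeholder uniform prior
loss = information_maximization_loss(logits, prior)
```

This sketch covers only an information-maximization-style term; the tooth-level prototype alignment and progressive pseudo-labeling transfer modules described in the abstract are separate components not reproduced here.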
Journal description:
Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.