{"title":"Automatic soft-tissue analysis on orthodontic frontal and lateral facial photographs based on deep learning","authors":"Qiao Chang, Yuxing Bai, Shaofeng Wang, Fan Wang, Yajie Wang, Feifei Zuo, Xianju Xie","doi":"10.1111/ocr.12830","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>To establish the automatic soft-tissue analysis model based on deep learning that performs landmark detection and measurement calculations on orthodontic facial photographs to achieve a more comprehensive quantitative evaluation of soft tissues.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>A total of 578 frontal photographs and 450 lateral photographs of orthodontic patients were collected to construct datasets. All images were manually annotated by two orthodontists with 43 frontal-image landmarks and 17 lateral-image landmarks. Automatic landmark detection models were established, which consisted of a high-resolution network, a feature fusion module based on depthwise separable convolution, and a prediction model based on pixel shuffle. Ten measurements for frontal images and eight measurements for lateral images were defined. Test sets were used to evaluate the model performance, respectively. The mean radial error of landmarks and measurement error were calculated and statistically analysed to evaluate their reliability.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>The mean radial error was 14.44 ± 17.20 pixels for the landmarks in the frontal images and 13.48 ± 17.12 pixels for the landmarks in the lateral images. There was no statistically significant difference between the model prediction and manual annotation measurements except for the mid facial-lower facial height index. A total of 14 measurements had a high consistency.</p>\n </section>\n \n <section>\n \n <h3> Conclusion</h3>\n \n <p>Based on deep learning, we established automatic soft-tissue analysis models for orthodontic facial photographs that can automatically detect 43 frontal-image landmarks and 17 lateral-image landmarks while performing comprehensive soft-tissue measurements. The models can assist orthodontists in efficient and accurate quantitative soft-tissue evaluation for clinical application.</p>\n </section>\n </div>","PeriodicalId":19652,"journal":{"name":"Orthodontics & Craniofacial Research","volume":"27 6","pages":"893-902"},"PeriodicalIF":2.4000,"publicationDate":"2024-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Orthodontics & Craniofacial Research","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/ocr.12830","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"DENTISTRY, ORAL SURGERY & MEDICINE","Score":null,"Total":0}
Citations: 0
Abstract
Background
To establish an automatic soft-tissue analysis model based on deep learning that performs landmark detection and measurement calculation on orthodontic facial photographs, enabling a more comprehensive quantitative evaluation of soft tissues.
Methods
A total of 578 frontal photographs and 450 lateral photographs of orthodontic patients were collected to construct the datasets. All images were manually annotated by two orthodontists with 43 frontal-image landmarks and 17 lateral-image landmarks. Automatic landmark detection models were established, each consisting of a high-resolution network, a feature fusion module based on depthwise separable convolution, and a prediction module based on pixel shuffle. Ten measurements for frontal images and eight measurements for lateral images were defined. Separate test sets were used to evaluate the performance of each model. The mean radial error of the landmarks and the measurement errors were calculated and statistically analysed to evaluate reliability.
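The abstract names the three architectural components but gives no implementation details. The following is a minimal sketch (PyTorch, not the authors' code) of how a landmark-heatmap prediction head might combine depthwise-separable-convolution feature fusion with pixel-shuffle upsampling; the channel sizes, upscale factor and landmark count are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of a landmark-heatmap head combining
# the two components named in the Methods: feature fusion via depthwise
# separable convolution and upsampling via pixel shuffle. Channel sizes, the
# upscale factor and the landmark count (43 frontal landmarks) are assumptions.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution followed by a 1x1 pointwise convolution."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))


class LandmarkHead(nn.Module):
    """Fuses backbone features and predicts one heatmap per landmark."""

    def __init__(self, in_ch: int = 256, n_landmarks: int = 43, upscale: int = 4):
        super().__init__()
        self.fuse = DepthwiseSeparableConv(in_ch, n_landmarks * upscale ** 2)
        self.shuffle = nn.PixelShuffle(upscale)  # rearranges channels into spatial resolution

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.fuse(features))  # (B, n_landmarks, H*upscale, W*upscale)


# Example: features from a high-resolution backbone at 1/4 input resolution.
feats = torch.randn(1, 256, 64, 64)
heatmaps = LandmarkHead()(feats)
print(heatmaps.shape)  # torch.Size([1, 43, 256, 256])
```

Landmark coordinates would then be read off as the peak location of each predicted heatmap.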
Results
The mean radial error was 14.44 ± 17.20 pixels for the landmarks in the frontal images and 13.48 ± 17.12 pixels for the landmarks in the lateral images. There was no statistically significant difference between the model predictions and the manual annotation measurements except for the midfacial-lower facial height index. A total of 14 measurements showed high consistency.
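By its usual definition, the mean radial error reported above is the Euclidean distance in pixels between each predicted landmark and its manual annotation, averaged over landmarks and images. A small illustrative sketch (assuming NumPy arrays of pixel coordinates; not the authors' evaluation code):

```python
# Hedged sketch of the mean radial error (MRE): the per-landmark Euclidean
# distance in pixels between prediction and manual annotation, averaged over
# all landmarks and images. Array shapes are illustrative assumptions.
import numpy as np


def mean_radial_error(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """pred, gt: arrays of shape (n_images, n_landmarks, 2) in pixel coordinates.

    Returns the mean and standard deviation of the radial error in pixels.
    """
    radial = np.linalg.norm(pred - gt, axis=-1)  # per-landmark Euclidean distance
    return float(radial.mean()), float(radial.std())


# Example with synthetic coordinates for 10 frontal images and 43 landmarks.
rng = np.random.default_rng(0)
gt = rng.uniform(0, 1000, size=(10, 43, 2))
pred = gt + rng.normal(0, 10, size=gt.shape)
mre, sd = mean_radial_error(pred, gt)
print(f"MRE = {mre:.2f} ± {sd:.2f} px")
```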
Conclusion
Based on deep learning, we established automatic soft-tissue analysis models for orthodontic facial photographs that can automatically detect 43 frontal-image landmarks and 17 lateral-image landmarks while performing comprehensive soft-tissue measurements. The models can assist orthodontists in efficient and accurate quantitative soft-tissue evaluation for clinical application.
Journal overview:
Orthodontics & Craniofacial Research - Genes, Growth and Development is published to serve its readers as an international forum for the presentation and critical discussion of issues pertinent to the advancement of the specialty of orthodontics and the evidence-based knowledge of craniofacial growth and development. This forum is based on scientifically supported information, but also includes minority and conflicting opinions.
The objective of the journal is to facilitate effective communication between the research community and practicing clinicians. Original papers of high scientific quality that report the findings of clinical trials, clinical epidemiology, and novel therapeutic or diagnostic approaches are appropriate submissions. Similarly, we welcome papers in genetics, developmental biology, syndromology, surgery, speech and hearing, and other biomedical disciplines related to clinical orthodontics and normal and abnormal craniofacial growth and development. In addition to original and basic research, the journal publishes concise reviews, case reports of substantial value, invited essays, letters, and announcements.
The journal is published quarterly. The review of submitted papers will be coordinated by the editor and members of the editorial board. It is policy to review manuscripts within 3 to 4 weeks of receipt and to publish within 3 to 6 months of acceptance.