Artificial intelligence for predicting orthodontic patient cooperation: Voice records versus frontal photographs

APOS Trends in Orthodontics · IF 0.5 · Q4 (DENTISTRY, ORAL SURGERY & MEDICINE) · Pub Date: 2024-01-18 · DOI: 10.25259/apos_221_2023
Farhad Salmanpour, Hasan Camcı
{"title":"Artificial intelligence for predicting orthodontic patient cooperation: Voice records versus frontal photographs","authors":"Farhad Salmanpour, Hasan Camcı","doi":"10.25259/apos_221_2023","DOIUrl":null,"url":null,"abstract":"\n\nThe purpose of this study was to compare the predictive ability of different convolutional neural network (CNN) models and machine learning algorithms trained with frontal photographs and voice recordings.\n\n\n\nTwo hundred and thirty-seven orthodontic patients (147 women, 90 men, mean age 14.94 ± 2.4 years) were included in the study. According to the orthodontic patient cooperation scale, patients were classified into two groups at the 12th month of treatment: Cooperative and non-cooperative. Afterward, frontal photographs and text-to-speech voice records of the participants were collected. CNN models and machine learning algorithms were employed to categorize the data into cooperative and non-cooperative groups. Nine different CNN models were employed to analyze images, while one CNN model and 13 machine learning models were utilized to analyze audio data. The accuracy, precision, recall, and F1-score values of these models were assessed.\n\n\n\nXception (66%) and DenseNet121 (66%) were the two most effective CNN models in evaluating photographs. The model with the lowest success rate was ResNet101V2 (48.0%). The success rates of the other five models were similar. In the assessment of audio data, the most successful models were YAMNet, linear discriminant analysis, K-nearest neighbors, support vector machine, extra tree classifier, and stacking classifier (%58.7). The algorithm with the lowest success rate was the decision tree classifier (41.3%).\n\n\n\nSome of the CNN models trained with photographs were successful in predicting cooperation, but voice data were not as useful as photographs in predicting cooperation.\n","PeriodicalId":42593,"journal":{"name":"APOS Trends in Orthodontics","volume":null,"pages":null},"PeriodicalIF":0.5000,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"APOS Trends in Orthodontics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.25259/apos_221_2023","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"DENTISTRY, ORAL SURGERY & MEDICINE","Score":null,"Total":0}
Citations: 0

Abstract

The purpose of this study was to compare the predictive ability of different convolutional neural network (CNN) models and machine learning algorithms trained with frontal photographs and voice recordings.

Two hundred and thirty-seven orthodontic patients (147 women, 90 men; mean age 14.94 ± 2.4 years) were included in the study. According to the orthodontic patient cooperation scale, patients were classified into two groups at the 12th month of treatment: cooperative and non-cooperative. Frontal photographs and text-to-speech voice recordings of the participants were then collected. CNN models and machine learning algorithms were employed to categorize the data into cooperative and non-cooperative groups. Nine different CNN models were used to analyze the images, while one CNN model and 13 machine learning models were used to analyze the audio data. The accuracy, precision, recall, and F1-score of each model were assessed.

Xception (66%) and DenseNet121 (66%) were the two most effective CNN models in evaluating the photographs. The model with the lowest success rate was ResNet101V2 (48.0%). The success rates of the other five models were similar. In the assessment of the audio data, the most successful models were YAMNet, linear discriminant analysis, K-nearest neighbors, support vector machine, the extra trees classifier, and the stacking classifier (58.7%). The algorithm with the lowest success rate was the decision tree classifier (41.3%).

Some of the CNN models trained with photographs were successful in predicting cooperation, but voice data were not as useful as photographs for this purpose.
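The article does not publish its training code, so the following is only a minimal sketch of how the image arm could be set up in Keras, assuming transfer learning from an ImageNet-pretrained Xception backbone (one of the study's two best performers). The photos/ directory layout, input size, and hyperparameters are illustrative assumptions, not the authors' settings; DenseNet121 or any of the other backbones could be substituted the same way.

```python
# Minimal sketch (assumptions noted inline): binary cooperative vs.
# non-cooperative classifier on frontal photographs, built on a frozen
# ImageNet-pretrained Xception backbone.
from tensorflow import keras

IMG_SIZE = (299, 299)  # Xception's native input resolution

# Hypothetical layout: photos/cooperative/*.jpg and photos/non-cooperative/*.jpg
train_ds = keras.utils.image_dataset_from_directory(
    "photos", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=16, label_mode="binary")
val_ds = keras.utils.image_dataset_from_directory(
    "photos", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=16, label_mode="binary")

base = keras.applications.Xception(
    include_top=False, weights="imagenet",
    input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False  # train only the small classification head

inputs = keras.Input(shape=IMG_SIZE + (3,))
x = keras.layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)  # Xception expects [-1, 1]
x = base(x, training=False)
x = keras.layers.Dropout(0.3)(x)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)

# Accuracy, precision, and recall are tracked directly; F1 can be derived
# from precision and recall after evaluation.
model.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy",
                       keras.metrics.Precision(name="precision"),
                       keras.metrics.Recall(name="recall")])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```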
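The audio arm is described only at the level of model names, so the sketch below assumes one plausible pipeline: YAMNet (loaded from TensorFlow Hub) produces per-frame embeddings that are mean-pooled into one vector per recording, and the classical scikit-learn models named in the abstract are then trained and scored with the study's four metrics. The file locations and the label-from-filename rule are hypothetical.

```python
# Minimal sketch (pipeline is an assumption): YAMNet clip embeddings feeding
# the classical classifiers named in the abstract.
import glob

import librosa
import numpy as np
import tensorflow_hub as hub
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import ExtraTreesClassifier, StackingClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

yamnet = hub.load("https://tfhub.dev/google/yamnet/1")

def clip_embedding(path):
    # YAMNet expects 16 kHz mono float32; mean-pool its per-frame 1024-d
    # embeddings into one fixed-length vector per recording.
    waveform, _ = librosa.load(path, sr=16000, mono=True)
    _, embeddings, _ = yamnet(waveform)
    return embeddings.numpy().mean(axis=0)

# Hypothetical data layout: one .wav per patient, cooperation encoded in the
# filename (1 = cooperative at month 12, 0 = non-cooperative).
paths = sorted(glob.glob("voices/*.wav"))
labels = np.array([int(p.endswith("_coop.wav")) for p in paths])

X = np.stack([clip_embedding(p) for p in paths])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.2, stratify=labels, random_state=42)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "ExtraTrees": ExtraTreesClassifier(random_state=42),
}
# The stacking classifier combines the four base models above.
models["Stacking"] = StackingClassifier(
    estimators=[(name, clf) for name, clf in models.items()])

for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f}  "
          f"prec={precision_score(y_te, pred):.3f}  "
          f"rec={recall_score(y_te, pred):.3f}  "
          f"f1={f1_score(y_te, pred):.3f}")
```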
Source journal: APOS Trends in Orthodontics (DENTISTRY, ORAL SURGERY & MEDICINE)
CiteScore: 0.80 · Self-citation rate: 0.00% · Publication volume: 47