{"title":"Exploring the capabilities of GenAI for oral cancer consultations in remote consultations : Author.","authors":"Yu-Tao Xiong, Hao-Nan Liu, Yu-Min Zeng, Zheng-Zhe Zhan, Wei Liu, Yuan-Chen Wang, Wei Tang, Chang Liu","doi":"10.1186/s12903-025-05619-w","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Generative artificial intelligence (GenAI) has demonstrated potential in remote consultations, yet its capacity to comprehend oral cancer has not yet been fully evaluated. The objective of this study was to evaluate the accuracy, reliability and validity of GenAI in addressing questions related to remote consultations for oral cancer.</p><p><strong>Methods: </strong>A search was conducted on telemedicine platforms in China, summarizing patients' inquiries regarding oral cancer. A panel of board-certified oral surgeons compiled the reference answers for addressing these questions. GPT-3.5-turbo and GPT-4o were tasked to answer specific questions related to oral cancer, with their responses recorded. The responses were assessed using qualitative and quantitative measures, including the accuracy, the number of key points, text length, lexical density, and a Likert scale. The chi-square test was utilized to detect differences in qualitative data, while Kruskal-Wallis test, Mann-Whitney U test and t-test for quantitative data.</p><p><strong>Results: </strong>A total of 34 oral cancer questions were included, covering basic, etiology, diagnosis, intervention, and prognosis. GPT-3.5-Turbo demonstrated an overall accuracy rate of 77.50% in qualitative analysis, and GPT-4o was 88.20%. The average scores of GPT-3.5Turbo and GPT-4o were 3.96 and 4.35, respectively, with statistically significant differences. GPT-3.5-Turbo and GPT-4o were close to the reference answers in terms of the number of key points, but significantly lower in terms of text length and lexical density.</p><p><strong>Conclusion: </strong>GPT-4o demonstrated a marginal advantage, although no statistically significant differences in response accuracy were observed between GPT-3.5-Turbo and GPT-4o. Moreover, GPT-4o outperformed in terms of reliability and validity, making it more appropriate for remote consultation scenarios.</p>","PeriodicalId":9072,"journal":{"name":"BMC Oral Health","volume":"25 1","pages":"269"},"PeriodicalIF":2.6000,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMC Oral Health","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s12903-025-05619-w","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"DENTISTRY, ORAL SURGERY & MEDICINE","Score":null,"Total":0}
Abstract
Background: Generative artificial intelligence (GenAI) has demonstrated potential in remote consultations, yet its capacity to comprehend oral cancer has not been fully evaluated. The objective of this study was to evaluate the accuracy, reliability, and validity of GenAI in addressing questions related to remote consultations for oral cancer.
Methods: A search was conducted on telemedicine platforms in China to summarize patients' inquiries regarding oral cancer. A panel of board-certified oral surgeons compiled reference answers to these questions. GPT-3.5-Turbo and GPT-4o were tasked with answering the oral cancer questions, and their responses were recorded. The responses were assessed using qualitative and quantitative measures, including accuracy, number of key points, text length, lexical density, and a Likert scale. The chi-square test was used to detect differences in the qualitative data, while the Kruskal-Wallis test, Mann-Whitney U test, and t-test were used for the quantitative data.
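The abstract names the statistical tests but not the authors' actual analysis pipeline. The sketch below is an illustrative Python/SciPy workflow, using hypothetical counts and ratings, showing how the chi-square, Mann-Whitney U, Kruskal-Wallis, and t-tests mentioned in the Methods might be applied to this kind of data; it is not the study's code.

```python
# Illustrative sketch only -- not the authors' analysis. All counts and ratings
# below are hypothetical placeholders mirroring the comparisons the Methods describe.
import numpy as np
from scipy import stats

# Qualitative data: correct vs. incorrect responses per model (hypothetical counts).
#                         correct  incorrect
contingency = np.array([[31, 9],    # GPT-3.5-Turbo
                        [35, 5]])   # GPT-4o
chi2, p_chi2, dof, _ = stats.chi2_contingency(contingency)
print(f"chi-square: chi2={chi2:.2f}, p={p_chi2:.3f}")

# Quantitative data: 5-point Likert ratings per response (hypothetical samples).
rng = np.random.default_rng(0)
likert_35 = rng.integers(3, 6, size=34)   # GPT-3.5-Turbo ratings
likert_4o = rng.integers(3, 6, size=34)   # GPT-4o ratings

# Mann-Whitney U test for two independent ordinal samples.
u_stat, p_u = stats.mannwhitneyu(likert_35, likert_4o, alternative="two-sided")
print(f"Mann-Whitney U: U={u_stat:.1f}, p={p_u:.3f}")

# Kruskal-Wallis test when three or more groups are compared
# (e.g., both models plus the reference answers).
likert_ref = np.full(34, 5)               # hypothetical reference-answer ratings
h_stat, p_h = stats.kruskal(likert_35, likert_4o, likert_ref)
print(f"Kruskal-Wallis: H={h_stat:.2f}, p={p_h:.3f}")

# t-test for approximately normal quantitative measures (e.g., text length in words).
len_35 = rng.normal(300, 50, size=34)
len_4o = rng.normal(320, 50, size=34)
t_stat, p_t = stats.ttest_ind(len_35, len_4o)
print(f"t-test: t={t_stat:.2f}, p={p_t:.3f}")
```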
Results: A total of 34 oral cancer questions were included, covering basic knowledge, etiology, diagnosis, intervention, and prognosis. GPT-3.5-Turbo demonstrated an overall accuracy rate of 77.50% in the qualitative analysis, while GPT-4o achieved 88.20%. The average Likert scores of GPT-3.5-Turbo and GPT-4o were 3.96 and 4.35, respectively, a statistically significant difference. Both models were close to the reference answers in the number of key points, but significantly lower in text length and lexical density.
Conclusion: GPT-4o demonstrated a marginal advantage, although no statistically significant difference in response accuracy was observed between GPT-3.5-Turbo and GPT-4o. Moreover, GPT-4o outperformed GPT-3.5-Turbo in reliability and validity, making it more appropriate for remote consultation scenarios.
About the journal:
BMC Oral Health is an open access, peer-reviewed journal that considers articles on all aspects of the prevention, diagnosis and management of disorders of the mouth, teeth and gums, as well as related molecular genetics, pathophysiology, and epidemiology.