Mélissa Peters (Doctor, Stomatologist), Maxime Le Clercq (Doctor), Antoine Yanni (Doctor), Xavier Vanden Eynden (Maxillofacial surgeon), Lalmand Martin (Maxillofacial surgeon), Noémie Vanden Haute (Doctor), Szonja Tancredi (Doctor), Céline De Passe (Doctor), Edward Boutremans (Maxillofacial surgeon), Jerome Lechien (ENT doctor), Didier Dequanter (Head and neck surgeon)
{"title":"ChatGPT 和受训人员在管理颌面部病人方面的表现。","authors":"Mélissa Peters (Doctorstomatologist) , Maxime Le Clercq (Doctor) , Antoine Yanni (Doctor) , Xavier Vanden Eynden (Maxillofacial surgeon) , Lalmand Martin (Maxillofacial surgeon) , Noémie Vanden Haute (Doctor) , Szonja Tancredi (Doctor) , Céline De Passe (Doctor) , Edward Boutremans (Maxillofacial surgeon) , Jerome Lechien (ENT doctor) , Didier Dequanter (Head and neck surgeon)","doi":"10.1016/j.jormas.2024.102090","DOIUrl":null,"url":null,"abstract":"<div><h3>Introduction</h3><div>ChatGPT is an artificial intelligence based large language model with the ability to generate human-like response to text input, its performance has already been the subject of several studies in different fields. The aim of this study was to evaluate the performance of ChatGPT in the management of maxillofacial clinical cases.</div></div><div><h3>Materials and methods</h3><div>A total of 38 clinical cases consulting at the Stomatology-Maxillofacial Surgery Department were prospectively recruited and presented to ChatGPT, which was interrogated for diagnosis, differential diagnosis, management and treatment. The performance of trainees and ChatGPT was compared by three blinded board-certified maxillofacial surgeons using the AIPI score.</div></div><div><h3>Results</h3><div>The average total AIPI score assigned to the practitioners was 18.71 and 16.39 to ChatGPT, significantly lower (<em>p</em> < 0.001). According to the experts, ChatGPT was significantly less effective for diagnosis and treatment (<em>p</em> < 0.001). Following two of the three experts, ChatGPT was significantly less effective in considering patient data (<em>p</em> = 0.001) and suggesting additional examinations (<em>p</em> < 0.0001). The primary diagnosis proposed by ChatGPT was judged by the experts as not plausible and /or incomplete in 2.63 % to 18 % of the cases, the additional examinations were associated with inadequate examinations in 2.63 %, to 21.05 % of the cases and proposed an association of pertinent, but incomplete therapeutic findings in 18.42 % to 47.37 % of the cases, while the therapeutic findings were considered pertinent, necessary and inadequate in 18.42 % of cases.</div></div><div><h3>Conclusions</h3><div>ChatGPT appears less efficient in diagnosis, the selection of the most adequate additional examination and the proposition of pertinent and necessary therapeutic approaches.</div></div>","PeriodicalId":55993,"journal":{"name":"Journal of Stomatology Oral and Maxillofacial Surgery","volume":"126 3","pages":"Article 102090"},"PeriodicalIF":1.8000,"publicationDate":"2024-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ChatGPT and trainee performances in the management of maxillofacial patients\",\"authors\":\"Mélissa Peters (Doctorstomatologist) , Maxime Le Clercq (Doctor) , Antoine Yanni (Doctor) , Xavier Vanden Eynden (Maxillofacial surgeon) , Lalmand Martin (Maxillofacial surgeon) , Noémie Vanden Haute (Doctor) , Szonja Tancredi (Doctor) , Céline De Passe (Doctor) , Edward Boutremans (Maxillofacial surgeon) , Jerome Lechien (ENT doctor) , Didier Dequanter (Head and neck surgeon)\",\"doi\":\"10.1016/j.jormas.2024.102090\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Introduction</h3><div>ChatGPT is an artificial intelligence based large language model with the ability to generate human-like response to text input, its performance has already been the subject of 
several studies in different fields. The aim of this study was to evaluate the performance of ChatGPT in the management of maxillofacial clinical cases.</div></div><div><h3>Materials and methods</h3><div>A total of 38 clinical cases consulting at the Stomatology-Maxillofacial Surgery Department were prospectively recruited and presented to ChatGPT, which was interrogated for diagnosis, differential diagnosis, management and treatment. The performance of trainees and ChatGPT was compared by three blinded board-certified maxillofacial surgeons using the AIPI score.</div></div><div><h3>Results</h3><div>The average total AIPI score assigned to the practitioners was 18.71 and 16.39 to ChatGPT, significantly lower (<em>p</em> < 0.001). According to the experts, ChatGPT was significantly less effective for diagnosis and treatment (<em>p</em> < 0.001). Following two of the three experts, ChatGPT was significantly less effective in considering patient data (<em>p</em> = 0.001) and suggesting additional examinations (<em>p</em> < 0.0001). The primary diagnosis proposed by ChatGPT was judged by the experts as not plausible and /or incomplete in 2.63 % to 18 % of the cases, the additional examinations were associated with inadequate examinations in 2.63 %, to 21.05 % of the cases and proposed an association of pertinent, but incomplete therapeutic findings in 18.42 % to 47.37 % of the cases, while the therapeutic findings were considered pertinent, necessary and inadequate in 18.42 % of cases.</div></div><div><h3>Conclusions</h3><div>ChatGPT appears less efficient in diagnosis, the selection of the most adequate additional examination and the proposition of pertinent and necessary therapeutic approaches.</div></div>\",\"PeriodicalId\":55993,\"journal\":{\"name\":\"Journal of Stomatology Oral and Maxillofacial Surgery\",\"volume\":\"126 3\",\"pages\":\"Article 102090\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2024-09-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Stomatology Oral and Maxillofacial Surgery\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2468785524003665\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"DENTISTRY, ORAL SURGERY & MEDICINE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Stomatology Oral and Maxillofacial Surgery","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2468785524003665","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"DENTISTRY, ORAL SURGERY & MEDICINE","Score":null,"Total":0}
ChatGPT and trainee performances in the management of maxillofacial patients
Introduction
ChatGPT is an artificial intelligence-based large language model able to generate human-like responses to text input; its performance has already been the subject of several studies in different fields. The aim of this study was to evaluate the performance of ChatGPT in the management of maxillofacial clinical cases.
Materials and methods
A total of 38 clinical cases from patients consulting at the Stomatology-Maxillofacial Surgery Department were prospectively recruited and presented to ChatGPT, which was queried for diagnosis, differential diagnosis, management, and treatment. The performance of the trainees and of ChatGPT was compared by three blinded, board-certified maxillofacial surgeons using the AIPI score.
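For readers who want to reproduce this kind of paired comparison, the sketch below shows one way it might be computed in Python. The abstract does not specify which statistical test was used, so the Wilcoxon signed-rank test, and all score values shown, are illustrative assumptions only, not the study's method or data.

```python
# A minimal sketch of a paired comparison of per-case AIPI totals for the
# trainees and for ChatGPT. The test choice (Wilcoxon signed-rank) is an
# assumption; the scores below are random placeholders, not study data.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_cases = 38

# Hypothetical AIPI totals per case (placeholder values only).
trainee_scores = rng.integers(16, 21, size=n_cases)
chatgpt_scores = trainee_scores - rng.integers(0, 5, size=n_cases)

# Paired, non-parametric comparison of the two score series.
stat, p_value = wilcoxon(trainee_scores, chatgpt_scores)
print(f"Wilcoxon statistic = {stat}, p = {p_value:.4g}")
```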
Results
The average total AIPI score was 18.71 for the practitioners and 16.39 for ChatGPT, significantly lower (p < 0.001). According to the experts, ChatGPT was significantly less effective for diagnosis and treatment (p < 0.001). For two of the three experts, ChatGPT was also significantly less effective at taking patient data into account (p = 0.001) and at suggesting additional examinations (p < 0.0001). The primary diagnosis proposed by ChatGPT was judged by the experts as not plausible and/or incomplete in 2.63 % to 18 % of cases; the suggested additional examinations included inadequate examinations in 2.63 % to 21.05 % of cases; the proposed treatments combined pertinent but incomplete therapeutic findings in 18.42 % to 47.37 % of cases; and the therapeutic findings were considered pertinent and necessary but inadequate in 18.42 % of cases.
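As a reading aid, the reported percentages are consistent with whole-case counts out of the 38 cases (2.63 % ≈ 1/38, 18.42 % ≈ 7/38, 21.05 % ≈ 8/38, 47.37 % ≈ 18/38); the snippet below is a minimal arithmetic check of that correspondence, not part of the study's analysis.

```python
# Quick check: each reported percentage maps to a whole number of the
# 38 cases (e.g. 2.63 % ≈ 1/38, 47.37 % ≈ 18/38).
n_cases = 38
for pct in (2.63, 18.42, 21.05, 47.37):
    count = round(pct / 100 * n_cases)
    print(f"{pct:5.2f} % of {n_cases} cases ≈ {count} case(s) "
          f"= {count / n_cases:.2%}")
```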
Conclusions
ChatGPT appears less effective than the trainees at establishing the diagnosis, selecting the most appropriate additional examinations, and proposing pertinent and necessary therapeutic approaches.