Exploring the Capabilities of ChatGPT in Women's Health

Magdalena Elisabeth Bachmann, Ioana Duta, Emily Mazey, William Cooke, Manu Vatish, Gabriel Davis Jones

medRxiv - Obstetrics and Gynecology, 28 February 2024. DOI: https://doi.org/10.1101/2024.02.27.23300005
Abstract
Introduction: Artificial Intelligence (AI) is redefining healthcare, with Large Language Models (LLMs) such as ChatGPT offering novel and powerful capabilities for processing and generating human-like information. These advances offer potential improvements in Women's Health, particularly in Obstetrics and Gynaecology (O&G), where diagnostic and treatment gaps have long existed. Despite its generalist nature, ChatGPT is increasingly being tested in healthcare, necessitating a critical analysis of its utility, limitations and safety. This study examines ChatGPT's performance in interpreting and responding to international gold-standard benchmark assessments in O&G: the Royal College of Obstetricians and Gynaecologists' (RCOG) MRCOG Part One and Part Two examinations. We evaluate ChatGPT's domain- and knowledge-area-specific accuracy, the influence of linguistic complexity on its performance, and its self-assessed confidence and uncertainty, all essential for safe clinical decision-making.

Methods: A dataset of MRCOG examination questions was developed from sources beyond the reach of LLMs to mitigate the risk of ChatGPT's prior exposure to the material. A dual-review process validated the technical and clinical accuracy of the questions, omitting those that depended on previous content, were duplicates, or required image interpretation. Single Best Answer (SBA) and Extended Matching Questions (EMQ) were converted to JSON format, incorporating question type and background information, to facilitate ChatGPT's interpretation. Interaction with ChatGPT was conducted via OpenAI's API, structured to ensure consistent, contextually informed responses. Each response was recorded and compared against the known correct answer. Linguistic complexity was evaluated using unique token counts and Type-Token Ratios (TTR; measures of vocabulary breadth and diversity) to explore their influence on performance. ChatGPT was also instructed to assign a confidence score (0-100%) to each answer, reflecting its self-perceived accuracy. Responses were categorised by correctness and analysed statistically, including through entropy calculations, to assess ChatGPT's capacity to self-evaluate its certainty and knowledge boundaries.
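The abstract does not include implementation details, so the following is only a minimal, illustrative Python sketch of the workflow it describes: one hypothetical SBA item represented as JSON and submitted through OpenAI's Chat Completions API with a request for an answer and a self-reported confidence score. The model name, prompt wording and response schema are assumptions, not the authors' actual protocol.

```python
# Illustrative sketch only (not the study's code): send one JSON-formatted SBA
# question to OpenAI's Chat Completions API and parse the answer and the
# model's self-reported confidence (0-100%).
import json
from openai import OpenAI  # official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical question record; real MRCOG items are not reproduced here.
question = {
    "question_type": "SBA",
    "background": "A 32-year-old woman presents at 28 weeks' gestation with ...",
    "question": "What is the most appropriate next step in management?",
    "options": {"A": "...", "B": "...", "C": "...", "D": "...", "E": "..."},
}

system_prompt = (
    "You are answering a Single Best Answer (SBA) question from the MRCOG "
    "examination. Reply with JSON containing 'answer' (one option letter) and "
    "'confidence' (0-100, your self-assessed likelihood of being correct)."
)

response = client.chat.completions.create(
    model="gpt-4",   # assumed model; not specified in the abstract
    temperature=0,   # favour consistent, reproducible responses
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": json.dumps(question)},
    ],
)

# Parsing assumes the model complied with the requested JSON format.
reply = json.loads(response.choices[0].message.content)
correct_option = "C"  # placeholder for the known correct answer
print(reply["answer"], reply["confidence"], reply["answer"] == correct_option)
```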
Findings: Of 1,824 MRCOG Part One and Part Two questions, ChatGPT's accuracy on Part One was 72.2% (95% CI 69.2-75.3). On Part Two it achieved 50.4% accuracy (95% CI 47.2-53.5; 534 of 989 questions correct), performing better on SBAs (54.0%, 95% CI 50.0-58.0) than on EMQs (45.0%, 95% CI 40.1-49.9). In domain-specific performance, the highest accuracy was in Biochemistry (79.8%, 95% CI 71.4-88.1) and the lowest in Biophysics (51.4%, 95% CI 35.2-67.5). The best-performing subject in Part Two was Urogynaecology (63.0%, 95% CI 50.1-75.8) and the worst was Management of Labour (35.6%, 95% CI 21.6-49.5). Linguistic complexity analysis showed a marginal increase in unique token count for correctly answered questions in Part One (median 122, IQR 114-134) compared with incorrectly answered ones (median 120, IQR 112-131; p=0.05). TTR analysis revealed higher medians for correct answers with negligible effect sizes (Part One: 0.66, IQR 0.63-0.68; Part Two: 0.62, IQR 0.57-0.67; p<0.001). Regarding self-assessed confidence, the median confidence for correct answers was 70.0% (IQR 60-90), the same as for incorrect answers the model judged correct (p<0.001). For correct answers the model deemed incorrect, the median confidence was 10.0% (IQR 0-10), and for incorrect answers it accurately identified as such, it was 5.0% (IQR 0-10; p<0.001). Entropy values were identical for correct and incorrect responses (median 1.46, IQR 0.44-1.77), indicating no discernible distinction in the certainty of ChatGPT's predictions.

Conclusions: ChatGPT demonstrated commendable accuracy on the basic medical questions of the MRCOG Part One, yet its performance was markedly reduced on the clinically demanding Part Two examination. The model's high self-confidence across both correct and incorrect responses necessitates scrutiny before it is applied to clinical decision-making. These findings suggest that while ChatGPT has potential, in its current form it requires significant refinement before it can enhance diagnostic efficacy and clinical workflow in Women's Health.
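For concreteness, the linguistic-complexity and uncertainty measures reported above can be illustrated with the short sketch below. The whitespace tokenisation and the binning of confidence scores used for the entropy calculation are assumptions; the abstract does not state how tokens were defined or how entropy was derived.

```python
# Illustrative sketch (not the study's code) of the descriptive metrics:
# unique token count, Type-Token Ratio (TTR) and Shannon entropy.
import math
from collections import Counter

def unique_token_count(text: str) -> int:
    """Number of distinct whitespace-delimited tokens in a question."""
    return len(set(text.lower().split()))

def type_token_ratio(text: str) -> float:
    """Distinct tokens divided by total tokens (vocabulary diversity)."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def shannon_entropy(confidences: list[float], bins: int = 10) -> float:
    """Shannon entropy (bits) of confidence scores (0-100%) grouped into
    equal-width bins; higher values indicate a more dispersed distribution."""
    counts = Counter(min(int(c * bins / 100), bins - 1) for c in confidences)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

question_text = "A 32-year-old woman presents at 28 weeks' gestation with ..."
print(unique_token_count(question_text))                 # unique token count
print(round(type_token_ratio(question_text), 2))         # TTR
print(round(shannon_entropy([70, 70, 90, 60, 10]), 2))   # entropy of confidences
```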