Simon Bader, Michael O Schneider, Iason Psilopatis, Daniel Anetsberger, Julius Emons, Sven Kehl
[AI-supported decision-making in obstetrics - a feasibility study on the medical accuracy and reliability of ChatGPT]
Zeitschrift für Geburtshilfe und Neonatologie (Q4, Obstetrics & Gynecology), published 2024-10-14
DOI: 10.1055/a-2411-9516
Citations: 0
Abstract
The aim of this study is to investigate the feasibility of using artificial intelligence to interpret and apply medical guidelines in support of clinical decision-making in obstetrics. ChatGPT was provided with guidelines on specific obstetric issues. Using several clinical scenarios as examples, the AI was then evaluated for its ability to make accurate diagnoses and appropriate clinical decisions. The results varied: ChatGPT provided predominantly correct answers in some fictional scenarios but performed inadequately in others. Despite ChatGPT's ability to grasp complex medical information, the study revealed limitations in the precision and reliability of its interpretations and recommendations. These discrepancies highlight the need for careful review by healthcare professionals and underscore the importance of clear, unambiguous guideline recommendations. Furthermore, continuous technical development is required before artificial intelligence can serve as a supportive tool in clinical practice. Overall, while the use of AI in medicine shows promise, its susceptibility to errors and weaknesses in interpretation currently restrict its suitability to controlled scientific settings, in order to protect the safety and accuracy of patient care.