Yasir Abdelgadir, Charat Thongprayoon, Jing Miao, Supawadee Suppadungsuk, Justin H Pham, Michael A Mao, Iasmina M Craici, Wisit Cheungpasitporn
{"title":"肾脏病学中的人工智能集成:评估 ChatGPT 在准确记录 ICD-10 和编码方面的作用。","authors":"Yasir Abdelgadir, Charat Thongprayoon, Jing Miao, Supawadee Suppadungsuk, Justin H Pham, Michael A Mao, Iasmina M Craici, Wisit Cheungpasitporn","doi":"10.3389/frai.2024.1457586","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Accurate ICD-10 coding is crucial for healthcare reimbursement, patient care, and research. AI implementation, like ChatGPT, could improve coding accuracy and reduce physician burden. This study assessed ChatGPT's performance in identifying ICD-10 codes for nephrology conditions through case scenarios for pre-visit testing.</p><p><strong>Methods: </strong>Two nephrologists created 100 simulated nephrology cases. ChatGPT versions 3.5 and 4.0 were evaluated by comparing AI-generated ICD-10 codes against predetermined correct codes. Assessments were conducted in two rounds, 2 weeks apart, in April 2024.</p><p><strong>Results: </strong>In the first round, the accuracy of ChatGPT for assigning correct diagnosis codes was 91 and 99% for version 3.5 and 4.0, respectively. In the second round, the accuracy of ChatGPT for assigning the correct diagnosis code was 87% for version 3.5 and 99% for version 4.0. ChatGPT 4.0 had higher accuracy than ChatGPT 3.5 (<i>p</i> = 0.02 and 0.002 for the first and second round respectively). The accuracy did not significantly differ between the two rounds (<i>p</i> > 0.05).</p><p><strong>Conclusion: </strong>ChatGPT 4.0 can significantly improve ICD-10 coding accuracy in nephrology through case scenarios for pre-visit testing, potentially reducing healthcare professionals' workload. 
However, the small error percentage underscores the need for ongoing review and improvement of AI systems to ensure accurate reimbursement, optimal patient care, and reliable research data.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1457586"},"PeriodicalIF":3.0000,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11402808/pdf/","citationCount":"0","resultStr":"{\"title\":\"AI integration in nephrology: evaluating ChatGPT for accurate ICD-10 documentation and coding.\",\"authors\":\"Yasir Abdelgadir, Charat Thongprayoon, Jing Miao, Supawadee Suppadungsuk, Justin H Pham, Michael A Mao, Iasmina M Craici, Wisit Cheungpasitporn\",\"doi\":\"10.3389/frai.2024.1457586\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Accurate ICD-10 coding is crucial for healthcare reimbursement, patient care, and research. AI implementation, like ChatGPT, could improve coding accuracy and reduce physician burden. This study assessed ChatGPT's performance in identifying ICD-10 codes for nephrology conditions through case scenarios for pre-visit testing.</p><p><strong>Methods: </strong>Two nephrologists created 100 simulated nephrology cases. ChatGPT versions 3.5 and 4.0 were evaluated by comparing AI-generated ICD-10 codes against predetermined correct codes. Assessments were conducted in two rounds, 2 weeks apart, in April 2024.</p><p><strong>Results: </strong>In the first round, the accuracy of ChatGPT for assigning correct diagnosis codes was 91 and 99% for version 3.5 and 4.0, respectively. In the second round, the accuracy of ChatGPT for assigning the correct diagnosis code was 87% for version 3.5 and 99% for version 4.0. ChatGPT 4.0 had higher accuracy than ChatGPT 3.5 (<i>p</i> = 0.02 and 0.002 for the first and second round respectively). 
The accuracy did not significantly differ between the two rounds (<i>p</i> > 0.05).</p><p><strong>Conclusion: </strong>ChatGPT 4.0 can significantly improve ICD-10 coding accuracy in nephrology through case scenarios for pre-visit testing, potentially reducing healthcare professionals' workload. However, the small error percentage underscores the need for ongoing review and improvement of AI systems to ensure accurate reimbursement, optimal patient care, and reliable research data.</p>\",\"PeriodicalId\":33315,\"journal\":{\"name\":\"Frontiers in Artificial Intelligence\",\"volume\":\"7 \",\"pages\":\"1457586\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-09-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11402808/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3389/frai.2024.1457586\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/frai.2024.1457586","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
AI integration in nephrology: evaluating ChatGPT for accurate ICD-10 documentation and coding.
Background: Accurate ICD-10 coding is crucial for healthcare reimbursement, patient care, and research. AI tools such as ChatGPT could improve coding accuracy and reduce physician burden. This study assessed ChatGPT's performance in identifying ICD-10 codes for nephrology conditions using simulated case scenarios designed for pre-visit testing.
Methods: Two nephrologists created 100 simulated nephrology cases. ChatGPT versions 3.5 and 4.0 were evaluated by comparing AI-generated ICD-10 codes against predetermined correct codes. Assessments were conducted in two rounds, 2 weeks apart, in April 2024.
Results: In the first round, the accuracy of ChatGPT in assigning correct diagnosis codes was 91% for version 3.5 and 99% for version 4.0. In the second round, accuracy was 87% for version 3.5 and 99% for version 4.0. ChatGPT 4.0 had significantly higher accuracy than ChatGPT 3.5 (p = 0.02 and p = 0.002 for the first and second rounds, respectively). Accuracy did not differ significantly between the two rounds for either version (p > 0.05).
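The comparison above can be reproduced from the reported counts. The abstract does not state which statistical test the authors used, so the sketch below assumes a pooled two-proportion z-test; the exact p-values may therefore differ slightly from those reported (e.g., if the original analysis used Fisher's exact test or McNemar's test on paired cases).

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided pooled two-proportion z-test.

    x1, x2: number of correctly coded cases for each model version
    n1, n2: total cases evaluated (100 per round in this study)
    Returns the two-sided p-value from the standard normal CDF.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Phi(z) via the error function; two-sided tail probability
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Round 1: ChatGPT 3.5 (91/100 correct) vs ChatGPT 4.0 (99/100 correct)
p_round1 = two_proportion_z(91, 100, 99, 100)

# Round 2: ChatGPT 3.5 (87/100) vs ChatGPT 4.0 (99/100)
p_round2 = two_proportion_z(87, 100, 99, 100)

# Between-round stability of ChatGPT 3.5: 91/100 vs 87/100
p_rounds_35 = two_proportion_z(91, 100, 87, 100)
```

Under this test, both version comparisons are significant at the 0.05 level, while the round-to-round difference for version 3.5 is not, matching the direction of the reported findings.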
Conclusion: ChatGPT 4.0 can substantially improve ICD-10 coding accuracy in nephrology when applied to simulated case scenarios for pre-visit testing, potentially reducing healthcare professionals' workload. However, the small but nonzero error rate underscores the need for ongoing review and improvement of AI systems to ensure accurate reimbursement, optimal patient care, and reliable research data.