Tamir E Bresler, Shivam Pandya, Ryan Meyer, Zin Htway, Manabu Fujita
{"title":"从字节到最佳实践:追溯 ChatGPT-3.5 的发展历程以及与《美国国家综合癌症网络® 胰腺腺癌管理指南》(National Comprehensive Cancer Network® Guidelines in Pancreatic Adenocarcinoma Management)的一致性。","authors":"Tamir E Bresler, Shivam Pandya, Ryan Meyer, Zin Htway, Manabu Fujita","doi":"10.1177/00031348241248801","DOIUrl":null,"url":null,"abstract":"INTRODUCTION\nArtificial intelligence continues to play an increasingly important role in modern health care. ChatGPT-3.5 (OpenAI, San Francisco, CA) has gained attention for its potential impact in this domain.\n\n\nOBJECTIVE\nTo explore the role of ChatGPT-3.5 in guiding clinical decision-making specifically in the context of pancreatic adenocarcinoma and to assess its growth over a period of time.\n\n\nPARTICIPANTS\nWe reviewed the National Comprehensive Cancer Network® (NCCN) Clinical Practice Guidelines for the Management of Pancreatic Adenocarcinoma and formulated a complex clinical question for each decision-making page. ChatGPT-3.5 was queried in a reproducible fashion. We scored answers on the following Likert scale: 5) Correct; 4) Correct, with missing information requiring clarification; 3) Correct, but unable to complete answer; 2) Partially incorrect; 1) Absolutely incorrect. We repeated this protocol at 3-months. Score frequencies were compared, and subgroup analysis was conducted on Correctness (defined as scores 1-2 vs 3-5) and Accuracy (scores 1-3 vs 4-5).\n\n\nRESULTS\nIn total, 50-pages of the NCCN Guidelines® were analyzed, generating 50 complex clinical questions. On subgroup analysis, the percentage of Acceptable answers improved from 60% to 76%. The score improvement was statistically significant (Mann-Whitney U-test; Mean Rank = 44.52 vs 56.48, P = .027).\n\n\nCONCLUSION\nChatGPT-3.5 represents an interesting but limited tool for assistance in clinical decision-making. We demonstrate that the platform evolved, and its responses to our standardized questions improved over a relatively short period (3-months). 
Future research is needed to determine the validity of this tool for this clinical application.","PeriodicalId":325363,"journal":{"name":"The American Surgeon","volume":"7 46","pages":"31348241248801"},"PeriodicalIF":0.0000,"publicationDate":"2024-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"From Bytes to Best Practices: Tracing ChatGPT-3.5's Evolution and Alignment With the National Comprehensive Cancer Network® Guidelines in Pancreatic Adenocarcinoma Management.\",\"authors\":\"Tamir E Bresler, Shivam Pandya, Ryan Meyer, Zin Htway, Manabu Fujita\",\"doi\":\"10.1177/00031348241248801\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"INTRODUCTION\\nArtificial intelligence continues to play an increasingly important role in modern health care. ChatGPT-3.5 (OpenAI, San Francisco, CA) has gained attention for its potential impact in this domain.\\n\\n\\nOBJECTIVE\\nTo explore the role of ChatGPT-3.5 in guiding clinical decision-making specifically in the context of pancreatic adenocarcinoma and to assess its growth over a period of time.\\n\\n\\nPARTICIPANTS\\nWe reviewed the National Comprehensive Cancer Network® (NCCN) Clinical Practice Guidelines for the Management of Pancreatic Adenocarcinoma and formulated a complex clinical question for each decision-making page. ChatGPT-3.5 was queried in a reproducible fashion. We scored answers on the following Likert scale: 5) Correct; 4) Correct, with missing information requiring clarification; 3) Correct, but unable to complete answer; 2) Partially incorrect; 1) Absolutely incorrect. We repeated this protocol at 3-months. Score frequencies were compared, and subgroup analysis was conducted on Correctness (defined as scores 1-2 vs 3-5) and Accuracy (scores 1-3 vs 4-5).\\n\\n\\nRESULTS\\nIn total, 50-pages of the NCCN Guidelines® were analyzed, generating 50 complex clinical questions. 
On subgroup analysis, the percentage of Acceptable answers improved from 60% to 76%. The score improvement was statistically significant (Mann-Whitney U-test; Mean Rank = 44.52 vs 56.48, P = .027).\\n\\n\\nCONCLUSION\\nChatGPT-3.5 represents an interesting but limited tool for assistance in clinical decision-making. We demonstrate that the platform evolved, and its responses to our standardized questions improved over a relatively short period (3-months). Future research is needed to determine the validity of this tool for this clinical application.\",\"PeriodicalId\":325363,\"journal\":{\"name\":\"The American Surgeon\",\"volume\":\"7 46\",\"pages\":\"31348241248801\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-04-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The American Surgeon\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/00031348241248801\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The American Surgeon","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/00031348241248801","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
From Bytes to Best Practices: Tracing ChatGPT-3.5's Evolution and Alignment With the National Comprehensive Cancer Network® Guidelines in Pancreatic Adenocarcinoma Management.
INTRODUCTION
Artificial intelligence continues to play an increasingly important role in modern health care. ChatGPT-3.5 (OpenAI, San Francisco, CA) has gained attention for its potential impact in this domain.
OBJECTIVE
To explore the role of ChatGPT-3.5 in guiding clinical decision-making, specifically in the context of pancreatic adenocarcinoma, and to assess how its performance changed over time.
PARTICIPANTS
We reviewed the National Comprehensive Cancer Network® (NCCN) Clinical Practice Guidelines for the Management of Pancreatic Adenocarcinoma and formulated a complex clinical question for each decision-making page. ChatGPT-3.5 was queried in a reproducible fashion. We scored answers on the following Likert scale: 5) Correct; 4) Correct, with missing information requiring clarification; 3) Correct, but unable to complete answer; 2) Partially incorrect; 1) Absolutely incorrect. We repeated this protocol after 3 months. Score frequencies were compared, and subgroup analyses were conducted on Correctness (scores 1-2 vs 3-5) and Accuracy (scores 1-3 vs 4-5).
RESULTS
In total, 50 pages of the NCCN Guidelines® were analyzed, generating 50 complex clinical questions. On subgroup analysis, the percentage of Acceptable answers improved from 60% to 76%. The score improvement was statistically significant (Mann-Whitney U-test; Mean Rank = 44.52 vs 56.48, P = .027).
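The comparison above applies a Mann-Whitney U test to the two rounds of Likert scores and reports mean ranks for each round. As a minimal illustration of the rank-based mechanics of that test (this is not the authors' code, and the scores below are invented, not the study data), the U statistic and mean ranks can be computed in pure Python:

```python
def rank_with_ties(values):
    """Assign average 1-based ranks to a combined sample, averaging ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def mann_whitney_u(sample_a, sample_b):
    """Return (U for sample_a, mean rank of a, mean rank of b)."""
    combined = list(sample_a) + list(sample_b)
    ranks = rank_with_ties(combined)
    n_a = len(sample_a)
    rank_sum_a = sum(ranks[:n_a])
    rank_sum_b = sum(ranks[n_a:])
    u_a = rank_sum_a - n_a * (n_a + 1) / 2
    return u_a, rank_sum_a / n_a, rank_sum_b / len(sample_b)


# Illustrative only: two hypothetical rounds of Likert scores (1-5).
baseline = [3, 2, 5, 4, 1, 3, 2, 4]
followup = [4, 3, 5, 5, 2, 4, 3, 5]
u, mean_rank_base, mean_rank_follow = mann_whitney_u(baseline, followup)
```

In practice the p-value would come from a standard statistical package (e.g. `scipy.stats.mannwhitneyu`), which adds the normal approximation with tie correction on top of this U statistic.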
CONCLUSION
ChatGPT-3.5 represents an interesting but limited tool for assistance in clinical decision-making. We demonstrate that the platform evolved, and its responses to our standardized questions improved, over a relatively short period (3 months). Future research is needed to determine the validity of this tool for this clinical application.