Gage A Guerra, Sophie Grove, Jonathan Le, Hayden L Hofmann, Ishan Shah, Sweta Bhagavatula, Benjamin Fixman, David Gomez, Benjamin Hopkins, Jonathan Dallas, Giovanni Cacciamani, Racheal Peterson, Gabriel Zada
{"title":"人工智能作为一种模式,可提高神经外科文献对患者的可读性。","authors":"Gage A Guerra, Sophie Grove, Jonathan Le, Hayden L Hofmann, Ishan Shah, Sweta Bhagavatula, Benjamin Fixman, David Gomez, Benjamin Hopkins, Jonathan Dallas, Giovanni Cacciamani, Racheal Peterson, Gabriel Zada","doi":"10.3171/2024.6.JNS24617","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>In this study the authors assessed the ability of Chat Generative Pretrained Transformer (ChatGPT) 3.5 and ChatGPT4 to generate readable and accurate summaries of published neurosurgical literature.</p><p><strong>Methods: </strong>Abstracts published in journal issues released between June 2023 and August 2023 (n = 150) were randomly selected from the top 5 ranked neurosurgical journals according to Google Scholar. ChatGPT models were instructed to generate a readable layperson summary of the original abstract from a statistically validated prompt. Readability results and grade-level indicators (RR-GLIs) scores were calculated for GPT3.5- and GPT4-generated summaries and original abstracts. Two physicians independently rated the accuracy of ChatGPT-generated layperson summaries to assess scientific validity. One-way ANOVA followed by pairwise t-test with Bonferroni correction were performed to compare readability scores. Cohen's kappa was used to assess interrater agreement between the two rater physicians.</p><p><strong>Results: </strong>Analysis of 150 original abstracts showed a statistically significant difference for all RR-GLIs between the ChatGPT-generated summaries and original abstracts. The readability scores are formatted as follows (original abstract mean, GPT3.5 summary mean, GPT4 summary mean, p value): Flesch-Kincaid reading grade (12.55, 7.80, 7.70, p < 0.0001); Gunning fog score (15.46, 10.00, 9.00, p < 0.0001); Simple Measure of Gobbledygook (SMOG) index (11.30, 7.13, 6.60, p < 0.0001); Coleman-Liau index (14.67, 11.32, 10.26, p < 0.0001); automated readability index (10.87, 8.50, 7.75, p < 0.0001); and Flesch-Kincaid reading ease (33.29, 68.45, 69.55, p < 0.0001). GPT4-generated summaries demonstrated higher RR-GLIs than GPT3.5-generated summaries in the following categories: Gunning fog score (0.0003); SMOG index (0.027); Coleman-Liau index (< 0.0001); sentences (< 0.0001); complex words (< 0.0001); and % complex words (0.0035). A total of 68.4% and 84.2% of GPT3.5- and GPT4-generated summaries, respectively, maintained moderate scientific accuracy according to the two physician-reviewers.</p><p><strong>Conclusions: </strong>The findings demonstrate promising potential for application of the ChatGPT in patient education. GPT4 is an accessible tool that can be an immediate solution to enhancing the readability of current neurosurgical literature. 
Layperson summaries generated by GPT4 would be a valuable addition to a neurosurgical journal and would be likely to improve comprehension for patients using internet resources like PubMed.</p>","PeriodicalId":16505,"journal":{"name":"Journal of neurosurgery","volume":" ","pages":"1-7"},"PeriodicalIF":3.5000,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artificial intelligence as a modality to enhance the readability of neurosurgical literature for patients.\",\"authors\":\"Gage A Guerra, Sophie Grove, Jonathan Le, Hayden L Hofmann, Ishan Shah, Sweta Bhagavatula, Benjamin Fixman, David Gomez, Benjamin Hopkins, Jonathan Dallas, Giovanni Cacciamani, Racheal Peterson, Gabriel Zada\",\"doi\":\"10.3171/2024.6.JNS24617\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objective: </strong>In this study the authors assessed the ability of Chat Generative Pretrained Transformer (ChatGPT) 3.5 and ChatGPT4 to generate readable and accurate summaries of published neurosurgical literature.</p><p><strong>Methods: </strong>Abstracts published in journal issues released between June 2023 and August 2023 (n = 150) were randomly selected from the top 5 ranked neurosurgical journals according to Google Scholar. ChatGPT models were instructed to generate a readable layperson summary of the original abstract from a statistically validated prompt. Readability results and grade-level indicators (RR-GLIs) scores were calculated for GPT3.5- and GPT4-generated summaries and original abstracts. Two physicians independently rated the accuracy of ChatGPT-generated layperson summaries to assess scientific validity. One-way ANOVA followed by pairwise t-test with Bonferroni correction were performed to compare readability scores. Cohen's kappa was used to assess interrater agreement between the two rater physicians.</p><p><strong>Results: </strong>Analysis of 150 original abstracts showed a statistically significant difference for all RR-GLIs between the ChatGPT-generated summaries and original abstracts. The readability scores are formatted as follows (original abstract mean, GPT3.5 summary mean, GPT4 summary mean, p value): Flesch-Kincaid reading grade (12.55, 7.80, 7.70, p < 0.0001); Gunning fog score (15.46, 10.00, 9.00, p < 0.0001); Simple Measure of Gobbledygook (SMOG) index (11.30, 7.13, 6.60, p < 0.0001); Coleman-Liau index (14.67, 11.32, 10.26, p < 0.0001); automated readability index (10.87, 8.50, 7.75, p < 0.0001); and Flesch-Kincaid reading ease (33.29, 68.45, 69.55, p < 0.0001). GPT4-generated summaries demonstrated higher RR-GLIs than GPT3.5-generated summaries in the following categories: Gunning fog score (0.0003); SMOG index (0.027); Coleman-Liau index (< 0.0001); sentences (< 0.0001); complex words (< 0.0001); and % complex words (0.0035). A total of 68.4% and 84.2% of GPT3.5- and GPT4-generated summaries, respectively, maintained moderate scientific accuracy according to the two physician-reviewers.</p><p><strong>Conclusions: </strong>The findings demonstrate promising potential for application of the ChatGPT in patient education. GPT4 is an accessible tool that can be an immediate solution to enhancing the readability of current neurosurgical literature. 
Layperson summaries generated by GPT4 would be a valuable addition to a neurosurgical journal and would be likely to improve comprehension for patients using internet resources like PubMed.</p>\",\"PeriodicalId\":16505,\"journal\":{\"name\":\"Journal of neurosurgery\",\"volume\":\" \",\"pages\":\"1-7\"},\"PeriodicalIF\":3.5000,\"publicationDate\":\"2024-11-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of neurosurgery\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.3171/2024.6.JNS24617\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"CLINICAL NEUROLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of neurosurgery","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.3171/2024.6.JNS24617","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CLINICAL NEUROLOGY","Score":null,"Total":0}
Artificial intelligence as a modality to enhance the readability of neurosurgical literature for patients.
Objective: In this study, the authors assessed the ability of Chat Generative Pretrained Transformer (ChatGPT) 3.5 and ChatGPT4 to generate readable and accurate summaries of published neurosurgical literature.
Methods: Abstracts published in journal issues released between June 2023 and August 2023 (n = 150) were randomly selected from the top 5 ranked neurosurgical journals according to Google Scholar. ChatGPT models were instructed to generate a readable layperson summary of each original abstract from a statistically validated prompt. Readability results and grade-level indicator (RR-GLI) scores were calculated for GPT3.5- and GPT4-generated summaries and original abstracts. Two physicians independently rated the accuracy of ChatGPT-generated layperson summaries to assess scientific validity. A one-way ANOVA followed by pairwise t-tests with Bonferroni correction was performed to compare readability scores, and Cohen's kappa was used to assess interrater agreement between the two physician raters.
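The abstract does not specify which software was used to compute the readability results and grade-level indicator scores. As a rough, hypothetical sketch of that step, the Python snippet below computes the six indices reported in the Results using the open-source textstat package; the package choice and the example passages are assumptions, not part of the study.

```python
# Sketch: computing the six readability indices reported in the study for an
# original abstract versus a ChatGPT-generated layperson summary.
# Assumes the open-source `textstat` package (pip install textstat); the paper
# does not state which readability tool was actually used.
import textstat

METRICS = {
    "Flesch-Kincaid reading grade": textstat.flesch_kincaid_grade,
    "Gunning fog score": textstat.gunning_fog,
    "SMOG index": textstat.smog_index,
    "Coleman-Liau index": textstat.coleman_liau_index,
    "Automated readability index": textstat.automated_readability_index,
    "Flesch-Kincaid reading ease": textstat.flesch_reading_ease,
}

def readability_profile(text: str) -> dict:
    """Return all six readability scores for one passage."""
    return {name: fn(text) for name, fn in METRICS.items()}

if __name__ == "__main__":
    # Placeholder passages, not taken from the study's abstracts.
    original = ("Intraoperative neuromonitoring was utilized to mitigate "
                "the risk of postoperative neurological deficits.")
    summary = ("Doctors watched nerve signals during surgery to lower the "
               "chance of problems after the operation.")
    for label, passage in [("original abstract", original), ("GPT summary", summary)]:
        print(label, readability_profile(passage))
```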
Results: Analysis of 150 original abstracts showed a statistically significant difference in all RR-GLIs between the ChatGPT-generated summaries and the original abstracts. Readability scores are reported as (original abstract mean, GPT3.5 summary mean, GPT4 summary mean, p value): Flesch-Kincaid reading grade (12.55, 7.80, 7.70, p < 0.0001); Gunning fog score (15.46, 10.00, 9.00, p < 0.0001); Simple Measure of Gobbledygook (SMOG) index (11.30, 7.13, 6.60, p < 0.0001); Coleman-Liau index (14.67, 11.32, 10.26, p < 0.0001); automated readability index (10.87, 8.50, 7.75, p < 0.0001); and Flesch-Kincaid reading ease (33.29, 68.45, 69.55, p < 0.0001). GPT4-generated summaries demonstrated higher RR-GLIs than GPT3.5-generated summaries in the following categories: Gunning fog score (p = 0.0003); SMOG index (p = 0.027); Coleman-Liau index (p < 0.0001); sentences (p < 0.0001); complex words (p < 0.0001); and % complex words (p = 0.0035). A total of 68.4% and 84.2% of GPT3.5- and GPT4-generated summaries, respectively, maintained moderate scientific accuracy according to the two physician reviewers.
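For illustration only, the sketch below mirrors the statistical workflow described in the Methods: a one-way ANOVA across the original, GPT3.5, and GPT4 scores, Bonferroni-corrected pairwise t-tests, and Cohen's kappa for agreement between the two physician raters. It uses SciPy and scikit-learn with placeholder numbers, not the study data.

```python
# Sketch of the statistical comparisons described in the Methods. The group
# values and rater labels below are illustrative placeholders only.
from itertools import combinations
from scipy.stats import f_oneway, ttest_ind
from sklearn.metrics import cohen_kappa_score

groups = {
    "original": [12.1, 13.0, 12.8, 11.9],  # e.g., Flesch-Kincaid reading grades
    "gpt35":    [7.9, 8.1, 7.5, 7.6],
    "gpt4":     [7.8, 7.4, 7.6, 7.7],
}

# Omnibus one-way ANOVA across the three groups.
f_stat, p_anova = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")

# Pairwise t-tests with Bonferroni correction (3 comparisons).
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t_stat, p_raw = ttest_ind(groups[a], groups[b])
    p_adj = min(p_raw * len(pairs), 1.0)
    print(f"{a} vs {b}: t = {t_stat:.2f}, Bonferroni-adjusted p = {p_adj:.4g}")

# Interrater agreement between the two physician accuracy ratings
# (1 = moderately accurate, 0 = not).
rater1 = [1, 1, 0, 1, 1]
rater2 = [1, 0, 0, 1, 1]
print("Cohen's kappa:", cohen_kappa_score(rater1, rater2))
```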
Conclusions: The findings demonstrate promising potential for the application of ChatGPT in patient education. GPT4 is an accessible tool that offers an immediate means of enhancing the readability of current neurosurgical literature. Layperson summaries generated by GPT4 would be a valuable addition to a neurosurgical journal and would be likely to improve comprehension for patients using internet resources such as PubMed.
Journal description:
The Journal of Neurosurgery, Journal of Neurosurgery: Spine, Journal of Neurosurgery: Pediatrics, and Neurosurgical Focus are devoted to the publication of original works relating primarily to neurosurgery, including studies in clinical neurophysiology, organic neurology, ophthalmology, radiology, pathology, and molecular biology. The Editors and Editorial Boards encourage submission of clinical and laboratory studies. Other manuscripts accepted for review include technical notes on instruments or equipment that are innovative or useful to clinicians and researchers in the field of neuroscience; papers describing unusual cases; manuscripts on historical persons or events related to neurosurgery; and in Neurosurgical Focus, occasional reviews. Letters to the Editor commenting on articles recently published in the Journal of Neurosurgery, Journal of Neurosurgery: Spine, and Journal of Neurosurgery: Pediatrics are welcome.