Joanne Igoli, Temidayo Osunronbi, O. Olukoya, Jeremiah Oluwatomi, Itodo Daniel, Hillary O. Alemenzohu, Alieu Kanu, Alex Mwangi Kihunyu, Ebuka Okeleke, Henry Oyoyo, Oluwatobi Shekoni, D. Jesuyajolu, Andrew F Alalade
{"title":"大型语言模型在标注神经外科 \"病例对照研究 \"和偏倚风险评估中的准确性:与人类审稿人进行的审稿人间一致性研究协议。","authors":"Joanne Igoli, Temidayo Osunronbi, O. Olukoya, Jeremiah Oluwatomi, Itodo Daniel, Hillary O. Alemenzohu, Alieu Kanu, Alex Mwangi Kihunyu, Ebuka Okeleke, Henry Oyoyo, Oluwatobi Shekoni, D. Jesuyajolu, Andrew F Alalade","doi":"10.1101/2024.08.11.24311830","DOIUrl":null,"url":null,"abstract":"Introduction: Accurate identification of study designs and risk of bias (RoB) assessment is crucial for evidence synthesis in research. However, mislabelling of case-control studies (CCS) is prevalent, leading to a downgraded quality of evidence. Large Language Models (LLMs), a form of artificial intelligence, have shown impressive performance in various medical tasks. Still, their utility and application in categorising study designs and assessing RoB needs to be further explored. This study will evaluate the performance of four publicly available LLMs (ChatGPT-3.5, ChatGPT-4, Claude 3 Sonnet, Claude 3 Opus) in accurately identifying CCS designs from the neurosurgical literature. Secondly, we will assess the human-LLM interrater agreement for RoB assessment of true CCS. Methods: We identified thirty-four top-ranking neurosurgical-focused journals and searched them on PubMed/MEDLINE for manuscripts reported as CCS in the title/abstract. Human reviewers will independently assess study designs and RoB using the Newcastle-Ottawa Scale. The methods sections/full-text articles will be provided to LLMs to determine study designs and assess RoB. Cohen's kappa will be used to evaluate human-human, human-LLM and LLM-LLM interrater agreement. Logistic regression will be used to assess study characteristics affecting performance. A p-value < 0.05 at a 95% confidence interval will be considered statistically significant. Conclusion If the human-LLM agreement is high, LLMs could become valuable teaching and quality assurance tools for critical appraisal in neurosurgery and other medical fields. 
This study will contribute to validating LLMs for specialised scientific tasks in evidence synthesis. This could lead to reduced review costs, faster completion, standardisation, and minimal errors in evidence synthesis.","PeriodicalId":18505,"journal":{"name":"medRxiv","volume":"40 9","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The accuracy of large language models in labelling neurosurgical 'case-control studies and risk of bias assessment: protocol for a study of interrater agreement with human reviewers.\",\"authors\":\"Joanne Igoli, Temidayo Osunronbi, O. Olukoya, Jeremiah Oluwatomi, Itodo Daniel, Hillary O. Alemenzohu, Alieu Kanu, Alex Mwangi Kihunyu, Ebuka Okeleke, Henry Oyoyo, Oluwatobi Shekoni, D. Jesuyajolu, Andrew F Alalade\",\"doi\":\"10.1101/2024.08.11.24311830\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Introduction: Accurate identification of study designs and risk of bias (RoB) assessment is crucial for evidence synthesis in research. However, mislabelling of case-control studies (CCS) is prevalent, leading to a downgraded quality of evidence. Large Language Models (LLMs), a form of artificial intelligence, have shown impressive performance in various medical tasks. Still, their utility and application in categorising study designs and assessing RoB needs to be further explored. This study will evaluate the performance of four publicly available LLMs (ChatGPT-3.5, ChatGPT-4, Claude 3 Sonnet, Claude 3 Opus) in accurately identifying CCS designs from the neurosurgical literature. Secondly, we will assess the human-LLM interrater agreement for RoB assessment of true CCS. Methods: We identified thirty-four top-ranking neurosurgical-focused journals and searched them on PubMed/MEDLINE for manuscripts reported as CCS in the title/abstract. 
Human reviewers will independently assess study designs and RoB using the Newcastle-Ottawa Scale. The methods sections/full-text articles will be provided to LLMs to determine study designs and assess RoB. Cohen's kappa will be used to evaluate human-human, human-LLM and LLM-LLM interrater agreement. Logistic regression will be used to assess study characteristics affecting performance. A p-value < 0.05 at a 95% confidence interval will be considered statistically significant. Conclusion If the human-LLM agreement is high, LLMs could become valuable teaching and quality assurance tools for critical appraisal in neurosurgery and other medical fields. This study will contribute to validating LLMs for specialised scientific tasks in evidence synthesis. This could lead to reduced review costs, faster completion, standardisation, and minimal errors in evidence synthesis.\",\"PeriodicalId\":18505,\"journal\":{\"name\":\"medRxiv\",\"volume\":\"40 9\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"medRxiv\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1101/2024.08.11.24311830\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"medRxiv","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2024.08.11.24311830","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The accuracy of large language models in labelling neurosurgical 'case-control studies' and risk of bias assessment: protocol for a study of interrater agreement with human reviewers.
Introduction: Accurate identification of study designs and risk of bias (RoB) assessment are crucial for evidence synthesis in research. However, mislabelling of case-control studies (CCS) is prevalent, leading to a downgraded quality of evidence. Large language models (LLMs), a form of artificial intelligence, have shown impressive performance on various medical tasks, but their utility in categorising study designs and assessing RoB needs further exploration. This study will evaluate the performance of four publicly available LLMs (ChatGPT-3.5, ChatGPT-4, Claude 3 Sonnet, Claude 3 Opus) in accurately identifying CCS designs in the neurosurgical literature. Secondly, we will assess human-LLM interrater agreement for RoB assessment of true CCS.

Methods: We identified thirty-four top-ranking neurosurgery-focused journals and searched them on PubMed/MEDLINE for manuscripts reported as CCS in the title/abstract. Human reviewers will independently assess study designs and RoB using the Newcastle-Ottawa Scale. The methods sections/full-text articles will be provided to the LLMs to determine study designs and assess RoB. Cohen's kappa will be used to evaluate human-human, human-LLM and LLM-LLM interrater agreement. Logistic regression will be used to assess study characteristics affecting performance. A p-value < 0.05 will be considered statistically significant; 95% confidence intervals will be reported.

Conclusion: If human-LLM agreement is high, LLMs could become valuable teaching and quality assurance tools for critical appraisal in neurosurgery and other medical fields. This study will contribute to validating LLMs for specialised scientific tasks in evidence synthesis, which could reduce review costs, speed completion, improve standardisation, and minimise errors.
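The interrater statistic named in the Methods, Cohen's kappa, corrects observed agreement for the agreement expected by chance from each rater's marginal label frequencies: kappa = (p_o - p_e) / (1 - p_e). A minimal sketch of that calculation is below; the labels and ratings are hypothetical illustrations, not data from this protocol.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal frequency per label.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings: human vs. LLM study-design labels
# ("CCS" = case-control study, "other" = any other design).
human = ["CCS", "CCS", "other", "CCS", "other", "other"]
llm   = ["CCS", "other", "other", "CCS", "other", "CCS"]
print(round(cohens_kappa(human, llm), 3))  # → 0.333
```

Here the raters agree on 4 of 6 items (p_o = 0.667) while chance alone predicts 0.5, giving a kappa of 0.333 — "fair" agreement on conventional benchmarks, well below the raw agreement figure.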