MedEdMENTOR AI: Can artificial intelligence help medical education researchers select theoretical constructs?
Gregory Ow, Adam Rodman, Geoffrey V Stetson
medRxiv - Medical Education, published 2023-11-17. DOI: 10.1101/2023.11.16.23298661
BACKGROUND: Medical education scholarship often lacks a strong theoretical underpinning, a gap that most often affects early-career researchers and researchers in the Global South. Large language models (LLMs) have shown considerable promise in augmenting human writing and creativity across a variety of settings. In this study, we describe the development of MedEdMENTOR, an online platform for medical education research with a library of over 250 theories, and the development and evaluation of MedEdMENTOR AI, an LLM incorporating knowledge from MedEdMENTOR and the first AI mentor for medical education research.

METHODS: Working from a postpositivist paradigm, we evaluated MedEdMENTOR AI against six months of qualitative research published in 24 core medical education journals. In a blinded fashion, we presented MedEdMENTOR AI with only the phenomenon of each qualitative study and asked it to recommend five theories that could be used to study that phenomenon.

RESULTS: For 55% of studies (29 of 53), MedEdMENTOR AI recommended the actual theoretical construct chosen in the respective qualitative study.

CONCLUSIONS: Our data are preliminary, but they suggest that MedEdMENTOR AI and other LLMs can be highly effective in guiding medical education scholars toward theories that may be applicable to their research. Further research is needed to assess performance on other tasks in medical education research.
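The evaluation described above amounts to a top-5 recall measure: a recommendation counts as a hit if the theory actually used in the published study appears among the model's five suggestions, and 29 hits out of 53 studies yields the reported 55%. The sketch below (not the authors' code; study names and theories are illustrative placeholders, not data from the paper) shows one minimal way such a metric could be computed.

```python
def top_k_hit_rate(recommendations, ground_truth, k=5):
    """Fraction of studies whose actual theory appears among the
    model's top-k recommended theories (case-insensitive match)."""
    hits = 0
    for study, recs in recommendations.items():
        actual = ground_truth[study].lower()
        if actual in (r.lower() for r in recs[:k]):
            hits += 1
    return hits / len(recommendations)


# Illustrative example with made-up studies and recommendations:
recs = {
    "study_1": ["Communities of Practice", "Self-Determination Theory",
                "Cognitive Load Theory", "Transformative Learning",
                "Situated Learning"],
    "study_2": ["Experiential Learning", "Activity Theory",
                "Threshold Concepts", "Social Cognitive Theory",
                "Professional Identity Formation"],
}
truth = {"study_1": "Cognitive Load Theory", "study_2": "Andragogy"}

print(top_k_hit_rate(recs, truth))  # 1 hit out of 2 -> 0.5
```

At the scale reported in the study, 29 hits across 53 studies gives 29/53 ≈ 0.547, which rounds to the 55% stated in the results.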