Using artificial intelligence in medical school admissions screening to decrease inter- and intra-observer variability.
Graham Keir, Willie Hu, Christopher G Filippi, Lisa Ellenbogen, Rona Woldenberg
JAMIA Open, 2023 Apr;6(1):ooad011. doi: 10.1093/jamiaopen/ooad011. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9936956/pdf/
Objectives: Inter- and intra-observer variability is a concern for medical school admissions. Artificial intelligence (AI) may present an opportunity to apply a fair standard to all applicants systematically and yet maintain sensitivity to nuances that have been a part of traditional screening methods.
Material and methods: Data from 5 years of medical school applications were retrospectively accrued and analyzed. The applicants (m = 22 258) were split 60%-20%-20% into a training set (m = 13 354), validation set (m = 4452), and test set (m = 4452). An AI model was trained and evaluated with the ground truth being whether a given applicant was invited for an interview. In addition, a "real-world" evaluation was conducted simultaneously within an admissions cycle to observe how the model would perform if used in practice.
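To make the data-splitting and training setup concrete, here is a minimal sketch, not the authors' code: it assumes preprocessed tabular application features `X` and binary labels `y` (1 = invited to interview), and uses an illustrative gradient-boosted classifier; the paper's actual features and model are not specified here. The split is done in two stages so the final proportions are 60% train, 20% validation, 20% test.

```python
# Minimal sketch (not the authors' pipeline): 60%-20%-20% split of applicant
# records and a binary classifier trained on "invited to interview" labels.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

SEED = 42  # fixed seed so the split is reproducible

# Placeholder data standing in for 22 258 applicants' screening features.
X = np.random.rand(22_258, 20)
y = np.random.randint(0, 2, size=22_258)

# First hold out 20% as the test set, then split the remainder 75%/25%
# so the overall proportions are 60% train / 20% validation / 20% test.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.20, random_state=SEED, stratify=y
)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=SEED, stratify=y_rest
)

# Illustrative model choice; ground truth is the interview-invitation label.
model = GradientBoostingClassifier(random_state=SEED)
model.fit(X_train, y_train)

print(f"train accuracy:      {model.score(X_train, y_train):.2f}")
print(f"validation accuracy: {model.score(X_val, y_val):.2f}")
```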
Results: The algorithm had an accuracy of 95% on the training set, 88% on the validation set, and 88% on the test set. The area under the curve on the test set was 0.93. SHapley Additive exPlanations (SHAP) values demonstrated that the model weighs features in a manner concordant with current admissions rubrics. With a combined human and AI evaluation process, accuracy on the "real-world" evaluation was 96%, with a negative predictive value of 0.97.
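The metrics reported above (accuracy, area under the curve, negative predictive value) and SHAP-based feature attributions can be computed as in the following sketch. It continues the hypothetical `model`, `X_test`, and `y_test` from the previous block; the `shap` TreeExplainer usage is an assumption for illustration, not necessarily how the authors produced their SHAP values.

```python
# Sketch of the reported evaluation metrics plus SHAP feature attributions.
import numpy as np
import shap
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]

print(f"test accuracy: {accuracy_score(y_test, y_pred):.2f}")
print(f"test AUC:      {roc_auc_score(y_test, y_prob):.2f}")

# Negative predictive value: of applicants the model screens out,
# what fraction were truly not invited to interview?
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"NPV: {tn / (tn + fn):.2f}")

# SHAP values attribute each prediction to individual features, which is
# how concordance with existing admissions rubrics can be inspected.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0).round(3))
```

In a combined human-and-AI workflow of the kind described, a high negative predictive value matters most, since applicants screened out by the model are the ones least likely to receive further human review.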
Discussion and conclusion: These results demonstrate the feasibility of an AI approach applied to medical school admissions screening decision-making. Model explainability and supplemental analyses help ensure that the model makes decisions as intended.