Alexander Ngo, Saumya Gupta, Oliver Perrine, Rithik Reddy, Sherry Ershadi, Daniel Remick MD
{"title":"ChatGPT 3.5 无法编写适当的多选模拟试题","authors":"Alexander Ngo , Saumya Gupta , Oliver Perrine, Rithik Reddy, Sherry Ershadi, Daniel Remick MD","doi":"10.1016/j.acpath.2023.100099","DOIUrl":null,"url":null,"abstract":"<div><p>Artificial intelligence (AI) may have a profound impact on traditional teaching in academic settings. Multiple concerns have been raised, especially related to using ChatGPT for creating <em>de novo</em> essays. However, AI programs such as ChatGPT may augment teaching techniques. In this article, we used ChatGPT 3.5 to create 60 multiple choice questions. Author written text was uploaded and ChatGPT asked to create multiple choice questions with an explanation for the correct answer and explanations for the incorrect answers. Unfortunately, ChatGPT only generated correct questions and answers with explanations in 32 % of the questions (19 out of 60). In many instances, ChatGPT failed to provide an explanation for the incorrect answers. An additional 25 % of the questions had answers that were either wrong or misleading. A grade of 32 % would be considered failing in most courses. Despite these issues, instructors may still find ChatGPT useful for creating practice exams with explanations—with the caveat that extensive editing may be required.</p></div>","PeriodicalId":44927,"journal":{"name":"Academic Pathology","volume":null,"pages":null},"PeriodicalIF":1.2000,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2374289523000313/pdfft?md5=a10c566ba8cc29483c2eb3deca716a25&pid=1-s2.0-S2374289523000313-main.pdf","citationCount":"0","resultStr":"{\"title\":\"ChatGPT 3.5 fails to write appropriate multiple choice practice exam questions\",\"authors\":\"Alexander Ngo , Saumya Gupta , Oliver Perrine, Rithik Reddy, Sherry Ershadi, Daniel Remick MD\",\"doi\":\"10.1016/j.acpath.2023.100099\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Artificial intelligence (AI) may have a profound impact on traditional teaching in academic settings. Multiple concerns have been raised, especially related to using ChatGPT for creating <em>de novo</em> essays. However, AI programs such as ChatGPT may augment teaching techniques. In this article, we used ChatGPT 3.5 to create 60 multiple choice questions. Author written text was uploaded and ChatGPT asked to create multiple choice questions with an explanation for the correct answer and explanations for the incorrect answers. Unfortunately, ChatGPT only generated correct questions and answers with explanations in 32 % of the questions (19 out of 60). In many instances, ChatGPT failed to provide an explanation for the incorrect answers. An additional 25 % of the questions had answers that were either wrong or misleading. A grade of 32 % would be considered failing in most courses. 
Despite these issues, instructors may still find ChatGPT useful for creating practice exams with explanations—with the caveat that extensive editing may be required.</p></div>\",\"PeriodicalId\":44927,\"journal\":{\"name\":\"Academic Pathology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.2000,\"publicationDate\":\"2023-12-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2374289523000313/pdfft?md5=a10c566ba8cc29483c2eb3deca716a25&pid=1-s2.0-S2374289523000313-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Academic Pathology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2374289523000313\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"PATHOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Academic Pathology","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2374289523000313","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"PATHOLOGY","Score":null,"Total":0}
ChatGPT 3.5 fails to write appropriate multiple choice practice exam questions
Artificial intelligence (AI) may have a profound impact on traditional teaching in academic settings. Multiple concerns have been raised, especially related to using ChatGPT to create de novo essays. However, AI programs such as ChatGPT may also augment teaching techniques. In this article, we used ChatGPT 3.5 to create 60 multiple choice questions. Author-written text was uploaded, and ChatGPT was asked to create multiple choice questions with an explanation for the correct answer and explanations for the incorrect answers. Unfortunately, ChatGPT generated a correct question and answer with explanations for only 32% of the questions (19 out of 60). In many instances, ChatGPT failed to provide an explanation for the incorrect answers. An additional 25% of the questions had answers that were either wrong or misleading. A grade of 32% would be considered failing in most courses. Despite these issues, instructors may still find ChatGPT useful for creating practice exams with explanations, with the caveat that extensive editing may be required.
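The abstract does not specify the exact prompts or interface the authors used. The following is a minimal sketch of how author-written course text could be submitted to ChatGPT 3.5 through the OpenAI API to request multiple choice questions with explanations; the prompt wording, model parameters, function name, and input file are illustrative assumptions, not the authors' actual workflow.

```python
# Sketch (assumptions noted above): send instructor-written text to gpt-3.5-turbo
# and ask for multiple choice questions with explanations for all options.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable


def generate_mcq(source_text: str, n_questions: int = 5) -> str:
    """Ask gpt-3.5-turbo to write practice MCQs from the supplied course text."""
    prompt = (
        f"Using only the text below, write {n_questions} multiple choice questions. "
        "For each question give four answer options, identify the correct answer, "
        "explain why it is correct, and explain why each incorrect option is wrong.\n\n"
        f"TEXT:\n{source_text}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a pathology instructor writing practice exam questions."},
            {"role": "user", "content": prompt},
        ],
        temperature=0.2,  # lower temperature for more consistent question formatting
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # "lecture_notes.txt" is a hypothetical file of author-written teaching text.
    lecture_text = open("lecture_notes.txt").read()
    print(generate_mcq(lecture_text))
```

As the article's results suggest, output produced this way would still need line-by-line review: the model's explanations, and sometimes the keyed answers themselves, can be wrong or misleading.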
Journal introduction:
Academic Pathology is an open access journal sponsored by the Association of Pathology Chairs, established to give voice to innovations in the leadership and management of academic departments of Pathology. These innovations may have impact across the breadth of pathology and laboratory medicine practice. Academic Pathology addresses methods for improving patient care (clinical informatics, genomic testing and data management, lab automation, electronic health record integration, and annotated biorepositories); best practices in inter-professional clinical partnerships; innovative pedagogical approaches to medical education and educational program evaluation in pathology; models for training academic pathologists and advancing academic career development; administrative and organizational models supporting the discipline; and leadership development in academic medical centers, health systems, and other relevant venues. The intended authorship and audiences for Academic Pathology are international and reach beyond academic pathology itself, including but not limited to healthcare providers, educators, researchers, and policy-makers.