Generative AI (gAI) in medical education: Chat-GPT and co.
Sören Moritz, Bernd Romeike, Christoph Stosch, Daniel Tolks
GMS Journal for Medical Education, vol. 40, no. 4, Doc54 (published 2023-06-15)
DOI: 10.3205/zma001636
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10407583/pdf/
Citations: 1
Abstract
“The use of chatbots in medical education is an emerging trend that is welcomed by many educators and medical professionals. In particular, the use of ChatGPT, a large language model of OpenAI, offers a variety of benefits for students and educators alike [...]” [1]. So far so amazing, but the passage already points to the whole dilemma: will teaching at universities ever be the same after ChatGPT as it never was anyway? We had a Cologne term paper in the “field of competence carcinogenesis” (interdisciplinary teaching in the first preclinical semester) generated in triplicate by ChatGPT, each time with an identical query, and received three different two-page texts including literature citations in APA style. These were examined by two detector programs (Groover, Writer) to determine whether they had been written by a human or a bot. Neither program identified them as machine-written (caution: short texts are practically undetectable). A plagiarism search with the software PlagAware revealed no conspicuous passages worth considering (approx. 3-5% agreement with already published texts). The papers were forwarded unchanged to the assessing tutors, with the result that two papers were graded “passed” and one “failed”. The failing grade was due to certain terms used that did not fit the field of competence, as well as a non-matching literature citation. What next? Let’s ask ChatGPT: “...If students were able to access ChatGPT and ask questions during the exam, they could theoretically receive answers from ChatGPT that could help them answer exam questions...” [2].
Journal description:
GMS Journal for Medical Education (GMS J Med Educ) – formerly GMS Zeitschrift für Medizinische Ausbildung – publishes scientific articles on all aspects of undergraduate and graduate education in medicine, dentistry, veterinary medicine, pharmacy and other health professions. Research and review articles, project reports, short communications, discussion papers and comments may be submitted. There is a special focus on empirical studies that are methodologically sound and yield results relevant beyond the respective institution, profession or country. Qualitative as well as quantitative studies are welcome, and submissions by students are especially encouraged.

It is the mission of GMS Journal for Medical Education to further scientific knowledge in the German-speaking countries as well as internationally, and thus to foster the improvement of teaching and learning and to build an evidence base for undergraduate and graduate education. To this end, the journal has set up an editorial board of international experts. All submitted manuscripts undergo a clearly structured peer review process. All articles are published bilingually in English and German with unrestricted open access, making the journal available to a broad international readership.

GMS Journal for Medical Education appears as an unrestricted open access journal with at least four issues per year; special issues on current topics in medical education research are also published. Until 2015 the journal appeared under its German name GMS Zeitschrift für Medizinische Ausbildung; the change of name to GMS Journal for Medical Education underlines its international mission.