Author: Margo Van Poucke
Journal: Computers and Composition, Volume 73, Article 102871
DOI: 10.1016/j.compcom.2024.102871
Publication date: 2024-06-24 (Journal Article)
PDF: https://www.sciencedirect.com/science/article/pii/S8755461524000471/pdfft?md5=dc30c9587d95ca1b01a5523ab82cfe56&pid=1-s2.0-S8755461524000471-main.pdf
ChatGPT, the perfect virtual teaching assistant? Ideological bias in learner-chatbot interactions
This paper examines ChatGPT's use of evaluative language and engagement strategies when addressing information-seeking queries, and assesses the chatbot's role as a virtual teaching assistant (VTA) across various educational settings. Drawing on Appraisal theory, the analysis contrasts responses generated by ChatGPT with those written by human contributors, focusing on the interactants’ attitude, deployment of interpersonal metaphors, and evaluations of entities, which reveal their views on Australian cultural practice. Two datasets were analysed: the first sample (15,909 words) was retrieved from the subreddit r/AskAnAustralian, and the second (10,696 words) was obtained by prompting ChatGPT with the same questions. The findings show that, while the human experts mainly opted for subjectively explicit formulations to express personal viewpoints, the chatbot favoured incongruent ‘it is’-constructions to share pre-programmed perspectives, which may reflect ideological bias. Even though ChatGPT displays promising socio-communicative capabilities (SCs), its lack of the contextual awareness required to function cross-culturally as a VTA may lead to considerable ethical issues. The study's novel contribution lies in its in-depth investigation of how the chatbot's SCs and lexicogrammatical selections may affect its role as a VTA, highlighting the need to develop students’ critical digital literacy skills when using AI learning tools.
Journal description:
Computers and Composition: An International Journal is devoted to exploring the use of computers in writing classes, writing programs, and writing research. It provides a forum for discussing issues connected with writing and computer use, and offers information about integrating computers into writing programs on the basis of sound theoretical and pedagogical decisions and empirical evidence. It welcomes articles, reviews, and letters to the Editors that may be of interest to readers, including descriptions of computer-aided writing and/or reading instruction; discussions of topics related to computer use or software development; and explorations of controversial ethical, legal, or social issues related to the use of computers in writing programs.