Generative AI tools, especially ChatGPT, have become prevalent in education. Despite their numerous benefits and convenience, potential risks such as hallucinations, misinformation, and limitations in advanced problem-solving underscore the need for inoculation training to safeguard students from these threats. International EFL students may be particularly vulnerable to the risks associated with ChatGPT because of their high demand for the language and academic assistance the tool can provide. This study investigated whether an inoculation message would strengthen students’ intentions to verify information provided by ChatGPT and increase their actual verification behavior. The study employed a 2 (student status: domestic vs. international EFL) x 2 (inoculation status: inoculated vs. non-inoculated) x 2 (time: pre-test vs. post-test) mixed factorial design. This two-part study revealed that a generic inoculation message containing forewarnings prompted students to exercise greater caution when interacting with ChatGPT, as evidenced by their verification of ChatGPT-generated answers in the academic-source-summary task. Students who received the inoculation message were more likely to verify ChatGPT’s output in the academic-source-summary task than those who did not. However, the inoculation intervention did not have a significant impact on students’ intentions to verify information provided by ChatGPT. The findings provide educators, curriculum developers, and administrators with insights into designing and delivering training programs tailored to different groups of students, helping them use generative AI tools safely.