Potential benefits of employing large language models in research in moral education and development

Journal of Moral Education · IF 1.7 · CAS Region 4 (Education) · JCR Q2 (Education & Educational Research) · Pub Date: 2023-09-15 · DOI: 10.1080/03057240.2023.2250570
Han, Hyemin
Citations: 0

Abstract

Recently, computer scientists have developed large language models (LLMs) by training prediction models on large-scale language corpora with human reinforcement. LLMs have become a promising way to implement artificial intelligence accurately in various fields. Interestingly, recent LLMs possess emergent functional features that emulate sophisticated human cognition, especially in-context learning and chain-of-thought reasoning, which were unavailable in previous prediction models. In this paper, I will examine how LLMs might contribute to moral education and development research. To achieve this goal, I will review the most recently published conference papers and arXiv preprints to provide an overview of the novel functional features implemented in LLMs. I also conduct brief experiments with ChatGPT to investigate how LLMs behave while addressing ethical dilemmas and external feedback. The results suggest that LLMs may be capable of solving dilemmas through reasoning and of revising their reasoning process in response to external input. Furthermore, a preliminary result from the moral exemplar test suggests that exemplary stories can elicit moral elevation in LLMs as they do among human participants. I will discuss the potential implications of these results for research on moral education and development.
Source journal: Journal of Moral Education (Education & Educational Research)

CiteScore: 3.90 · Self-citation rate: 11.80% · Articles per year: 24
About the journal: The Journal of Moral Education (a Charitable Company Limited by Guarantee) provides a unique interdisciplinary forum for consideration of all aspects of moral education and development across the lifespan. It contains philosophical analyses, reports of empirical research, and evaluations of educational strategies that address a range of value issues and the process of valuing, in theory and practice, at both the social and individual level. The journal regularly includes country-based state-of-the-art papers on moral education and publishes special issues on particular topics.