Generative artificial intelligence in higher education: Evidence from an analysis of institutional policies and guidelines

Nora McDonald, Aditya Johri, Areej Ali, Aayushi Hingle Collier
Computers in Human Behavior: Artificial Humans, Volume 3, Article 100121. Published 2025-01-17. DOI: 10.1016/j.chbah.2025.100121

Abstract

The release of ChatGPT in November 2022 prompted a massive uptake of generative artificial intelligence (GenAI) across higher education institutions (HEIs). In response, HEIs initially focused on regulating its use, particularly among students, before shifting towards advocating for its productive integration within teaching and learning. Since then, many HEIs have increasingly provided policies and guidelines to direct the use of GenAI. This paper presents an analysis of documents produced by 116 US universities classified as high research activity (R1) institutions, providing a comprehensive examination of the advice and guidance offered by institutional stakeholders about GenAI. Through an extensive analysis, we found that a majority of universities (N = 73, 63%) encourage the use of GenAI, with many offering detailed guidance for its use in the classroom (N = 48, 41%). Over half the institutions provided sample syllabi (N = 65, 56%) and half (N = 58, 50%) provided sample GenAI curricula and activities to help instructors integrate and leverage GenAI in their teaching. Notably, the majority of guidance focused on writing activities, whereas references to code and STEM-related activities were infrequent, and often vague even when mentioned (N = 58, 50%). Finally, more than half of institutions discussed the ethics of GenAI across a broad range of topics, including Diversity, Equity and Inclusion (DEI) (N = 60, 52%). Based on our findings, we caution that guidance for faculty can become burdensome as policies suggest or imply substantial revisions to existing pedagogical practices.