Trust or mistrust in algorithmic grading? An embedded agency perspective

Impact Factor: 20.1 | JCR Q1 (Tier 1, Management), Information Science & Library Science | International Journal of Information Management | Pub Date: 2023-04-01 | DOI: 10.1016/j.ijinfomgt.2022.102555
Stephen Jackson, Niki Panteli
Citations: 2

Abstract

Artificial Intelligence (AI) has the potential to significantly impact the educational sector. One AI application that has increasingly been adopted is algorithmic grading. It is within this context that our study focuses on trust. While the concept of trust continues to grow in importance among AI researchers and practitioners, trust/mistrust in algorithmic grading across multiple levels of analysis has so far been under-researched. In this paper, we argue for a model that encompasses the multi-layered nature of trust/mistrust in AI. Drawing on an embedded agency perspective, we devise a model that examines the top-down and bottom-up forces that can influence trust/mistrust in algorithmic grading. We illustrate how the model can be applied by drawing on the case of the International Baccalaureate (IB) program in 2020, in which an algorithm was used to determine student grades. This paper contributes to the AI-trust literature by providing a fresh theoretical lens based on institutional theory for investigating the dynamic and multi-faceted nature of trust/mistrust in algorithmic grading, an area that has seldom been explored either theoretically or empirically. The study raises important implications for algorithmic design and awareness. Algorithms need to be designed in a transparent, fair, and ultimately trustworthy manner. While an algorithm typically operates like a black box, whereby the underlying mechanisms are not apparent to those impacted by it, the purpose of the algorithm and an understanding of how it works should be communicated upfront and in a timely manner.
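To make the transparency point concrete, the following is a purely hypothetical toy sketch of an algorithmic grading rule. It is not the IB's actual 2020 model (whose details the abstract does not describe); the inputs, weights, and function name are all illustrative assumptions. The point is that when the inputs and weights are published, an affected student can see exactly why, for example, a school's historical results influenced their grade.

```python
# Hypothetical illustration only: a toy "algorithmic grading" rule that
# combines a student's coursework score, a teacher-predicted grade, and the
# school's historical average, all on the IB's 1-7 scale. The weights are
# invented for illustration and do not reflect any real grading model.

def toy_algorithmic_grade(coursework: float,
                          predicted: float,
                          school_history: float,
                          weights: tuple = (0.4, 0.3, 0.3)) -> int:
    """Weighted combination of inputs, rounded and clamped to the 1-7 scale."""
    w_c, w_p, w_s = weights
    raw = w_c * coursework + w_p * predicted + w_s * school_history
    return max(1, min(7, round(raw)))

# A high-achieving student at a historically low-scoring school:
# the published weights make it visible how school history pulls the grade down.
print(toy_algorithmic_grade(coursework=6.0, predicted=7.0, school_history=4.0))
```

A transparent design, in the sense the authors advocate, would disclose such inputs and weights upfront rather than leaving those affected to infer them after grades are released.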

Source Journal
International Journal of Information Management
CiteScore: 53.10
Self-citation rate: 6.20%
Articles per year: 111
Review time: 24 days
Journal overview: The International Journal of Information Management (IJIM) is a distinguished, international, peer-reviewed journal dedicated to providing its readers with top-notch analysis and discussion within the evolving field of information management. Key features of the journal include: Comprehensive Coverage: IJIM keeps readers informed with major papers, reports, and reviews. Topical Relevance: The journal remains current and relevant through Viewpoint articles and regular features such as Research Notes, Case Studies, and a Reviews section, ensuring readers are updated on contemporary issues. Focus on Quality: IJIM prioritizes high-quality papers that address contemporary issues in information management.