How transparency affects algorithmic advice utilization: The mediating roles of trusting beliefs

IF 6.7 · CAS Tier 1 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Decision Support Systems · Pub Date: 2024-06-22 · DOI: 10.1016/j.dss.2024.114273
Xianzhang Ning , Yaobin Lu , Weimo Li , Sumeet Gupta
{"title":"透明度如何影响算法建议的使用:信任信念的中介作用","authors":"Xianzhang Ning ,&nbsp;Yaobin Lu ,&nbsp;Weimo Li ,&nbsp;Sumeet Gupta","doi":"10.1016/j.dss.2024.114273","DOIUrl":null,"url":null,"abstract":"<div><p>Although algorithms are increasingly used to support professional tasks and routine decision-making, their opaque nature invites resistance and results in suboptimal use of their advice. Scholars argue for transparency to enhance the acceptability of algorithmic advice. However, current research is limited in understanding how improved transparency enhances the use of algorithmic advice, such as the differences among various aspects of transparency and the underlying mechanism. In this paper, we investigate whether and how different aspects of algorithmic transparency (performance, process, and purpose) enhance the use of algorithmic advice. Drawing on the knowledge-based trust perspective, we examine the mediating roles of trusting beliefs in the relationships between transparency and the use of algorithmic advice. Using the “judge-advisor system” paradigm, we conduct a 2 × 2 × 2 experiment to manipulate the three aspects of transparency and examine their effects on the use of algorithmic advice. We find that performance and process transparency promote the use of algorithmic advice. However, the effect of process transparency gets attenuated when purpose transparency is high. Purpose transparency is only useful when process transparency is low. We also find that while all three aspects of transparency facilitate different trusting beliefs, only competence belief significantly promotes the use of algorithmic advice. It also fully mediates the facilitating effects of performance and process transparency. This study contributes to the emerging research on algorithmic decision support by empirically investigating the effects of transparency on the use of algorithmic advice and identifying the underlying mechanism. The findings also provide practical guidance on how to promote the acceptance of algorithmic advice that is valuable to both individual users and practitioners.</p></div>","PeriodicalId":55181,"journal":{"name":"Decision Support Systems","volume":"183 ","pages":"Article 114273"},"PeriodicalIF":6.7000,"publicationDate":"2024-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"How transparency affects algorithmic advice utilization: The mediating roles of trusting beliefs\",\"authors\":\"Xianzhang Ning ,&nbsp;Yaobin Lu ,&nbsp;Weimo Li ,&nbsp;Sumeet Gupta\",\"doi\":\"10.1016/j.dss.2024.114273\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Although algorithms are increasingly used to support professional tasks and routine decision-making, their opaque nature invites resistance and results in suboptimal use of their advice. Scholars argue for transparency to enhance the acceptability of algorithmic advice. However, current research is limited in understanding how improved transparency enhances the use of algorithmic advice, such as the differences among various aspects of transparency and the underlying mechanism. In this paper, we investigate whether and how different aspects of algorithmic transparency (performance, process, and purpose) enhance the use of algorithmic advice. Drawing on the knowledge-based trust perspective, we examine the mediating roles of trusting beliefs in the relationships between transparency and the use of algorithmic advice. 
Using the “judge-advisor system” paradigm, we conduct a 2 × 2 × 2 experiment to manipulate the three aspects of transparency and examine their effects on the use of algorithmic advice. We find that performance and process transparency promote the use of algorithmic advice. However, the effect of process transparency gets attenuated when purpose transparency is high. Purpose transparency is only useful when process transparency is low. We also find that while all three aspects of transparency facilitate different trusting beliefs, only competence belief significantly promotes the use of algorithmic advice. It also fully mediates the facilitating effects of performance and process transparency. This study contributes to the emerging research on algorithmic decision support by empirically investigating the effects of transparency on the use of algorithmic advice and identifying the underlying mechanism. The findings also provide practical guidance on how to promote the acceptance of algorithmic advice that is valuable to both individual users and practitioners.</p></div>\",\"PeriodicalId\":55181,\"journal\":{\"name\":\"Decision Support Systems\",\"volume\":\"183 \",\"pages\":\"Article 114273\"},\"PeriodicalIF\":6.7000,\"publicationDate\":\"2024-06-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Decision Support Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0167923624001064\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Decision Support Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167923624001064","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Although algorithms are increasingly used to support professional tasks and routine decision-making, their opaque nature invites resistance and results in suboptimal use of their advice. Scholars argue for transparency to enhance the acceptability of algorithmic advice. However, current research is limited in understanding how improved transparency enhances the use of algorithmic advice, such as the differences among various aspects of transparency and the underlying mechanism. In this paper, we investigate whether and how different aspects of algorithmic transparency (performance, process, and purpose) enhance the use of algorithmic advice. Drawing on the knowledge-based trust perspective, we examine the mediating roles of trusting beliefs in the relationships between transparency and the use of algorithmic advice. Using the “judge-advisor system” paradigm, we conduct a 2 × 2 × 2 experiment to manipulate the three aspects of transparency and examine their effects on the use of algorithmic advice. We find that performance and process transparency promote the use of algorithmic advice. However, the effect of process transparency gets attenuated when purpose transparency is high. Purpose transparency is only useful when process transparency is low. We also find that while all three aspects of transparency facilitate different trusting beliefs, only competence belief significantly promotes the use of algorithmic advice. It also fully mediates the facilitating effects of performance and process transparency. This study contributes to the emerging research on algorithmic decision support by empirically investigating the effects of transparency on the use of algorithmic advice and identifying the underlying mechanism. The findings also provide practical guidance on how to promote the acceptance of algorithmic advice that is valuable to both individual users and practitioners.
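
The abstract does not report how "use of algorithmic advice" was operationalized or how the mediation was tested. In judge-advisor system (JAS) studies, advice utilization is commonly quantified as the weight of advice (WOA), and mediation is often probed with regression-based path analysis. The following is a minimal, hypothetical Python sketch under those assumptions; the simulated data, variable names, and effect sizes are illustrative and are not taken from the paper.

```python
# Hedged sketch: WOA measurement and a simple regression-based mediation check
# for a 2 x 2 x 2 transparency manipulation. All data below are simulated for
# illustration only; this is NOT the authors' analysis or dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 240  # hypothetical participants across the eight conditions

# Manipulated transparency aspects (0 = low, 1 = high).
performance = rng.integers(0, 2, n)
process = rng.integers(0, 2, n)
purpose = rng.integers(0, 2, n)

# Simulated competence belief (the mediator), loosely driven by transparency.
competence = 0.4 * performance + 0.3 * process + rng.normal(0, 1, n)

def weight_of_advice(initial, advice, final):
    """WOA = (final - initial) / (advice - initial), clipped to [0, 1]."""
    return np.clip((final - initial) / (advice - initial), 0.0, 1.0)

# Judge-advisor task: initial estimate, algorithmic advice, revised estimate.
initial = rng.normal(100, 15, n)
advice = rng.normal(100, 10, n)
shift = np.clip(0.3 + 0.2 * competence, 0.0, 1.0)   # more trust -> larger shift
final = initial + shift * (advice - initial)
woa = weight_of_advice(initial, advice, final)

# Path a: transparency aspects -> competence belief.
X_a = sm.add_constant(np.column_stack([performance, process, purpose]).astype(float))
path_a = sm.OLS(competence, X_a).fit()

# Path b: competence belief -> WOA, controlling for the manipulations.
X_b = sm.add_constant(
    np.column_stack([performance, process, purpose, competence]).astype(float)
)
path_b = sm.OLS(woa, X_b).fit()

print(path_a.params)  # effects of the three transparency aspects on competence belief
print(path_b.params)  # effect of competence belief on WOA, given the manipulations
```

A significant path a together with a significant path b (and an attenuated direct effect of transparency on WOA) is the usual regression-based signature of mediation; bootstrapping the indirect effect would be the more rigorous test.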

Source journal: Decision Support Systems (Engineering & Technology; Computer Science: Artificial Intelligence)
CiteScore: 14.70
Self-citation rate: 6.70%
Annual publications: 119
Review time: 13 months
Journal description: The common thread of articles published in Decision Support Systems is their relevance to theoretical and technical issues in the support of enhanced decision making. The areas addressed may include foundations, functionality, interfaces, implementation, impacts, and evaluation of decision support systems (DSSs).