Transparency-Check: An Instrument for the Study and Design of Transparency in AI-based Personalization Systems

Laura Schelenz, Avi Segal, Oduma Adelio, K. Gal
{"title":"Transparency-Check: An Instrument for the Study and Design of Transparency in AI-based Personalization Systems","authors":"Laura Schelenz, Avi Segal, Oduma Adelio, K. Gal","doi":"10.1145/3636508","DOIUrl":null,"url":null,"abstract":"As AI-based systems become commonplace in our daily lives, they need to provide understandable information to their users about how they collect, process, and output information that concerns them. The importance of such transparency practices has gained significance due to recent ethical guidelines and regulation, as well as research suggesting a positive relationship between the transparency of AI-based systems and users’ satisfaction. This paper provides a new tool for the design and study of transparency in AI-based systems that use personalization. The tool, called Transparency-Check, is based on a checklist of questions about transparency in four areas of a system: input (data collection), processing (algorithmic models), output (personalized recommendations) and user control (user feedback mechanisms to adjust elements of the system). Transparency-Check can be used by researchers, designers, and end users of computer systems. To demonstrate the usefulness of Transparency-Check from a researcher perspective, we collected the responses of 108 student participants who used the transparency checklist to rate five popular real-world systems (Amazon, Facebook, Netflix, Spotify, and YouTube). Based on users’ subjective evaluations, the systems showed low compliance with transparency standards, with some nuances about individual categories (specifically data collection, processing, user control). We use these results to compile design recommendations for improving transparency in AI-based systems, such as integrating information about the system’s behavior during the user’s interactions with it.","PeriodicalId":486991,"journal":{"name":"ACM Journal on Responsible Computing","volume":"6 6","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Journal on Responsible Computing","FirstCategoryId":"0","ListUrlMain":"https://doi.org/10.1145/3636508","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

As AI-based systems become commonplace in our daily lives, they need to provide understandable information to their users about how they collect, process, and output information that concerns them. Such transparency practices have gained importance due to recent ethical guidelines and regulations, as well as research suggesting a positive relationship between the transparency of AI-based systems and user satisfaction. This paper provides a new tool for the design and study of transparency in AI-based systems that use personalization. The tool, called Transparency-Check, is based on a checklist of questions about transparency in four areas of a system: input (data collection), processing (algorithmic models), output (personalized recommendations), and user control (user feedback mechanisms to adjust elements of the system). Transparency-Check can be used by researchers, designers, and end users of computer systems. To demonstrate the usefulness of Transparency-Check from a researcher perspective, we collected the responses of 108 student participants who used the transparency checklist to rate five popular real-world systems (Amazon, Facebook, Netflix, Spotify, and YouTube). Based on users’ subjective evaluations, the systems showed low compliance with transparency standards, with some variation across individual categories (specifically data collection, processing, and user control). We use these results to compile design recommendations for improving transparency in AI-based systems, such as integrating information about the system’s behavior during the user’s interactions with it.
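The abstract describes the instrument only at a high level. As a rough illustration of how such a checklist could be represented and aggregated, here is a minimal Python sketch. The four category names follow the abstract; the example questions, the boolean `satisfied` field, and the per-category scoring scheme are illustrative assumptions, not the published instrument.

```python
from dataclasses import dataclass

# The four transparency areas named in the abstract.
CATEGORIES = ["input", "processing", "output", "user_control"]

@dataclass
class ChecklistItem:
    category: str      # one of CATEGORIES
    question: str      # transparency question shown to the rater
    satisfied: bool    # rater's judgment of the system on this question

def compliance_by_category(items):
    """Return the fraction of satisfied checklist items per category."""
    totals = {c: [0, 0] for c in CATEGORIES}   # category -> [satisfied, asked]
    for item in items:
        totals[item.category][1] += 1
        if item.satisfied:
            totals[item.category][0] += 1
    return {c: (s / n if n else 0.0) for c, (s, n) in totals.items()}

# Example: one participant's (fictional) ratings for a single system.
ratings = [
    ChecklistItem("input", "Does the system state what data it collects?", True),
    ChecklistItem("processing", "Does it explain how the model uses that data?", False),
    ChecklistItem("output", "Does it say why an item was recommended?", False),
    ChecklistItem("user_control", "Can users adjust the personalization?", True),
]
print(compliance_by_category(ratings))
# {'input': 1.0, 'processing': 0.0, 'output': 0.0, 'user_control': 1.0}
```

Averaging such per-category fractions over many participants would yield the kind of per-category, per-system compliance comparison the study reports for the five rated platforms.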