Measuring the Impact of Picture-Based Explanations on the Acceptance of an AI System for Classifying Laundry

Nico Rabethge, Dominik Bentler
Journal: AHFE International
DOI: 10.54941/ahfe1004181
Published: 2023-01-01
Citations: 0

Abstract

Artificial intelligence (AI) systems are increasingly employed in various industries, including the laundry sector, for example to assist employees in sorting laundry. This study investigates the influence of image-based explanations on the acceptance of an AI system, using CNNs trained to classify the color and type of laundry items, with the explanations generated through Deep Taylor Decomposition, a popular Explainable AI technique. We specifically examined how providing reasonable and unreasonable visual explanations affected the confidence of participating laundry employees in the system's decisions. Thirty-two participants, diverse in terms of the laundries they work for, age, experience in the sector, and prior experience with AI technologies, were recruited for this study. Each participant was presented with a set of 20 laundry classifications made by the AI system and asked to indicate whether the accompanying image-based explanation strengthened or weakened their confidence in each decision. A five-level Likert scale was used to measure the impact, ranging from 1 (strongly weakens confidence) to 5 (strongly strengthens confidence). By providing visual cues and contextual information, the explanations are expected to enhance participants' understanding of the AI system's decision-making process. Consequently, we hypothesize that the image-based explanations will strengthen participants' confidence in the AI system's classifications, leading to increased acceptance of and trust in its capabilities. The analysis of the results indicated significant main effects for both explanation quality and neural network certainty.
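The explanations described above are relevance heatmaps over the input image. As a rough illustration of the underlying idea (not the paper's implementation, which the abstract does not detail), the following sketch applies a Deep-Taylor-style relevance propagation rule, the z+ rule, to a tiny fully connected ReLU network with hand-picked placeholder weights; real image pipelines typically use a library such as iNNvestigate on the trained CNN.

```python
import numpy as np

# Toy network: 3 inputs -> 2 hidden (ReLU) -> 2 classes.
# Weights and input are illustrative placeholders, not the paper's CNN.
W1 = np.array([[0.5, -0.2],
               [0.3,  0.8],
               [-0.1, 0.4]])
W2 = np.array([[0.6, -0.3],
               [0.2,  0.7]])
x = np.array([1.0, 0.5, 0.2])

# Forward pass, keeping activations for the backward relevance pass
a1 = np.maximum(0, x @ W1)
logits = a1 @ W2

# Start relevance at the predicted class only
R = np.zeros_like(logits)
k = int(np.argmax(logits))
R[k] = logits[k]

def ztplus_backward(a, W, R, eps=1e-9):
    """Redistribute relevance R from a layer's output to its input in
    proportion to each input's positive contribution (the z+ rule)."""
    Wp = np.maximum(0, W)          # positive weights only
    z = a @ Wp + eps               # total positive contribution per output
    return a * (Wp @ (R / z))      # relevance is conserved per output unit

R_hidden = ztplus_backward(a1, W2, R)
R_input = ztplus_backward(x, W1, R_hidden)   # per-input relevance "heatmap"
```

The same rule applied layer by layer through a CNN yields a per-pixel relevance map; relevance is approximately conserved at each step, so the input scores sum to the output score being explained.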
Moreover, the interaction between explanation quality and neural network certainty was also significant. The outcomes of this study hold substantial implications for the integration of AI systems within the laundry industry and related domains. By identifying how image-based explanations influence acceptance, organizations can refine their AI implementations, ensuring effective utilization and positive user experiences. By fostering a better understanding of how image-based explanations influence AI acceptance, this study contributes to the ongoing development and improvement of AI systems across industries. Ultimately, this research seeks to pave the way for enhanced human-AI collaboration and more widespread adoption of AI technologies. Future research could explore alternative forms of visual explanations to further examine their impact on user acceptance of and confidence in AI systems.
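The reported pattern of two main effects plus an interaction corresponds to a 2x2 factorial design: explanation quality (reasonable vs. unreasonable) crossed with network certainty (high vs. low). The sketch below uses invented Likert ratings, not the paper's data, to show how the two main-effect contrasts and the interaction contrast are computed from the four cell means; in practice such a design would be tested with a two-way (repeated-measures) ANOVA.

```python
import numpy as np

# Hypothetical 1-5 Likert ratings per cell of the 2x2 design
# (explanation quality x network certainty). Invented for illustration.
ratings = {
    ("reasonable",   "high"): [5, 4, 5, 4],
    ("reasonable",   "low"):  [4, 3, 4, 3],
    ("unreasonable", "high"): [3, 2, 3, 2],
    ("unreasonable", "low"):  [2, 2, 3, 2],
}
cell = {key: float(np.mean(v)) for key, v in ratings.items()}

# Main effect of explanation quality: reasonable minus unreasonable,
# averaged over the two certainty levels
quality_effect = (cell[("reasonable", "high")] + cell[("reasonable", "low")]) / 2 \
               - (cell[("unreasonable", "high")] + cell[("unreasonable", "low")]) / 2

# Main effect of network certainty: high minus low, averaged over quality
certainty_effect = (cell[("reasonable", "high")] + cell[("unreasonable", "high")]) / 2 \
                 - (cell[("reasonable", "low")] + cell[("unreasonable", "low")]) / 2

# Interaction: does the quality effect differ between certainty levels?
interaction = (cell[("reasonable", "high")] - cell[("unreasonable", "high")]) \
            - (cell[("reasonable", "low")] - cell[("unreasonable", "low")])
```

With these made-up numbers the quality contrast dominates, the certainty contrast is smaller, and a nonzero interaction means the benefit of a reasonable explanation depends on how certain the network was, mirroring the structure (though not the values) of the reported result.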