Investigating the relationships between class probabilities and users' appropriate trust in computer vision classifications of ambiguous images

IF 1.7 | CAS Tier 3, Computer Science | JCR Q3, COMPUTER SCIENCE, SOFTWARE ENGINEERING | Journal of Computer Languages | Pub Date: 2022-10-01 | DOI: 10.1016/j.cola.2022.101149
Gabriel Diniz Junqueira Barbosa, Dalai dos Santos Ribeiro, Marisa do Carmo Silva, Hélio Lopes, Simone Diniz Junqueira Barbosa
{"title":"模糊图像计算机视觉分类中分类概率与用户适当信任度的关系研究","authors":"Gabriel Diniz Junqueira Barbosa,&nbsp;Dalai dos Santos Ribeiro,&nbsp;Marisa do Carmo Silva,&nbsp;Hélio Lopes,&nbsp;Simone Diniz Junqueira Barbosa","doi":"10.1016/j.cola.2022.101149","DOIUrl":null,"url":null,"abstract":"<div><p>The large-scale adoption of systems that automate classifications using Machine Learning (ML) algorithms raises pressing challenges as they support or make decisions with profound consequences for human beings. It is important to understand how users’ trust is affected by ML<span> models’ suggestions, even when those models are wrong. Many research efforts have focused on the user’s ability to interpret what a model has learned. In this paper, we seek to understand another aspect of ML interpretability<span>: whether and how the presence of classification probabilities and their different distributions are related to users’ trust in model outcomes, especially in ambiguous instances. To this end, we conducted two online surveys in which we asked participants to evaluate their agreement with image classifications<span> of pictures of animals made by an ML model. In the first, we analyze their trust before and after presenting them the model classification probabilities. In the second, we investigate the relationships between class probability distributions and users’ trust in the model. We found that, in some cases, the additional information is correlated with undue trust in the model’s classifications. However, in others, they are associated with inappropriate skepticism.</span></span></span></p></div>","PeriodicalId":48552,"journal":{"name":"Journal of Computer Languages","volume":"72 ","pages":"Article 101149"},"PeriodicalIF":1.7000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Investigating the relationships between class probabilities and users’ appropriate trust in computer vision classifications of ambiguous images\",\"authors\":\"Gabriel Diniz Junqueira Barbosa,&nbsp;Dalai dos Santos Ribeiro,&nbsp;Marisa do Carmo Silva,&nbsp;Hélio Lopes,&nbsp;Simone Diniz Junqueira Barbosa\",\"doi\":\"10.1016/j.cola.2022.101149\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The large-scale adoption of systems that automate classifications using Machine Learning (ML) algorithms raises pressing challenges as they support or make decisions with profound consequences for human beings. It is important to understand how users’ trust is affected by ML<span> models’ suggestions, even when those models are wrong. Many research efforts have focused on the user’s ability to interpret what a model has learned. In this paper, we seek to understand another aspect of ML interpretability<span>: whether and how the presence of classification probabilities and their different distributions are related to users’ trust in model outcomes, especially in ambiguous instances. To this end, we conducted two online surveys in which we asked participants to evaluate their agreement with image classifications<span> of pictures of animals made by an ML model. In the first, we analyze their trust before and after presenting them the model classification probabilities. In the second, we investigate the relationships between class probability distributions and users’ trust in the model. We found that, in some cases, the additional information is correlated with undue trust in the model’s classifications. 
However, in others, they are associated with inappropriate skepticism.</span></span></span></p></div>\",\"PeriodicalId\":48552,\"journal\":{\"name\":\"Journal of Computer Languages\",\"volume\":\"72 \",\"pages\":\"Article 101149\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2022-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Computer Languages\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2590118422000478\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Computer Languages","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2590118422000478","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract

The large-scale adoption of systems that automate classifications using Machine Learning (ML) algorithms raises pressing challenges as they support or make decisions with profound consequences for human beings. It is important to understand how users’ trust is affected by ML models’ suggestions, even when those models are wrong. Many research efforts have focused on the user’s ability to interpret what a model has learned. In this paper, we seek to understand another aspect of ML interpretability: whether and how the presence of classification probabilities and their different distributions are related to users’ trust in model outcomes, especially in ambiguous instances. To this end, we conducted two online surveys in which we asked participants to evaluate their agreement with image classifications of pictures of animals made by an ML model. In the first, we analyze their trust before and after presenting them the model classification probabilities. In the second, we investigate the relationships between class probability distributions and users’ trust in the model. We found that, in some cases, the additional information is correlated with undue trust in the model’s classifications. However, in others, it is associated with inappropriate skepticism.
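To make the notion of "classification probabilities and their different distributions" concrete, the sketch below is a purely illustrative example (not taken from the paper): it converts a classifier's raw scores into a probability distribution with a softmax and contrasts a confident distribution with an ambiguous one. The class names and score values are hypothetical.

    # Hypothetical illustration: class probability distributions for a
    # confident vs. an ambiguous image, derived from raw scores via softmax.
    import numpy as np

    CLASSES = ["cat", "dog", "fox", "wolf", "rabbit"]  # example label set

    def softmax(logits: np.ndarray) -> np.ndarray:
        """Turn raw scores into a probability distribution over classes."""
        exp = np.exp(logits - logits.max())  # subtract max for numerical stability
        return exp / exp.sum()

    # Illustrative raw scores a classifier might output.
    clear_logits = np.array([6.0, 1.0, 0.5, 0.2, 0.1])      # clearly one class
    ambiguous_logits = np.array([2.1, 1.9, 1.8, 0.4, 0.3])  # several plausible classes

    for name, logits in [("clear", clear_logits), ("ambiguous", ambiguous_logits)]:
        probs = softmax(logits)
        ranked = sorted(zip(CLASSES, probs), key=lambda kv: kv[1], reverse=True)
        print(name, [(c, round(float(p), 2)) for c, p in ranked])

Running this prints a sharply peaked distribution for the "clear" case and a flatter one for the "ambiguous" case; the study examines how showing users such distributions relates to the trust they place in the model's top classification.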

Source Journal: Journal of Computer Languages
Category: Computer Science - Computer Networks and Communications
CiteScore: 5.00
Self-citation rate: 13.60%
Articles published: 36