Mining User Study Data to Judge the Merit of a Model for Supporting User-Specific Explanations of AI Systems

IF 1.8 | Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Computational Intelligence | Pub Date: 2024-12-17 | DOI: 10.1111/coin.70015
Owen Chambers, Robin Cohen, Maura R. Grossman, Liam Hebert, Elias Awad
{"title":"挖掘用户研究数据以判断支持AI系统的用户特定解释的模型的优点","authors":"Owen Chambers,&nbsp;Robin Cohen,&nbsp;Maura R. Grossman,&nbsp;Liam Hebert,&nbsp;Elias Awad","doi":"10.1111/coin.70015","DOIUrl":null,"url":null,"abstract":"<p>In this paper, we present a model for supporting user-specific explanations of AI systems. We then discuss a user study that was conducted to gauge whether the decisions for adjusting output to users with certain characteristics was confirmed to be of value to participants. We focus on the merit of having explanations attuned to particular psychological profiles of users, and the value of having different options for the level of explanation that is offered (including allowing for no explanation, as one possibility). Following the description of the study, we present an approach for mining data from user participant responses in order to determine whether the model that was developed for varying the output to users was well-founded. While our results in this respect are preliminary, we explain how using varied machine learning methods is of value as a concrete step toward validation of specific approaches for AI explanation. We conclude with a discussion of related work and some ideas for new directions with the research, in the future.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":"40 6","pages":""},"PeriodicalIF":1.8000,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/coin.70015","citationCount":"0","resultStr":"{\"title\":\"Mining User Study Data to Judge the Merit of a Model for Supporting User-Specific Explanations of AI Systems\",\"authors\":\"Owen Chambers,&nbsp;Robin Cohen,&nbsp;Maura R. Grossman,&nbsp;Liam Hebert,&nbsp;Elias Awad\",\"doi\":\"10.1111/coin.70015\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>In this paper, we present a model for supporting user-specific explanations of AI systems. We then discuss a user study that was conducted to gauge whether the decisions for adjusting output to users with certain characteristics was confirmed to be of value to participants. We focus on the merit of having explanations attuned to particular psychological profiles of users, and the value of having different options for the level of explanation that is offered (including allowing for no explanation, as one possibility). Following the description of the study, we present an approach for mining data from user participant responses in order to determine whether the model that was developed for varying the output to users was well-founded. While our results in this respect are preliminary, we explain how using varied machine learning methods is of value as a concrete step toward validation of specific approaches for AI explanation. 
We conclude with a discussion of related work and some ideas for new directions with the research, in the future.</p>\",\"PeriodicalId\":55228,\"journal\":{\"name\":\"Computational Intelligence\",\"volume\":\"40 6\",\"pages\":\"\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2024-12-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/coin.70015\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computational Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/coin.70015\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computational Intelligence","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/coin.70015","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In this paper, we present a model for supporting user-specific explanations of AI systems. We then discuss a user study that was conducted to gauge whether the decisions for adjusting output to users with certain characteristics was confirmed to be of value to participants. We focus on the merit of having explanations attuned to particular psychological profiles of users, and the value of having different options for the level of explanation that is offered (including allowing for no explanation, as one possibility). Following the description of the study, we present an approach for mining data from user participant responses in order to determine whether the model that was developed for varying the output to users was well-founded. While our results in this respect are preliminary, we explain how using varied machine learning methods is of value as a concrete step toward validation of specific approaches for AI explanation. We conclude with a discussion of related work and some ideas for new directions with the research, in the future.
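The abstract does not spell out how the participant responses were mined, so the sketch below is only a hypothetical illustration of the general idea: check whether psychological-profile features recorded in a user study predict the explanation level a participant preferred any better than a trivial baseline. The feature set, labels, synthetic data, and choice of classifier are all assumptions made for this example and are not the authors' pipeline.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for user-study data: one row per participant,
# with three assumed psychological-profile scores per row.
X = rng.normal(size=(120, 3))
# Assumed label: the explanation level each participant preferred
# (0 = no explanation, 1 = brief, 2 = detailed).
y = rng.integers(0, 3, size=120)

profile_model = RandomForestClassifier(n_estimators=200, random_state=0)
baseline = DummyClassifier(strategy="most_frequent")

profile_acc = cross_val_score(profile_model, X, y, cv=5).mean()
baseline_acc = cross_val_score(baseline, X, y, cv=5).mean()

print(f"profile-based classifier accuracy: {profile_acc:.2f}")
print(f"majority-class baseline accuracy:  {baseline_acc:.2f}")
# A clear margin over the baseline would suggest the profile features
# really do predict explanation preference; with this random synthetic
# data the two scores should come out roughly equal.
```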


Source journal
Computational Intelligence (Engineering & Technology / Computer Science: Artificial Intelligence)
CiteScore: 6.90
Self-citation rate: 3.60%
Articles published: 65
Review time: >12 weeks
Journal description: This leading international journal promotes and stimulates research in the field of artificial intelligence (AI). Covering a wide range of issues - from the tools and languages of AI to its philosophical implications - Computational Intelligence provides a vigorous forum for the publication of both experimental and theoretical research, as well as surveys and impact studies. The journal is designed to meet the needs of a wide range of AI workers in academic and industrial research.
Latest articles from this journal
- Reb-DINO: A Lightweight Pedestrian Detection Model With Structural Re-Parameterization in Apple Orchard
- RETRACTION
- A Method for Constructing Open-Channel Velocity Field Prediction Model Based on Machine Learning and CFD
- Violence Detection in Video Using Statistical Features of the Optical Flow and 2D Convolutional Neural Network
- Real-Time Solutions for Dynamic Complex Matrix Inversion and Chaotic Control Using ODE-Based Neural Computing Methods