Mining User Study Data to Judge the Merit of a Model for Supporting User-Specific Explanations of AI Systems

Owen Chambers, Robin Cohen, Maura R. Grossman, Liam Hebert, Elias Awad

Computational Intelligence, 40(6), 2024. DOI: 10.1111/coin.70015
Abstract
In this paper, we present a model for supporting user-specific explanations of AI systems. We then discuss a user study conducted to gauge whether the decisions to adjust output for users with certain characteristics were of value to participants. We focus on the merit of attuning explanations to particular psychological profiles of users, and on the value of offering different levels of explanation (including no explanation as one possibility). Following the description of the study, we present an approach for mining data from participant responses to determine whether the model developed for varying output to users was well-founded. While our results in this respect are preliminary, we explain how applying varied machine learning methods is a concrete step toward validating specific approaches to AI explanation. We conclude with a discussion of related work and some directions for future research.
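The paper details its own mining procedure; as a minimal, hypothetical sketch of what applying "varied machine learning methods" to user study responses could look like, the Python snippet below compares several off-the-shelf classifiers on simulated survey data. The feature semantics (psychological-profile scores) and the three-way target (preferred explanation level) are assumptions for illustration only and are not taken from the paper.

# Hypothetical sketch: comparing several classifiers on user study
# responses to see whether psychological-profile features predict a
# participant's preferred level of AI explanation. Feature meanings
# and the label encoding are invented for illustration; they are not
# from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated participant responses: two psychological-profile scores
# (e.g., aggregated Likert-scale items) per participant.
n = 120
X = rng.normal(size=(n, 2))

# Simulated target: preferred explanation level
# (0 = none, 1 = brief, 2 = detailed), loosely tied to the first score.
y = np.digitize(X[:, 0] + 0.5 * rng.normal(size=n), bins=[-0.5, 0.5])

# Varied methods: if distinct model families all recover the same
# preference signal, that convergence is (weak) evidence that the
# user-adaptation model is well-founded.
models = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(max_depth=3, random_state=0),
    "forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f} (+/- {scores.std():.2f})")

If several model families predict the preference substantially above chance, that agreement offers preliminary support for tailoring explanations to user profiles, mirroring the kind of validation argument the abstract describes.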
About the Journal
This leading international journal promotes and stimulates research in the field of artificial intelligence (AI). Covering a wide range of issues - from the tools and languages of AI to its philosophical implications - Computational Intelligence provides a vigorous forum for the publication of both experimental and theoretical research, as well as surveys and impact studies. The journal is designed to meet the needs of a wide range of AI researchers and practitioners in academic and industrial settings.