Understanding via exemplification in XAI: how explaining image classification benefits from exemplars

AI & Society · IF 4.7 · Q2 (Computer Science, Artificial Intelligence) · Pub Date: 2024-01-27 · DOI: 10.1007/s00146-023-01837-4
Sara Mann
{"title":"Understanding via exemplification in XAI: how explaining image classification benefits from exemplars","authors":"Sara Mann","doi":"10.1007/s00146-023-01837-4","DOIUrl":null,"url":null,"abstract":"<div><p>Artificial intelligent (AI) systems that perform image classification tasks are being used to great success in many application contexts. However, many of these systems are opaque, even to experts. This lack of understanding can be problematic for ethical, legal, or practical reasons. The research field Explainable AI (XAI) has therefore developed several approaches to explain image classifiers. The hope is to bring about understanding, e.g., regarding why certain images are classified as belonging to a particular target class. Most of these approaches use visual explanations. Drawing on Elgin’s work (True enough. MIT Press, Cambridge, 2017), I argue that analyzing what those explanations exemplify can help to assess their suitability for producing understanding. More specifically, I suggest to distinguish between two forms of examples according to their suitability for producing understanding. I call these forms <span>samples</span> and <span>exemplars</span>, respectively. S<span>amples</span> are prone to misinterpretation and thus carry the risk of leading to misunderstanding. E<span>xemplars</span>, by contrast, are intentionally designed or chosen to meet contextual requirements and to mitigate the risk of misinterpretation. They are thus preferable for bringing about understanding. By reviewing several XAI approaches directed at image classifiers, I show that most of them explain with <span>samples</span>. If my analysis is correct, it will be beneficial if such explainability methods use explanations that qualify as <span>exemplars</span>.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 1","pages":"37 - 52"},"PeriodicalIF":4.7000,"publicationDate":"2024-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01837-4.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI & Society","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s00146-023-01837-4","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Artificially intelligent (AI) systems that perform image classification tasks are being used with great success in many application contexts. However, many of these systems are opaque, even to experts. This lack of understanding can be problematic for ethical, legal, or practical reasons. The research field of Explainable AI (XAI) has therefore developed several approaches to explaining image classifiers. The hope is to bring about understanding, e.g., regarding why certain images are classified as belonging to a particular target class. Most of these approaches use visual explanations. Drawing on Elgin’s work (True Enough, MIT Press, Cambridge, 2017), I argue that analyzing what those explanations exemplify can help to assess their suitability for producing understanding. More specifically, I suggest distinguishing between two forms of examples according to their suitability for producing understanding. I call these forms samples and exemplars, respectively. Samples are prone to misinterpretation and thus carry the risk of leading to misunderstanding. Exemplars, by contrast, are intentionally designed or chosen to meet contextual requirements and to mitigate the risk of misinterpretation. They are thus preferable for bringing about understanding. By reviewing several XAI approaches directed at image classifiers, I show that most of them explain with samples. If my analysis is correct, it will be beneficial for such explainability methods to use explanations that qualify as exemplars.
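For readers unfamiliar with the visual explanation methods the abstract refers to, the following is a minimal sketch of a gradient-based saliency map, one common way to visually explain an image classifier's prediction. It is illustrative only, not the specific method analyzed in the paper; the choice of model (a torchvision ResNet-18) and the input file name are assumptions.

```python
# Minimal sketch of a gradient-based saliency map, a common form of
# visual explanation for image classifiers. Illustrative only; not
# the specific method analyzed in the paper. The model choice and
# the input file name are assumptions.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "example.jpg" is a placeholder input image.
img = Image.open("example.jpg").convert("RGB")
x = preprocess(img).unsqueeze(0)          # shape: [1, 3, 224, 224]
x.requires_grad_(True)

logits = model(x)
top_class = logits.argmax(dim=1).item()
logits[0, top_class].backward()           # gradient of the top logit w.r.t. pixels

# Saliency: per-pixel gradient magnitude, reduced over colour channels.
# Large values mark pixels to which the predicted class score is most sensitive.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)   # shape: [224, 224]
```

In the paper's terms, such a raw heat map would plausibly count as a sample; whether it qualifies as an exemplar would depend on whether it is deliberately designed or chosen to meet the explainee's contextual requirements.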

Source journal
AI & Society (Computer Science, Artificial Intelligence)
CiteScore: 8.00
Self-citation rate: 20.00%
Articles published: 257
About the journal: AI & Society: Knowledge, Culture and Communication is an international journal publishing refereed scholarly articles, position papers, debates, short communications, and reviews of books and other publications. Established in 1987, the journal focuses on societal issues including the design, use, management, and policy of information, communications and new media technologies, with a particular emphasis on cultural, social, cognitive, economic, ethical, and philosophical implications.

AI & Society has a broad scope and is strongly interdisciplinary. We welcome contributions and participation from researchers and practitioners in a variety of fields including information technologies, humanities, social sciences, arts and sciences. This includes broader societal and cultural impacts, for example on governance, security, sustainability, identity, inclusion, working life, corporate and community welfare, and well-being of people. Co-authored articles from diverse disciplines are encouraged.

AI & Society seeks to promote an understanding of the potential, transformative impacts and critical consequences of pervasive technology for societies. Technological innovations, including new sciences such as biotech, nanotech and neuroscience, offer great potential for societies, but also pose existential risk. Rooted in the human-centred tradition of science and technology, the journal acts as a catalyst, promoter and facilitator of engagement with a diversity of voices and over-the-horizon issues of arts, science, technology and society.

AI & Society expects that, in keeping with the ethos of the journal, submissions should provide a substantial and explicit argument on the societal dimension of research, particularly the benefits, impacts and implications for society. This may include factors such as trust, biases, privacy, reliability, responsibility, and competence of AI systems. Such arguments should be validated by critical comment on current research in this area. Curmudgeon Corner will retain its opinionated ethos.

The journal is in three parts: (a) full-length scholarly articles; (b) strategic ideas, critical reviews and reflections; (c) Student Forum, for emerging researchers and new voices to communicate their ongoing research to the wider academic community, mentored by the Journal Advisory Board; Book Reviews and News; and Curmudgeon Corner for the opinionated.

Papers in the Original Section may include original papers underpinned by theoretical, methodological, conceptual or philosophical foundations. The Open Forum Section may include strategic ideas, critical reviews and potential implications for society of current research. Network Research Section papers make substantial contributions to theoretical and methodological foundations within societal domains; these are multi-authored papers that include a summary of each author's contribution to the paper. Original, Open Forum and Network papers are peer reviewed. The Student Forum Section may include theoretical, methodological, and application orientations of ongoing research, including case studies, as well as contextual action research experiences; papers in this section are normally single-authored and are also formally reviewed. Curmudgeon Corner is a short opinionated column on trends in technology, arts, science and society, commenting emphatically on issues of concern to the research community and wider society.
Normal word lengths: Original and Network articles 10k, Open Forum 8k, Student Forum 6k, Curmudgeon 1k. Exceptions to the co-author limits (Original and Open Forum: 4; Network: 10; Student: 3; Curmudgeon: 2) will be considered for articles making special contributions. Please do not send submissions by email; use the "Submit manuscript" button. NOTE TO AUTHORS: The journal expects submissions to include (a) an acknowledgement of pre-accept/pre-publication versions of the manuscript on non-commercial and academic sites; (b) permissions from the copyright holder/original sources for images; and (c) formal permission from the relevant ethics committee when conducting studies with people.
Latest articles in this journal
The risky success of a mindless automatism
Reflexive ecologies of knowledge in the future of AI & Society
Leveraging teleological explanation to support general-purpose AI assessment
The machine in the manuscript: editorial dilemmas
AI, society, and the shadows of our desires