Explanation–Question–Response dialogue: An argumentative tool for explainable AI

Federico Castagna, P. McBurney, S. Parsons
{"title":"Explanation–Question–Response dialogue: An argumentative tool for explainable AI","authors":"Federico Castagna, P. McBurney, S. Parsons","doi":"10.3233/aac-230015","DOIUrl":null,"url":null,"abstract":"Advancements and deployments of AI-based systems, especially Deep Learning-driven generative language models, have accomplished impressive results over the past few years. Nevertheless, these remarkable achievements are intertwined with a related fear that such technologies might lead to a general relinquishing of our lives’s control to AIs. This concern, which also motivates the increasing interest in the eXplainable Artificial Intelligence (XAI) research field, is mostly caused by the opacity of the output of deep learning systems and the way that it is generated, which is largely obscure to laypeople. A dialectical interaction with such systems may enhance the users’ understanding and build a more robust trust towards AI. Commonly employed as specific formalisms for modelling intra-agent communications, dialogue games prove to be useful tools to rely upon when dealing with user’s explanation needs. The literature already offers some dialectical protocols that expressly handle explanations and their delivery. This paper fully formalises the novel Explanation–Question–Response (EQR) dialogue and its properties, whose main purpose is to provide satisfactory information (i.e., justified according to argumentative semantics) whilst ensuring a simplified protocol, in comparison with other existing approaches, for humans and artificial agents.","PeriodicalId":299930,"journal":{"name":"Argument & Computation","volume":" 44","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Argument & Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3233/aac-230015","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Advancements and deployments of AI-based systems, especially Deep Learning-driven generative language models, have accomplished impressive results over the past few years. Nevertheless, these remarkable achievements are intertwined with a related fear that such technologies might lead to a general relinquishing of control over our lives to AIs. This concern, which also motivates the increasing interest in the eXplainable Artificial Intelligence (XAI) research field, is mostly caused by the opacity of the output of deep learning systems and the way that it is generated, which is largely obscure to laypeople. A dialectical interaction with such systems may enhance the users' understanding and build more robust trust towards AI. Commonly employed as specific formalisms for modelling intra-agent communications, dialogue games prove to be useful tools to rely upon when dealing with users' explanation needs. The literature already offers some dialectical protocols that expressly handle explanations and their delivery. This paper fully formalises the novel Explanation–Question–Response (EQR) dialogue and its properties, whose main purpose is to provide satisfactory information (i.e., justified according to argumentative semantics) whilst ensuring a protocol that is simpler, in comparison with other existing approaches, for both humans and artificial agents.
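As a purely illustrative sketch of what "justified according to argumentative semantics" can mean in practice (this is not the EQR protocol formalised in the paper; the locution names, the toy framework and the dialogue content are invented for the example), the snippet below computes the grounded extension of a small Dung-style argumentation framework and flags which moves of a mock explanation-question-response exchange rest on justified arguments:

```python
# Illustrative sketch only: a toy explanation-question-response exchange
# evaluated against grounded semantics over a Dung-style framework.
# The locutions EXPLAIN/QUESTION/RESPONSE and all content are hypothetical.

def grounded_extension(arguments, attacks):
    """Grounded extension as the least fixed point of the characteristic function."""
    def defended(s):
        # An argument is acceptable w.r.t. s if every attacker is attacked by s.
        return {a for a in arguments
                if all(any((c, b) in attacks for c in s)
                       for (b, t) in attacks if t == a)}
    current = set()
    while True:
        nxt = defended(current)
        if nxt == current:
            return current
        current = nxt

# Toy framework: the explanation "e" is attacked by the challenge "q",
# which is in turn attacked by the supporting response "r".
arguments = {"e", "q", "r"}
attacks = {("q", "e"), ("r", "q")}
grounded = grounded_extension(arguments, attacks)

# A minimal mock exchange: each move carries the argument it relies on.
dialogue = [
    ("EXPLAIN", "e", "The loan was refused because the debt-to-income ratio is too high."),
    ("QUESTION", "q", "Why does that ratio matter for this decision?"),
    ("RESPONSE", "r", "The policy model treats ratios above 40% as high default risk."),
]

for locution, arg_id, content in dialogue:
    justified = arg_id in grounded
    print(f"{locution:<9} [{arg_id}] justified={justified}: {content}")
```

In this toy example only the explanation e and the supporting response r belong to the grounded extension, so they count as justified, while the defeated challenge q does not.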