Explanations for AI: Computable or Not?

Niko Tsakalakis, L. Carmichael, Sophie Stalla-Bourdillon, L. Moreau, D. Huynh, Ayah Helal
Companion Publication of the 12th ACM Conference on Web Science, published 2020-07-06. DOI: 10.1145/3394332.3402900
Citations: 1

Abstract

Automated decision making continues to be used for a variety of purposes within a multitude of sectors. Ultimately, what makes a ‘good’ explanation is a focus not only for the designers and developers of AI systems, but for many disciplines, including law, philosophy, psychology, history, sociology and human-computer interaction. Given that generating compliant, valid and effective explanations for AI requires a high level of critical, interdisciplinary thinking and collaboration, this area is of particular interest for Web Science.

The workshop ‘Explanations for AI: Computable or Not?’ (exAI’20) aims to bring together researchers, practitioners and representatives of those subject to socially-sensitive decision-making to exchange ideas, methods and challenges as part of an interdisciplinary discussion on explanations for AI. It is hoped that the workshop will build a cross-sectoral, multi-disciplinary and international network of people focusing on explanations for AI, and an agenda to drive this work forward.

exAI’20 will hold two position paper sessions, in which panel members and workshop attendees will debate key issues in an interactive dialogue. By providing time for discussion after each paper, the sessions are intended to stimulate a lively debate on whether explanations for AI are computable or not, uncovering the key arguments for and against the computability of explanations for AI in socially-sensitive decision-making. An introductory keynote from the team behind the PLEAD project (Provenance-Driven & Legally Grounded Explanations for Automated Decisions) will present use cases, scenarios and practical experience of explanations for AI. The keynote will serve as a starting point for the paper-session discussions about the rationale, technologies and/or organisational measures used, and for accounts from different perspectives, e.g. software designers, implementers and those subject to automated decision-making.

By the end of the workshop, attendees will have gained a good insight into the critiques and the advantages of explanations for AI, including the extent to which explanations can or should be made computable. They will have the opportunity to participate in and inform discussions on complex topics in AI explainability, such as the legal requirements for explanations, the extent to which data ethics may drive explanations for AI, the similarities and differences between explanations for AI decisions and manual decisions, what makes a ‘good’ explanation, and the etymology of explanations for socially-sensitive decisions.

exAI’20 is supported by the Engineering and Physical Sciences Research Council [grant number EP/S027238/1]. We would like to thank the organizers of the Web Science 2020 conference for agreeing to host our workshop and for their support.