{"title":"在可解释和可信赖的人工智能中指导人类评估的操作框架","authors":"Roberto Confalonieri, Jose Maria Alonso-Moral","doi":"10.1109/mis.2023.3334639","DOIUrl":null,"url":null,"abstract":"The assessment of explanations by humans presents a significant challenge within the context of explainable and trustworthy artificial intelligence. This is attributed not only to the absence of universal metrics and standardized evaluation methods but also to the complexities tied to devising user studies that assess the perceived human comprehensibility of these explanations. To address this gap, we introduce a survey-based methodology for guiding the human evaluation of explanations. This approach amalgamates leading practices from existing literature and is implemented as an operational framework. This framework assists researchers throughout the evaluation process, encompassing hypothesis formulation, online user study implementation and deployment, and analysis and interpretation of collected data. The application of this framework is exemplified through two practical user studies.","PeriodicalId":13160,"journal":{"name":"IEEE Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":5.6000,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An Operational Framework for Guiding Human Evaluation in Explainable and Trustworthy Artificial Intelligence\",\"authors\":\"Roberto Confalonieri, Jose Maria Alonso-Moral\",\"doi\":\"10.1109/mis.2023.3334639\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The assessment of explanations by humans presents a significant challenge within the context of explainable and trustworthy artificial intelligence. This is attributed not only to the absence of universal metrics and standardized evaluation methods but also to the complexities tied to devising user studies that assess the perceived human comprehensibility of these explanations. To address this gap, we introduce a survey-based methodology for guiding the human evaluation of explanations. This approach amalgamates leading practices from existing literature and is implemented as an operational framework. This framework assists researchers throughout the evaluation process, encompassing hypothesis formulation, online user study implementation and deployment, and analysis and interpretation of collected data. 
The application of this framework is exemplified through two practical user studies.\",\"PeriodicalId\":13160,\"journal\":{\"name\":\"IEEE Intelligent Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.6000,\"publicationDate\":\"2023-11-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Intelligent Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1109/mis.2023.3334639\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/mis.2023.3334639","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
An Operational Framework for Guiding Human Evaluation in Explainable and Trustworthy Artificial Intelligence
Human assessment of explanations is a significant challenge in explainable and trustworthy artificial intelligence, owing not only to the absence of universal metrics and standardized evaluation methods, but also to the complexity of designing user studies that measure how comprehensible people find these explanations. To address this gap, we introduce a survey-based methodology for guiding the human evaluation of explanations. The approach consolidates leading practices from the existing literature and is implemented as an operational framework that assists researchers throughout the evaluation process: formulating hypotheses, implementing and deploying online user studies, and analyzing and interpreting the collected data. The application of the framework is illustrated through two practical user studies.
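The abstract gives no implementation details, but the analysis step it mentions can be made concrete with a short sketch. The snippet below is purely illustrative and not taken from the paper: the conditions, ratings, and variable names are invented. It compares hypothetical 5-point Likert comprehensibility ratings from two explanation conditions using a Mann-Whitney U test, a common non-parametric choice for ordinal survey data.

    # Illustrative only: a hypothetical analysis step for a user study comparing
    # two explanation formats. The data and names are invented, not from the paper.
    from scipy.stats import mannwhitneyu

    # Hypothetical 5-point Likert ratings of perceived comprehensibility,
    # one list per explanation condition (e.g., rule-based vs. counterfactual).
    ratings_condition_a = [4, 5, 3, 4, 4, 5, 2, 4, 3, 5]
    ratings_condition_b = [3, 2, 4, 3, 2, 3, 3, 2, 4, 3]

    # Likert responses are ordinal, so a non-parametric test is a common default.
    # H0: the two conditions yield the same distribution of ratings.
    statistic, p_value = mannwhitneyu(
        ratings_condition_a, ratings_condition_b, alternative="two-sided"
    )

    print(f"Mann-Whitney U = {statistic:.1f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Reject H0: perceived comprehensibility differs between conditions.")
    else:
        print("Fail to reject H0 at the 0.05 level.")

A full study would also report descriptive statistics and effect sizes, and correct for multiple comparisons when several hypotheses are tested; these are the kinds of analysis and interpretation decisions the framework described in the abstract is meant to guide.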
Journal introduction:
IEEE Intelligent Systems serves users, managers, developers, researchers, and purchasers who are interested in intelligent systems and artificial intelligence, with a particular emphasis on applications. Readers are typically degreed professionals with backgrounds in engineering, hard science, or business. The publication emphasizes current practice and experience, together with promising new ideas that are likely to be used in the near future. Sample topic areas for feature articles include knowledge-based systems, intelligent software agents, natural-language processing, technologies for knowledge management, machine learning, data mining, adaptive and intelligent robotics, knowledge-intensive processing on the Web, and social issues relevant to intelligent systems. Also encouraged are application features covering practice at one or more companies or laboratories; full-length product stories (which require refereeing by at least three reviewers); tutorials; surveys; and case studies. Issues are often theme-based, collecting articles around a contemporary topic under the auspices of a guest editor working with the editor in chief.