Designing and evaluating explainable AI for non-AI experts: challenges and opportunities
Maxwell Szymanski, K. Verbert, V. Abeele
Proceedings of the 16th ACM Conference on Recommender Systems, 2022-09-18
DOI: 10.1145/3523227.3547427
Artificial intelligence (AI) has seen steadily increasing use in the health and medical field, where it serves lay users and health experts alike. However, these AI systems often lack transparency regarding their inputs and decision-making process (and are therefore often called black boxes), which in turn can be detrimental to users' satisfaction with and trust in these systems. Explainable AI (XAI) aims to overcome this problem by opening up certain aspects of the black box, and has proven to be a successful means of increasing trust, transparency and even system effectiveness. However, for certain groups (e.g. lay users in health), explanation methods and evaluation metrics remain underexplored. In this paper, we outline our research on designing and evaluating explanations of health recommendations for lay users and domain experts, and list a few takeaways from our initial studies.