{"title":"将社会解释纳入可解释人工智能 (XAI),以打击错误信息:愿景与挑战","authors":"Yeaeun Gong;Lanyu Shang;Dong Wang","doi":"10.1109/TCSS.2024.3404236","DOIUrl":null,"url":null,"abstract":"This article overviews the state of the art, research challenges, and future directions in our vision: integrating social explanation into explainable artificial intelligence (XAI) to combat misinformation. In our context, “social explanation” is an explanatory approach that reveals the social aspect of misinformation by analyzing sociocontextual cues, such as user attributes, user engagement metrics, diffusion patterns, and user comments. Our vision is motivated by the research gap in the existing XAI that tends to overlook the broader social context in which misinformation spreads. In this article, we first define social explanation, demonstrating it through examples, enabling technologies, and real-world applications. We then outline the unique benefits social explanation brings to the fight against misinformation and discuss the challenges that make our vision complex. The significance of this article lies in introducing the “social explanation” concept in XAI, which has been underexplored in the previous literature. Also, we demonstrate how social explanations can be effectively employed to tackle misinformation and promote collaboration across diverse fields by drawing upon interdisciplinary techniques spanning from computer science, social computing, human–computer interaction, to psychology. We hope that this article will advance progress in the field of XAI and contribute to the ongoing efforts to counter misinformation.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":"11 5","pages":"6705-6726"},"PeriodicalIF":4.5000,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10565780","citationCount":"0","resultStr":"{\"title\":\"Integrating Social Explanations Into Explainable Artificial Intelligence (XAI) for Combating Misinformation: Vision and Challenges\",\"authors\":\"Yeaeun Gong;Lanyu Shang;Dong Wang\",\"doi\":\"10.1109/TCSS.2024.3404236\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This article overviews the state of the art, research challenges, and future directions in our vision: integrating social explanation into explainable artificial intelligence (XAI) to combat misinformation. In our context, “social explanation” is an explanatory approach that reveals the social aspect of misinformation by analyzing sociocontextual cues, such as user attributes, user engagement metrics, diffusion patterns, and user comments. Our vision is motivated by the research gap in the existing XAI that tends to overlook the broader social context in which misinformation spreads. In this article, we first define social explanation, demonstrating it through examples, enabling technologies, and real-world applications. We then outline the unique benefits social explanation brings to the fight against misinformation and discuss the challenges that make our vision complex. The significance of this article lies in introducing the “social explanation” concept in XAI, which has been underexplored in the previous literature. 
Also, we demonstrate how social explanations can be effectively employed to tackle misinformation and promote collaboration across diverse fields by drawing upon interdisciplinary techniques spanning from computer science, social computing, human–computer interaction, to psychology. We hope that this article will advance progress in the field of XAI and contribute to the ongoing efforts to counter misinformation.\",\"PeriodicalId\":13044,\"journal\":{\"name\":\"IEEE Transactions on Computational Social Systems\",\"volume\":\"11 5\",\"pages\":\"6705-6726\"},\"PeriodicalIF\":4.5000,\"publicationDate\":\"2024-06-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10565780\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Computational Social Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10565780/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, CYBERNETICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Computational Social Systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10565780/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
Integrating Social Explanations Into Explainable Artificial Intelligence (XAI) for Combating Misinformation: Vision and Challenges
This article overviews the state of the art, research challenges, and future directions in our vision: integrating social explanation into explainable artificial intelligence (XAI) to combat misinformation. In our context, “social explanation” is an explanatory approach that reveals the social aspect of misinformation by analyzing sociocontextual cues, such as user attributes, user engagement metrics, diffusion patterns, and user comments. Our vision is motivated by a gap in existing XAI research, which tends to overlook the broader social context in which misinformation spreads. In this article, we first define social explanation, demonstrating it through examples, enabling technologies, and real-world applications. We then outline the unique benefits social explanation brings to the fight against misinformation and discuss the challenges that make our vision complex. The significance of this article lies in introducing the concept of “social explanation” into XAI, which has been underexplored in the previous literature. Also, we demonstrate how social explanations can be effectively employed to tackle misinformation and promote collaboration across diverse fields by drawing upon interdisciplinary techniques spanning computer science, social computing, human–computer interaction, and psychology. We hope that this article will advance progress in the field of XAI and contribute to the ongoing efforts to counter misinformation.
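To make the abstract's notion of "social explanation" concrete, the following minimal Python sketch (not taken from the article; all field names, thresholds, and wording are illustrative assumptions) shows how sociocontextual cues such as user attributes, engagement metrics, diffusion patterns, and user comments might be mapped to human-readable explanatory statements accompanying a misinformation prediction. A real system would learn which cues matter, for example via attention weights or feature attribution, rather than the hand-set heuristics used here.

```python
# Hypothetical sketch: surfacing social cues as explanations for a misinformation flag.
# All attribute names and thresholds are illustrative, not from the article.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SocialContext:
    """Socio-contextual cues attached to a social media post (illustrative fields)."""
    account_age_days: int                 # user attribute
    follower_count: int                   # user attribute
    share_count: int                      # engagement metric
    reply_count: int                      # engagement metric
    cascade_depth: int                    # diffusion pattern: depth of the reshare tree
    skeptical_comments: List[str] = field(default_factory=list)  # user comments


def social_explanation(ctx: SocialContext) -> Dict[str, str]:
    """Map each available social cue to a human-readable explanatory statement."""
    cues: Dict[str, str] = {}
    if ctx.account_age_days < 30:
        cues["user attribute"] = "Posted by an account created less than a month ago."
    if ctx.share_count > 0 and ctx.reply_count / max(ctx.share_count, 1) < 0.1:
        cues["engagement"] = "Shared widely but rarely discussed, a pattern often linked to low-scrutiny spread."
    if ctx.cascade_depth > 5:
        cues["diffusion"] = "The reshare cascade is unusually deep, suggesting viral amplification."
    if ctx.skeptical_comments:
        cues["comments"] = f"{len(ctx.skeptical_comments)} users questioned the claim in replies."
    return cues


if __name__ == "__main__":
    ctx = SocialContext(account_age_days=12, follower_count=80,
                        share_count=5400, reply_count=120, cascade_depth=9,
                        skeptical_comments=["Source?", "This was debunked."])
    for cue_type, statement in social_explanation(ctx).items():
        print(f"[{cue_type}] {statement}")
```

In this sketch the explanation is built from the social context of the post rather than from the text content alone, which is the distinction the article draws between social explanation and conventional content-focused XAI.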
About the journal:
IEEE Transactions on Computational Social Systems focuses on topics such as modeling, simulation, analysis, and understanding of social systems from a quantitative and/or computational perspective. "Systems" include man-man, man-machine, and machine-machine organizations and adversarial situations, as well as social media structures and their dynamics. More specifically, the Transactions publishes articles on modeling the dynamics of social systems, methodologies for incorporating and representing socio-cultural and behavioral aspects in computational modeling, analysis of social system behavior and structure, and paradigms for social systems modeling and simulation. The journal also features articles on social network dynamics, social intelligence and cognition, social systems design and architectures, socio-cultural modeling and representation, and computational behavior modeling, along with their applications.