XAI for Communication Networks
Sayan Mukherjee, J. Rupe, J. Zhu
DOI: 10.1109/ISSREW55968.2022.00093
2022 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW), 2022-10-01
Citations: 0
Abstract
Explainable AI (XAI) is a topic of intense activity in the research community today. However, for AI models deployed in the critical infrastructure of communication networks, explainability alone is not enough to earn the trust of network operations teams comprising human experts with many decades of collective experience. In the present work we discuss some use cases in communication networks and state some of the additional properties, including accountability, that XAI models would have to satisfy before they can be widely deployed. In particular, we advocate for a human-in-the-loop approach to training and validating XAI models. Additionally, we discuss use cases for XAI models in improving data preprocessing and data augmentation techniques, and in refining data labeling rules to produce consistently labeled network datasets.