{"title":"RobustMap:生成潜空间中 DNN 对抗鲁棒性的可视化探索。","authors":"Jie Li, Jielong Kuang","doi":"10.1109/TVCG.2024.3471551","DOIUrl":null,"url":null,"abstract":"<p><p>The paper presents a novel approach to visualizing adversarial robustness (called robustness below) of deep neural networks (DNNs). Traditional tests only return a value reflecting a DNN's overall robustness across a fixed number of test samples. Unlike them, we use test samples to train a generative model (GM) and render a DNN's robustness distribution over infinite generated samples within the GM's latent space. The approach extends test samples, enabling users to obtain new test samples to improve feature coverage constantly. Moreover, the distribution provides more information about a DNN's robustness, enabling users to understand a DNN's robustness comprehensively. We propose three methods to resolve the challenges of realizing the approach. Specifically, we (1) map a GM's high-dimensional latent space onto a plane with less information loss for visualization, (2) design a network to predict a DNN's robustness on massive samples to speed up the distribution rendering, and (3) develop a system to supports users to explore the distribution from multiple perspectives. Subjective and objective experiment results prove the usability and effectiveness of the approach.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"RobustMap: Visual Exploration of DNN Adversarial Robustness in Generative Latent Space.\",\"authors\":\"Jie Li, Jielong Kuang\",\"doi\":\"10.1109/TVCG.2024.3471551\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The paper presents a novel approach to visualizing adversarial robustness (called robustness below) of deep neural networks (DNNs). Traditional tests only return a value reflecting a DNN's overall robustness across a fixed number of test samples. Unlike them, we use test samples to train a generative model (GM) and render a DNN's robustness distribution over infinite generated samples within the GM's latent space. The approach extends test samples, enabling users to obtain new test samples to improve feature coverage constantly. Moreover, the distribution provides more information about a DNN's robustness, enabling users to understand a DNN's robustness comprehensively. We propose three methods to resolve the challenges of realizing the approach. Specifically, we (1) map a GM's high-dimensional latent space onto a plane with less information loss for visualization, (2) design a network to predict a DNN's robustness on massive samples to speed up the distribution rendering, and (3) develop a system to supports users to explore the distribution from multiple perspectives. 
Subjective and objective experiment results prove the usability and effectiveness of the approach.</p>\",\"PeriodicalId\":94035,\"journal\":{\"name\":\"IEEE transactions on visualization and computer graphics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-10-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on visualization and computer graphics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/TVCG.2024.3471551\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on visualization and computer graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TVCG.2024.3471551","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
RobustMap: Visual Exploration of DNN Adversarial Robustness in Generative Latent Space.
The paper presents a novel approach to visualizing the adversarial robustness (hereafter, robustness) of deep neural networks (DNNs). Traditional tests return only a single value reflecting a DNN's overall robustness across a fixed number of test samples. In contrast, we use the test samples to train a generative model (GM) and render a DNN's robustness distribution over the infinitely many samples that can be generated within the GM's latent space. The approach extends the test set, enabling users to continually obtain new test samples and improve feature coverage. Moreover, the distribution carries richer information about a DNN's robustness, enabling users to understand it comprehensively. We propose three methods to resolve the challenges of realizing the approach. Specifically, we (1) map a GM's high-dimensional latent space onto a plane with little information loss for visualization, (2) design a network that predicts a DNN's robustness on massive numbers of samples to speed up rendering the distribution, and (3) develop a system that supports users in exploring the distribution from multiple perspectives. Subjective and objective experimental results demonstrate the usability and effectiveness of the approach.
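The abstract only outlines the pipeline, so the following is a minimal sketch of the general idea rather than the authors' implementation: sample latent codes from a generative model, estimate each generated sample's robustness with a simple adversarial proxy, and project the latent codes onto a plane for plotting. The generator G, classifier f, the PCA projection (standing in for the paper's dedicated mapping method), and the FGSM-based robustness proxy (standing in for the paper's robustness-prediction network) are all illustrative assumptions.

```python
# Illustrative sketch only: not the RobustMap implementation.
import torch
import torch.nn.functional as F
from sklearn.decomposition import PCA


def robustness_proxy(f, x, y, eps_grid=(0.01, 0.02, 0.05, 0.1)):
    """Smallest FGSM budget that flips f's prediction on x (larger = more robust)."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(f(x), y).backward()
    grad_sign = x.grad.sign()
    for eps in eps_grid:
        # Assumes inputs live in [0, 1]; adjust the clamp to the data range otherwise.
        x_adv = (x.detach() + eps * grad_sign).clamp(0.0, 1.0)
        if (f(x_adv).argmax(dim=1) != y).item():
            return eps
    return float(max(eps_grid))  # prediction never flipped within the tested budgets


def render_robustness_map(G, f, latent_dim=128, n_samples=2000, device="cpu"):
    """Sample latent codes, generate samples, score robustness, and project codes to 2-D."""
    zs = torch.randn(n_samples, latent_dim, device=device)
    scores = []
    for z in zs:
        x = G(z.unsqueeze(0))               # one generated sample from the latent code
        y = f(x).argmax(dim=1)              # use the DNN's own prediction as the label
        scores.append(robustness_proxy(f, x, y))
    coords = PCA(n_components=2).fit_transform(zs.cpu().numpy())  # latent space -> plane
    return coords, scores                   # scatter/heatmap these to view the distribution
```

A scatter plot of coords colored by scores gives a crude robustness map over the latent space; the paper replaces the brute-force per-sample attack with a learned robustness predictor to make rendering over massive sample sets feasible.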