Exploring Classifiers with Differentiable Decision Boundary Maps
A. Machado, M. Behrisch, A. Telea
Computer Graphics Forum (Eurographics / Wiley-Blackwell), published 2024-06-10
DOI: 10.1111/cgf.15109 — https://onlinelibrary.wiley.com/doi/10.1111/cgf.15109
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15109
Citations: 0
Abstract
Explaining Machine Learning (ML) — and especially Deep Learning (DL) — classifiers' decisions is a subject of interest across fields due to the increasing ubiquity of such models in computing systems. As models get increasingly complex, relying on sophisticated machinery to recognize data patterns, explaining their behavior becomes more difficult. Directly visualizing classifier behavior is in general infeasible, as classifiers partition the data space, which is typically high-dimensional. In recent years, Decision Boundary Maps (DBMs) have been developed, taking advantage of projection and inverse projection techniques. By being able to map 2D points back to the data space and subsequently run a classifier, DBMs represent a slice of classifier outputs. However, we recognize that DBMs without additional explanatory views are limited in their applicability. In this work, we propose augmenting the naive DBM generating process with views that provide more in-depth information about classifier behavior, such as whether the training procedure is locally stable. We describe our proposed views — which we term Differentiable Decision Boundary Maps — over a running example, explaining how our work enables drawing new and useful conclusions from these dense maps. We further demonstrate the value of these conclusions by showing how useful they would be in carrying out or preventing a dataset poisoning attack. We thus provide evidence of the ability of our proposed views to make DBMs significantly more trustworthy and interpretable, increasing their utility as a model understanding tool.
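The naive DBM pipeline the abstract refers to — project the data to 2D, inverse-project a regular grid of 2D points back to the data space, and run the classifier on each reconstructed point — can be sketched as follows. This is a minimal illustration, not the paper's method: a linear PCA-style projection (whose pseudo-inverse stands in for a learned inverse projection) and a nearest-centroid classifier are assumptions chosen to keep the example self-contained.

```python
# Minimal sketch of naive DBM generation.
# Assumptions (not from the paper): linear PCA projection via SVD,
# its pseudo-inverse as the inverse projection, and a nearest-centroid
# classifier as the model whose decision boundaries we map.
import numpy as np

rng = np.random.default_rng(0)

# Toy 4D, two-class dataset.
X = np.vstack([rng.normal(-1.0, 0.5, size=(50, 4)),
               rng.normal(+1.0, 0.5, size=(50, 4))])
y = np.array([0] * 50 + [1] * 50)

# Projection P: R^4 -> R^2 (top-2 principal directions).
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
P = Vt[:2]                                  # 2 x 4 projection matrix

def project(pts):                           # R^4 -> R^2
    return (pts - mean) @ P.T

def inverse_project(pts2d):                 # R^2 -> R^4 (pseudo-inverse)
    return pts2d @ P + mean

# Stand-in classifier: assign each point to the nearest class centroid.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(pts):
    dists = np.linalg.norm(pts[:, None, :] - centroids[None], axis=-1)
    return np.argmin(dists, axis=1)

# DBM: 2D grid -> inverse-project -> classify -> dense label image.
emb = project(X)
xs = np.linspace(emb[:, 0].min(), emb[:, 0].max(), 64)
ys = np.linspace(emb[:, 1].min(), emb[:, 1].max(), 64)
gx, gy = np.meshgrid(xs, ys)
grid2d = np.column_stack([gx.ravel(), gy.ravel()])
dbm = classify(inverse_project(grid2d)).reshape(64, 64)
# Each pixel of `dbm` holds the class predicted for one 2D grid point,
# i.e. a 2D slice of the classifier's partition of the data space.
```

Coloring `dbm` by class label yields the familiar decision-boundary image; the paper's contribution is the additional explanatory views layered on top of this base map, which this sketch does not attempt to reproduce.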
Journal Introduction:
Computer Graphics Forum is the official journal of Eurographics, published in cooperation with Wiley-Blackwell, and is a unique, international source of information for computer graphics professionals interested in graphics developments worldwide. It is now one of the leading journals for researchers, developers and users of computer graphics in both commercial and academic environments. The journal reports on the latest developments in the field throughout the world and covers all aspects of the theory, practice and application of computer graphics.