Batch-wise Regularization of Deep Neural Networks for Interpretability
Nadia Burkart, Philipp M. Faller, Elisabeth Peinsipp, Marco F. Huber
2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), published 14 September 2020
DOI: 10.1109/MFI49285.2020.9235209
Abstract
Fast progress in the field of Machine Learning and Deep Learning strongly influences research in many application domains such as autonomous driving or health care. In this paper, we propose a batch-wise regularization technique to enhance the interpretability of deep neural networks (NNs) by means of a global surrogate rule list. For this purpose, we introduce a novel regularization approach that yields a differentiable penalty term. In contrast to other regularization approaches, ours avoids repeatedly creating surrogate models during training of the NN. The experiments show that the proposed approach achieves high fidelity to the main model and also yields interpretable and more accurate models compared to some of the baselines.
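The abstract does not spell out the exact form of the penalty term, so the sketch below is only a generic illustration of the overall idea: a differentiable, batch-wise regularizer is added to the task loss so that no surrogate model has to be refit during training. The framework choice (PyTorch) and all names, including the `interpretability_penalty` stand-in, are assumptions and not the paper's actual method.

```python
import torch
import torch.nn as nn

def interpretability_penalty(logits: torch.Tensor) -> torch.Tensor:
    # Hypothetical differentiable penalty: pushes output probabilities
    # towards 0/1 so the network behaves more "rule-like" and is easier
    # to summarize with a global surrogate rule list. Purely illustrative;
    # the paper's actual penalty is not given in the abstract.
    probs = torch.sigmoid(logits)
    return (probs * (1.0 - probs)).mean()

def train_step(model: nn.Module, batch, optimizer, lam: float = 0.1) -> float:
    # Batch-wise regularization: the penalty is computed on the current
    # mini-batch only and backpropagated together with the task loss,
    # avoiding repeated construction of surrogate models during training.
    x, y = batch
    optimizer.zero_grad()
    logits = model(x)
    task_loss = nn.functional.cross_entropy(logits, y)
    loss = task_loss + lam * interpretability_penalty(logits)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, `lam` trades off predictive accuracy against how easily the trained NN can later be approximated by a surrogate rule list; its value here is arbitrary.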