{"title":"Making It Simple? Training Deep Learning Models Toward Simplicity","authors":"M. Repetto, D. La Torre","doi":"10.1109/DASA54658.2022.9765248","DOIUrl":null,"url":null,"abstract":"Deep Learning aims to achieve high performances at the expense of explainability. Explainable Artificial Intelligence consists of all the methods addressing this problem. These methods do not provide interpretability right away, and their usage is limited to model debugging. Furthermore, it’s unclear when an explanation qualifies as understandable. This paper aims at creating a double backpropagation technique restricting the model’s feature effects. The approach ensures interpretable Deep Learning models’ explanations during the learning phase. The problem is framed as a Multicriteria one allowing the stakeholders to control the degree of regularization. As a result, the Deep Learning model embodies simple interpretability from the start and is compliant with recent regulations. A series of numerical examples show that our method produces performant yet flexible models that can generalize even when data is scarce.","PeriodicalId":231066,"journal":{"name":"2022 International Conference on Decision Aid Sciences and Applications (DASA)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Decision Aid Sciences and Applications (DASA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DASA54658.2022.9765248","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Deep Learning achieves high performance at the expense of explainability. Explainable Artificial Intelligence comprises the methods that address this problem, but these methods do not provide interpretability out of the box, and their use is largely limited to model debugging. Furthermore, it is unclear when an explanation qualifies as understandable. This paper develops a double backpropagation technique that restricts the model's feature effects, ensuring that a Deep Learning model's explanations are interpretable from the learning phase onward. The problem is framed as a multicriteria one, allowing stakeholders to control the degree of regularization. As a result, the Deep Learning model embodies simple interpretability from the start and complies with recent regulations. A series of numerical examples shows that our method produces performant yet flexible models that generalize even when data is scarce.
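The core idea behind double backpropagation, penalizing the model's input gradients so that feature effects stay simple, can be illustrated with a minimal sketch. This is not the paper's implementation: for simplicity it uses a linear model, where the input gradient df/dx equals the weight vector w, so the gradient penalty ||df/dx||^2 reduces to ||w||^2, and the multicriteria trade-off is collapsed into a single scalar weight `lam` (all names here are illustrative assumptions).

```python
import numpy as np

def combined_loss(w, b, X, y, lam):
    """Task loss plus input-gradient penalty (multicriteria scalarization)."""
    preds = X @ w + b
    task = np.mean((preds - y) ** 2)   # predictive criterion (MSE)
    penalty = np.sum(w ** 2)           # ||df/dx||^2 for a linear model f(x) = w.x + b
    return task + lam * penalty

def grad_step(w, b, X, y, lam, lr=0.1):
    """One gradient-descent step on the combined objective."""
    n = len(y)
    resid = X @ w + b - y
    gw = (2.0 / n) * X.T @ resid + 2.0 * lam * w  # task gradient + penalty gradient
    gb = (2.0 / n) * resid.sum()
    return w - lr * gw, b - lr * gb

# Tiny synthetic regression problem (illustrative data, not from the paper).
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, 0.0, -1.0])
y = X @ true_w

w, b = np.zeros(3), 0.0
for _ in range(200):
    w, b = grad_step(w, b, X, y, lam=0.01)
```

Raising `lam` shrinks the feature effects toward zero (simpler explanations) at some cost in fit; in a deep network the same penalty would be computed by differentiating the loss gradient with respect to the inputs, hence "double" backpropagation.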