An explainable fast deep neural network for emotion recognition
Francesco Di Luzio, Antonello Rosato, Massimo Panella
Biomedical Signal Processing and Control, Volume 100, Article 107177. Published 2024-11-14. DOI: 10.1016/j.bspc.2024.107177. Impact Factor 4.9, JCR Q1 (Engineering, Biomedical).
https://www.sciencedirect.com/science/article/pii/S1746809424012357
In the context of artificial intelligence, the inherent human attribute of engaging in logical reasoning to facilitate decision-making is mirrored by the concept of explainability, which pertains to the ability of a model to provide a clear and interpretable account of how it arrived at a particular outcome. This study explores explainability techniques for binary deep neural architectures in the framework of emotion classification through video analysis. We investigate the optimization of the input features to binary classifiers for emotion recognition, combined with facial landmark detection, using an improved version of the Integrated Gradients explainability method. The main contribution of this paper is the use of an innovative explainable artificial intelligence algorithm to identify the facial landmark movements most characteristic of emotional expression, and to exploit this information to improve the performance of deep learning-based emotion classifiers. By means of explainability, we can optimize the number and position of the facial landmarks used as input features for facial emotion recognition, lowering the impact of noisy landmarks and thus increasing the accuracy of the developed models. To test the effectiveness of the proposed approach, we considered a set of deep binary models for emotion classification, trained initially with a complete set of facial landmarks that is then progressively reduced by a suitable optimization procedure. The results demonstrate the robustness of the proposed explainable approach in terms of understanding the relevance of the different facial points for the different emotions, improving classification accuracy and reducing computational cost.
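The abstract describes attributing classifier output to individual landmark features via Integrated Gradients and then pruning the least relevant landmarks. The paper's own model and improved attribution variant are not reproduced here; the following is a minimal sketch of standard Integrated Gradients on a hypothetical differentiable binary classifier (a toy logistic unit with made-up weights), followed by top-k landmark selection:

```python
import numpy as np

# Illustrative sketch, not the authors' implementation: standard Integrated
# Gradients over landmark input features, then retain the k most relevant
# landmarks. The "model" is a hypothetical logistic unit with toy weights.

rng = np.random.default_rng(0)
n_landmarks = 10                      # toy number of landmark features
w = rng.normal(size=n_landmarks)      # toy model weights
b = 0.1

def model(x):
    """Sigmoid of a linear score: stands in for a deep binary classifier."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def grad(x):
    """Analytic gradient of the model output w.r.t. the input features."""
    s = model(x)
    return s * (1.0 - s) * w

def integrated_gradients(x, baseline, steps=64):
    """Midpoint Riemann sum for IG_i = (x_i - x'_i) * integral of dF/dx_i."""
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.mean(
        [grad(baseline + a * (x - baseline)) for a in alphas], axis=0
    )
    return (x - baseline) * avg_grad

x = rng.normal(size=n_landmarks)      # one sample of landmark features
baseline = np.zeros(n_landmarks)      # all-zero reference input
attr = integrated_gradients(x, baseline)

# Completeness axiom: attributions sum to F(x) - F(baseline).
assert np.isclose(attr.sum(), model(x) - model(baseline), atol=1e-3)

# Keep only the k landmarks with the largest absolute attribution;
# the rest are treated as noisy and dropped from the input features.
k = 4
keep = np.argsort(-np.abs(attr))[:k]
print("retained landmark indices:", sorted(keep.tolist()))
```

In practice each "feature" would be a landmark trajectory over video frames and the attributions would be averaged over a validation set before pruning; the completeness check above is a useful sanity test that the Riemann approximation is tight enough.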
Journal introduction:
Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with the practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management.
Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of both engineering and clinical science. The scope of the journal is defined to include relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.