{"title":"基于持续同源的深度神经网络剪枝","authors":"Satoru Watanabe, H. Yamana","doi":"10.1109/AIKE48582.2020.00030","DOIUrl":null,"url":null,"abstract":"Deep neural networks (DNNs) have improved the performance of artificial intelligence systems in various fields including image analysis, speech recognition, and text classification. However, the consumption of enormous computation resources prevents DNNs from operating on small computers such as edge sensors and handheld devices. Network pruning (NP), which removes parameters from trained DNNs, is one of the prominent methods of reducing the resource consumption of DNNs. In this paper, we propose a novel method of NP, hereafter referred to as PHPM, using persistent homology (PH). PH investigates the inner representation of knowledge in DNNs, and PHPM utilizes the investigation in NP to improve the efficiency of pruning. PHPM prunes DNNs in ascending order of magnitudes of the combinational effects among neurons, which are calculated using the one-dimensional PH, to prevent the deterioration of the accuracy. We compared PHPM with global magnitude pruning method (GMP), which is one of the common baselines to evaluate pruning methods. Evaluation results show that the classification accuracy of DNNs pruned by PHPM outperforms that pruned by GMP.","PeriodicalId":370671,"journal":{"name":"2020 IEEE Third International Conference on Artificial Intelligence and Knowledge Engineering (AIKE)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Deep Neural Network Pruning Using Persistent Homology\",\"authors\":\"Satoru Watanabe, H. Yamana\",\"doi\":\"10.1109/AIKE48582.2020.00030\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep neural networks (DNNs) have improved the performance of artificial intelligence systems in various fields including image analysis, speech recognition, and text classification. However, the consumption of enormous computation resources prevents DNNs from operating on small computers such as edge sensors and handheld devices. Network pruning (NP), which removes parameters from trained DNNs, is one of the prominent methods of reducing the resource consumption of DNNs. In this paper, we propose a novel method of NP, hereafter referred to as PHPM, using persistent homology (PH). PH investigates the inner representation of knowledge in DNNs, and PHPM utilizes the investigation in NP to improve the efficiency of pruning. PHPM prunes DNNs in ascending order of magnitudes of the combinational effects among neurons, which are calculated using the one-dimensional PH, to prevent the deterioration of the accuracy. We compared PHPM with global magnitude pruning method (GMP), which is one of the common baselines to evaluate pruning methods. 
Evaluation results show that the classification accuracy of DNNs pruned by PHPM outperforms that pruned by GMP.\",\"PeriodicalId\":370671,\"journal\":{\"name\":\"2020 IEEE Third International Conference on Artificial Intelligence and Knowledge Engineering (AIKE)\",\"volume\":\"64 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE Third International Conference on Artificial Intelligence and Knowledge Engineering (AIKE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AIKE48582.2020.00030\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE Third International Conference on Artificial Intelligence and Knowledge Engineering (AIKE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIKE48582.2020.00030","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep Neural Network Pruning Using Persistent Homology
Deep neural networks (DNNs) have improved the performance of artificial intelligence systems in fields such as image analysis, speech recognition, and text classification. However, their enormous demand for computational resources prevents DNNs from running on small devices such as edge sensors and handheld devices. Network pruning (NP), which removes parameters from trained DNNs, is one of the prominent methods for reducing this resource consumption. In this paper, we propose a novel NP method based on persistent homology (PH), hereafter referred to as PHPM. PH probes the internal representation of knowledge in DNNs, and PHPM exploits this analysis to improve the efficiency of pruning. To prevent accuracy from deteriorating, PHPM prunes DNNs in ascending order of the magnitudes of the combinational effects among neurons, which are computed using one-dimensional PH. We compared PHPM with the global magnitude pruning method (GMP), a common baseline for evaluating pruning methods. Evaluation results show that DNNs pruned by PHPM achieve higher classification accuracy than those pruned by GMP.
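For context on the baseline, the following is a minimal NumPy sketch of global magnitude pruning under its usual definition: pool the magnitudes of all parameters across layers, then zero out the globally smallest fraction. The layer shapes and the sparsity level are illustrative assumptions, not values from the paper.

    import numpy as np

    def global_magnitude_prune(weights, sparsity):
        # Pool all parameter magnitudes across layers and find the global
        # cutoff below which the smallest `sparsity` fraction falls.
        all_mags = np.concatenate([np.abs(w).ravel() for w in weights])
        threshold = np.quantile(all_mags, sparsity)
        # Zero every parameter whose magnitude is below the global cutoff.
        return [np.where(np.abs(w) < threshold, 0.0, w) for w in weights]

    rng = np.random.default_rng(0)
    layers = [rng.standard_normal((784, 300)), rng.standard_normal((300, 10))]
    pruned = global_magnitude_prune(layers, sparsity=0.8)
    # Overall fraction of zeroed parameters, approximately 0.8:
    print(sum((w == 0).sum() for w in pruned) / sum(w.size for w in pruned))

Because the threshold is computed over the pooled parameters rather than per layer, the resulting per-layer sparsities can differ even though the global sparsity matches the target.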
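The abstract does not spell out how the combinational effects are computed, so the following is only a speculative sketch of a one-dimensional PH computation on a neuron-interaction graph, using the GUDHI library (our choice of tool, not the paper's): edges enter a filtration in order of decreasing weight magnitude, the clique complex is expanded, and the resulting one-dimensional (birth, death) intervals are the kind of quantity from which PHPM-style combinational effects could be derived. The symmetric interaction matrix here is an illustrative stand-in for trained weights.

    import numpy as np
    import gudhi  # pip install gudhi

    def one_dimensional_ph(strengths):
        # strengths: symmetric matrix of nonnegative interaction magnitudes
        # between neurons (illustrative stand-in for trained weights).
        n = strengths.shape[0]
        scale = strengths.max()
        st = gudhi.SimplexTree()
        for i in range(n):
            for j in range(i + 1, n):
                if strengths[i, j] > 0:
                    # Stronger connections enter the filtration earlier.
                    st.insert([i, j], filtration=1.0 - strengths[i, j] / scale)
        st.expansion(2)   # clique complex up to triangles, enough for H1
        st.persistence()  # compute all persistence pairs
        return st.persistence_intervals_in_dimension(1)

    rng = np.random.default_rng(0)
    a = np.abs(rng.standard_normal((12, 12)))
    strengths = np.triu(a, 1) + np.triu(a, 1).T  # symmetric, zero diagonal
    print(one_dimensional_ph(strengths))  # (birth, death) pairs of 1-d cycles

A PHPM-style procedure would then rank parameters by the magnitudes derived from such intervals and prune those with the smallest combinational effects first, mirroring how GMP ranks parameters by raw weight magnitude.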