{"title":"Efficient binarizing split learning based deep models for mobile applications","authors":"N. D. Pham, Hong Dien Nguyen, Dinh Hoa Dang","doi":"10.1063/5.0066470","DOIUrl":null,"url":null,"abstract":"Split Neural Network is a state-of-the-art distributed machine learning technique to enable on-device deep learning applications without accessing to local data. Recently, Abuadbba et al. carried out the use of split learning to perform privacy-preserving training for 1D CNN models on ECG medical data. However, the proposed method is limited by the processing ability of resource-constrained devices such as mobile devices. In this paper, we attempt to binarize localized neural networks to reduce computation costs and memory usage that is friendly with hardware. Theoretically analysis and evaluation results show that our method exceeds BNN and almost reaches CNN performance, while significantly reducing memory usage and computation costs on devices. Therefore, on the basis of these results, we have come to the conclusion that binarization is a potential technique for implementing deep learning models on mobile devices.","PeriodicalId":253890,"journal":{"name":"1ST VAN LANG INTERNATIONAL CONFERENCE ON HERITAGE AND TECHNOLOGY CONFERENCE PROCEEDING, 2021: VanLang-HeriTech, 2021","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"1ST VAN LANG INTERNATIONAL CONFERENCE ON HERITAGE AND TECHNOLOGY CONFERENCE PROCEEDING, 2021: VanLang-HeriTech, 2021","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1063/5.0066470","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Split Neural Network is a state-of-the-art distributed machine learning technique that enables on-device deep learning applications without accessing local data. Recently, Abuadbba et al. applied split learning to perform privacy-preserving training of 1D CNN models on ECG medical data. However, their method is limited by the processing capability of resource-constrained devices such as mobile phones. In this paper, we binarize the local (client-side) portion of the network to reduce computation cost and memory usage in a hardware-friendly manner. Theoretical analysis and evaluation results show that our method exceeds the performance of a BNN and nearly matches that of the full-precision CNN, while significantly reducing memory usage and computation cost on the device. On the basis of these results, we conclude that binarization is a promising technique for deploying deep learning models on mobile devices.
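The idea can be illustrated with a minimal sketch below. It assumes PyTorch, sign-based weight binarization with a straight-through estimator, and an illustrative 1D CNN split into a client part and a server part; the layer sizes, split point, and binarization scheme here are placeholders and may differ from the paper's actual design.

```python
# Minimal sketch: binarizing the client-side layers of a split 1D CNN.
# Weights are binarized with sign() in the forward pass; gradients flow
# back through a straight-through estimator (STE) so training still works.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """sign() forward, clipped identity gradient backward (straight-through)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass the gradient only where |x| <= 1, as in standard BNN training.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)


class BinaryConv1d(nn.Conv1d):
    """1D convolution whose weights are binarized to {-1, +1} at forward time."""

    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        return F.conv1d(x, w_bin, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)


# Client-side model: binarized layers intended to run on the mobile device.
client = nn.Sequential(
    BinaryConv1d(1, 16, kernel_size=7, padding=3),
    nn.BatchNorm1d(16),
    nn.Hardtanh(),           # bounded activation, common in BNN-style training
    nn.MaxPool1d(2),
)

# Server-side model: full-precision layers after the split (cut) layer.
server = nn.Sequential(
    nn.Conv1d(16, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, 5),        # 5 output classes, purely illustrative
)

if __name__ == "__main__":
    x = torch.randn(8, 1, 128)          # a batch of 1D signals (e.g. ECG segments)
    smashed = client(x)                 # "smashed" activations sent client -> server
    logits = server(smashed)
    loss = F.cross_entropy(logits, torch.randint(0, 5, (8,)))
    loss.backward()                     # gradients flow back through the STE
    print(logits.shape, loss.item())
```

In this sketch only the client layers are binarized, so the device stores 1-bit weights and performs sign-based convolutions, while the server keeps full precision; this is one plausible way to realize the memory and computation savings described above.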