Visualizing Compression of Deep Learning Models for Classification
Marissa Dotter, C. Ward
2018 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), October 2018 · DOI: 10.1109/AIPR.2018.8707381
Deep learning models have made great strides in tasks such as classification and object detection. However, these models are often computationally intensive, require vast amounts of in-domain data, and typically contain millions or even billions of parameters. They are also relative black boxes when it comes to interpreting and analyzing how they operate on data, or evaluating whether the network is well suited to the data that is available. To address these issues, we investigate off-the-shelf compression techniques that reduce the dimensionality of the parameter space within a Convolutional Neural Network. Compression then allows us to interpret and evaluate the network more efficiently, since only important features are propagated through it.
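One common off-the-shelf compression technique of the kind the abstract alludes to is magnitude-based weight pruning: the smallest-magnitude weights in a layer are zeroed out, so only the most important connections propagate features. The sketch below is illustrative only (the weight matrix and sparsity level are invented, not from the paper):

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest |w|.

    `weights` is a 2-D list standing in for one layer's weight matrix.
    """
    # Sort all absolute weight values to find the pruning threshold.
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)
    threshold = flat[k] if k < len(flat) else float("inf")
    # Weights below the threshold are considered unimportant and set to zero.
    return [[0.0 if abs(w) < threshold else w for w in row] for row in weights]

# Hypothetical 2x3 layer; prune the smallest 50% of weights.
weights = [[0.01, -0.80, 0.30],
           [-0.02, 0.55, -0.05]]
pruned = prune_by_magnitude(weights, 0.5)
print(pruned)  # [[0.0, -0.8, 0.3], [0.0, 0.55, 0.0]]
```

After pruning, the surviving large-magnitude weights carry the layer's function, which is what makes the compressed network easier to visualize and interpret.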