Authors: Zheng Shou, Yuhao Zhang, H. Cai
Published in: The 2013 International Joint Conference on Neural Networks (IJCNN), August 2013
DOI: 10.1109/IJCNN.2013.6706884
A study of transformation-invariances of deep belief networks
To learn transformation-invariant features, several effective deep architectures have been proposed, such as hierarchical feature learning and variants of the Deep Belief Network (DBN). Given the complexity of these variants, it is natural to ask whether the DBN itself possesses transformation invariances. We first test an ordinarily trained DBN on original data and find that nearly the same error rates are achieved if the weights in the bottom layer are changed according to the transformations present in the test data. This implies that the bottom-layer weights can store the knowledge needed to handle transformations such as rotation, shifting, and scaling. Exploiting the continuous learning ability and good storage capacity of the DBN, we present a Weight-Transformed Training Algorithm (WTTA) that adds no extra layers, units, or filters to the original DBN. WTTA builds on the original training procedure, aims at transforming the weights, and remains unsupervised. In MNIST handwritten-digit recognition experiments, we used a 784-100-100-100 DBN to compare recognition ability over ranges of weight transformations: most error rates produced by WTTA were below 25%, while most produced by the original training algorithm exceeded 25%. We also ran an experiment on part of the MIT-CBCL face database under varying illumination, achieving a best test accuracy of 87.5%. Similar results can be obtained with datasets covering all kinds of transformations, but WTTA needs only the original training data, transforming the weights after each training loop. Consequently, WTTA can mine the inherent transformation invariances of a DBN, and the DBN itself can recognize transformed data at satisfactory error rates without inserting other components.
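The abstract does not give WTTA's details, but one plausible reading of "transform weights after each training loop" is: run an ordinary unsupervised update (e.g., contrastive divergence on the bottom RBM), then apply an image-space transformation directly to each hidden unit's weight filter. The sketch below illustrates that reading on a toy RBM; `cd1_step`, `shift_filters`, the layer sizes, and the use of a horizontal shift via `np.roll` are all our assumptions, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, v0, lr=0.1):
    """One CD-1 update on a (visible x hidden) weight matrix W.
    Biases are omitted for brevity; this is a toy stand-in for
    the paper's original unsupervised DBN training."""
    h0 = sigmoid(v0 @ W)        # up-pass
    v1 = sigmoid(h0 @ W.T)      # reconstruction
    h1 = sigmoid(v1 @ W)        # up-pass on reconstruction
    return W + lr * (v0.T @ h0 - v1.T @ h1) / v0.shape[0]

def shift_filters(W, side, dx=1):
    """Transform each hidden unit's weight 'filter' (reshaped to
    side x side) by a horizontal shift. np.roll is our hypothetical
    stand-in for the transformations the paper mentions
    (rotation, shifting, scaling)."""
    filters = W.T.reshape(-1, side, side)
    shifted = np.roll(filters, dx, axis=2)
    return shifted.reshape(W.shape[1], -1).T

# Toy data: 64 binary "images" of 8x8 = 64 visible units, 16 hidden units.
side = 8
X = (rng.random((64, side * side)) > 0.5).astype(float)
W = 0.01 * rng.standard_normal((side * side, 16))

for epoch in range(5):
    W = cd1_step(W, X)           # ordinary unsupervised update
    W = shift_filters(W, side)   # weight transformation after the loop
```

The key point matching the abstract: the training data is never augmented; only the bottom-layer weights are transformed between loops, so any invariance gained is stored in the weights themselves.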