Filipe Assunção, David Sereno, Nuno Lourenço, P. Machado, B. Ribeiro
2018 IEEE Congress on Evolutionary Computation (CEC), 8 July 2018. DOI: 10.1109/CEC.2018.8477874
Automatic Evolution of AutoEncoders for Compressed Representations
Developing learning systems is challenging in many ways: often the structure and parameters of the learning algorithm must be optimised, and it is necessary to decide which data representation to use, i.e., we usually have to design features and select the most representative and useful ones. In this work we focus on the latter and investigate whether it is possible to obtain good performance with compressed versions of the original data, possibly reducing the learning time. The process of compressing the data, i.e., reducing its dimensionality, is typically conducted by someone with domain knowledge and expertise, who engineers features in an endless trial-and-error cycle. Our goal is to obtain such compressed versions automatically; to that end, we use an Evolutionary Algorithm to generate the structure of AutoEncoders. Instead of targeting the reconstruction of the images, we focus on the reconstruction of the mean signal of each class, so that the goal is to acquire the most representative characteristics of each class. Results on the MNIST dataset show that the proposed approach not only reduces the dimensionality of the original dataset, but also yields classifiers whose performance on the compressed representation is superior to their performance on the original uncompressed images.
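The two ingredients described in the abstract — replacing the autoencoder's reconstruction target with the mean signal of each class, and evolving the network structure with an Evolutionary Algorithm — can be illustrated with a minimal sketch. Everything below is hypothetical and not taken from the paper: it uses toy random data in place of MNIST, a linear SVD projection as a cheap stand-in for training an autoencoder, and a simple (1+1) evolutionary loop over a genotype of layer sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for MNIST: 100 samples of 64-dimensional "images", 3 classes.
X = rng.normal(size=(100, 64))
y = rng.integers(0, 3, size=100)

# Key idea: the reconstruction target is not the input image itself but
# the mean signal of that image's class.
class_means = np.stack([X[y == c].mean(axis=0) for c in range(3)])
targets = class_means[y]                      # shape (100, 64)

# Hypothetical genotype: hidden-layer sizes, last gene = bottleneck width.
def mutate(genome):
    g = list(genome)
    i = rng.integers(len(g))
    g[i] = max(2, g[i] + int(rng.integers(-8, 9)))
    return g

def fitness(genome):
    # Cheap proxy for training an autoencoder: a linear projection onto the
    # top-k SVD components, where k is the evolved bottleneck width.
    k = genome[-1]
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    Z = (X - mu) @ Vt[:k].T                   # compressed representation
    recon = Z @ Vt[:k] + mu
    # Negative MSE against the class-mean targets (higher is better).
    return -np.mean((recon - targets) ** 2)

# Minimal (1+1) evolutionary loop: mutate, keep the child if no worse.
best = [32, 8]
best_fit = fitness(best)
for _ in range(20):
    child = mutate(best)
    f = fitness(child)
    if f >= best_fit:
        best, best_fit = child, f
```

In a faithful implementation the fitness function would train a full autoencoder whose layer structure is decoded from the genotype, and the evolved compressed representation `Z` would then be fed to a downstream classifier.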