Using autoencoder to facilitate information retention for data dimension reduction
Cheng-Yu Chen, Jenq-Shiou Leu, S. W. Prakosa
2018 3rd International Conference on Intelligent Green Building and Smart Grid (IGBSG), April 2018
DOI: 10.1109/IGBSG.2018.8393545
With the growth of the Internet, large volumes of heterogeneous data are generated rapidly, and the number of features per sample also increases as data-collection techniques mature. Interpreting such data is rarely straightforward, since it requires some background in data pre-processing. Dimensionality reduction (DR) has therefore become a familiar way to reduce the number of features while keeping the critical information. However, some loss of information during dimensionality reduction is unavoidable, and when the target dimension is far lower than the original dimension, the loss can become too high to be tolerable. To address this problem, we use the encoder structure of an autoencoder and compare it against several common dimensionality reduction methods. We apply the simplest autoencoder structure as a preprocessing step for a Support Vector Machine (SVM) and evaluate the results.
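The pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract does not specify the dataset, architecture, or hyperparameters, so the single-hidden-layer autoencoder (written from scratch in NumPy), the 64-to-8 compression, the learning rate, and the use of scikit-learn's digits dataset and `SVC` are all illustrative assumptions. Only the encoder half is kept after training, and its output becomes the SVM's input features.

```python
# Sketch: train a one-hidden-layer autoencoder, then feed the encoder's
# low-dimensional codes to an SVM classifier. All choices below (dataset,
# code size, learning rate, iteration count) are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)           # 64 input features per sample
X = X / 16.0                                  # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

d_in, d_hid = X.shape[1], 8                   # compress 64 dims down to 8
W_enc = rng.normal(0, 0.1, (d_in, d_hid))     # encoder weights
W_dec = rng.normal(0, 0.1, (d_hid, d_in))     # decoder weights
lr = 0.1

for _ in range(500):                          # full-batch gradient descent on MSE
    H = np.tanh(X_tr @ W_enc)                 # encoder: nonlinear projection
    X_hat = H @ W_dec                         # decoder: linear reconstruction
    err = X_hat - X_tr                        # reconstruction error
    grad_dec = H.T @ err / len(X_tr)
    grad_enc = X_tr.T @ ((err @ W_dec.T) * (1 - H**2)) / len(X_tr)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

def encode(A):
    """Keep only the encoder half as the dimensionality-reduction map."""
    return np.tanh(A @ W_enc)

clf = SVC().fit(encode(X_tr), y_tr)           # SVM on the 8-D codes
acc = clf.score(encode(X_te), y_te)
print(f"SVM accuracy on 8-D autoencoder codes: {acc:.3f}")
```

The key design point mirrored here is that the decoder exists only to provide a training signal; at inference time the encoder alone acts as the DR preprocessing step, in place of methods such as PCA.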