Title: Classification performance evaluation of latent vector in encoder-decoder model
Authors: K. Kang, Changseok Bae
DOI: 10.1109/ICOIN56518.2023.10048940
Published in: 2023 International Conference on Information Networking (ICOIN)
Publication date: 2023-01-11
Citations: 0
Abstract
This paper compares and analyzes the classification performance of latent vectors in encoder-decoder models. A typical encoder-decoder model, such as an autoencoder, transforms the encoder input into a latent vector and feeds it to the decoder. In this process, the model learns to produce a decoder output similar to the encoder input, so the latent vector can be regarded as a compact abstraction that preserves the characteristics of the encoder input. Furthermore, if the latent vectors guarantee sufficient distance between clusters in the feature space, they can be applied to unsupervised learning. In this paper, the classification performance of latent vectors is analyzed as a preliminary study for applying latent vectors in encoder-decoder models to unsupervised and continual learning. Latent vectors obtained from a stacked autoencoder and two types of CNN-based autoencoders are fed into six classifiers, including KNN and random forest. Experimental results show that the latent vectors from the CNN-based autoencoder with a dense layer achieve classification performance up to 2% higher than those from the stacked autoencoder. Based on these results, the latent vectors obtained from a CNN-based autoencoder with a dense layer can be extended to unsupervised learning.
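The pipeline the abstract describes — train an autoencoder on reconstruction, extract its latent vector, and feed that vector to a conventional classifier such as KNN — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dataset (scikit-learn digits), latent dimension (32), and hyperparameters are assumptions, and a single-hidden-layer `MLPRegressor` trained to reconstruct its input stands in for the stacked autoencoder.

```python
# Sketch (assumed details): reconstruction-trained autoencoder -> latent
# vectors -> KNN classifier, mirroring the evaluation protocol above.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A simple fully connected autoencoder: an MLP regressor trained to
# reconstruct its own input; its hidden layer serves as the latent vector.
ae = MLPRegressor(hidden_layer_sizes=(32,), activation="relu",
                  max_iter=500, random_state=0)
ae.fit(X_tr, X_tr)  # target == input -> reconstruction objective

def latent(model, X):
    # Forward pass up to the hidden layer (ReLU activation).
    return np.maximum(0.0, X @ model.coefs_[0] + model.intercepts_[0])

# Classify the latent vectors with KNN, one of the six classifier types
# mentioned in the abstract.
knn = KNeighborsClassifier(n_neighbors=5).fit(latent(ae, X_tr), y_tr)
acc = knn.score(latent(ae, X_te), y_te)
print(f"KNN accuracy on latent vectors: {acc:.3f}")
```

Swapping the fully connected encoder for a convolutional one (as in the paper's CNN-based autoencoders) changes only how the latent vector is produced; the classifier stage stays the same.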