Quaternion Vector Quantized Variational Autoencoder
Hui Luo; Xin Liu; Jian Sun; Yang Zhang
DOI: 10.1109/LSP.2024.3504374
IEEE Signal Processing Letters, vol. 32, pp. 151-155, published 2024-11-22
https://ieeexplore.ieee.org/document/10763454/
Citations: 0
Abstract
Vector quantized variational autoencoders, as variants of variational autoencoders, effectively capture discrete representations by quantizing continuous latent spaces and are widely used in generative tasks. However, these models still face limitations in handling complex image reconstruction, particularly in preserving high-quality details. Moreover, quaternion neural networks have shown unique advantages in handling multi-dimensional data, indicating that integrating quaternion approaches could potentially improve the performance of these autoencoders. To this end, we propose QVQ-VAE, a lightweight network in the quaternion domain that introduces a quaternion-based quantization layer and training strategy to improve reconstruction precision. By fully leveraging quaternion operations, QVQ-VAE reduces the number of model parameters, thereby lowering computational resource demands. Extensive evaluations on face and general object reconstruction tasks show that QVQ-VAE consistently outperforms existing methods while using significantly fewer parameters.
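The abstract describes a quaternion-based quantization layer but gives no implementation details, so the following is only an illustrative sketch of the general idea: treating each latent element as a quaternion (a 4-vector) and snapping it to the nearest quaternion codeword, as in standard vector quantization. The function name, shapes, and use of plain Euclidean distance are all assumptions, not the paper's method.

```python
import numpy as np

def quaternion_quantize(z, codebook):
    """Quantize latent quaternions to their nearest codebook entries.

    z: (N, 4) array of latent quaternions.
    codebook: (K, 4) array of quaternion codewords.
    Returns the quantized latents (N, 4) and chosen indices (N,).
    """
    # Squared distance between every latent quaternion and every codeword.
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    idx = d.argmin(axis=1)  # index of the nearest codeword per latent
    return codebook[idx], idx

# Toy usage: 8 latent quaternions quantized against 16 codewords.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 4))
codebook = rng.normal(size=(16, 4))
zq, idx = quaternion_quantize(z, codebook)
```

In a full VQ-VAE this quantization step would sit between encoder and decoder, with a straight-through gradient estimator and a codebook-commitment loss during training; those components are omitted here for brevity.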
About the journal:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP and ICIP, and also at several workshops organized by the Signal Processing Society.