{"title":"一种基于多模型堆叠自编码器的有损图像压缩算法","authors":"Salam Fraihat, Mohammed Azmi Al-Betar","doi":"10.1016/j.array.2023.100314","DOIUrl":null,"url":null,"abstract":"<div><p>The extensive use of images in many fields increased the demand for image compression algorithms to overcome the transfer bandwidth and storage limitations. With image compression, disk space, and transmission speed can be efficiently reduced. Some of the traditional techniques used for image compression are the JPEG and ZIP formats. The compression rate (CR) in JPEG can be high but to the detriment of the quality factor of the image. ZIP has a low compression rate, where the quality remains almost unaffected. Machine learning (ML) is considered an essential technique for image compression using different algorithms. The most widely used algorithm is Deep Learning (DL), which represents the features of the image at different scales by using different types of layers. In this research, an AutoEncoder (AE) deep learning-based compression algorithm is proposed for lossy image compression and experimented with using three standard dataset types: MNIST, Grayscale, and Color images datasets. A Stacked AE (SAE) for image compression and a binarized content-based image filter are used with a high compression rate while keeping the quality above 85% using structural similarity index metric (SSIM) compared to traditional techniques. In addition, a convolutional neural network (CNN) classification model has been utilized as SAEs compression model selector for each image class. Experimental results demonstrate that the proposed SAE image compression algorithm outperforms the JPEG-encoded algorithm in terms of compression rate (CR) and image quality. The CR that the proposed model achieved with an acceptable reconstruction accuracy was about 85%, which is almost 20% higher than the standard JPEG’s compression rate, with an accuracy of 94.63% SSIM score.</p></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":null,"pages":null},"PeriodicalIF":2.3000,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A novel lossy image compression algorithm using multi-models stacked AutoEncoders\",\"authors\":\"Salam Fraihat, Mohammed Azmi Al-Betar\",\"doi\":\"10.1016/j.array.2023.100314\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The extensive use of images in many fields increased the demand for image compression algorithms to overcome the transfer bandwidth and storage limitations. With image compression, disk space, and transmission speed can be efficiently reduced. Some of the traditional techniques used for image compression are the JPEG and ZIP formats. The compression rate (CR) in JPEG can be high but to the detriment of the quality factor of the image. ZIP has a low compression rate, where the quality remains almost unaffected. Machine learning (ML) is considered an essential technique for image compression using different algorithms. The most widely used algorithm is Deep Learning (DL), which represents the features of the image at different scales by using different types of layers. In this research, an AutoEncoder (AE) deep learning-based compression algorithm is proposed for lossy image compression and experimented with using three standard dataset types: MNIST, Grayscale, and Color images datasets. 
A Stacked AE (SAE) for image compression and a binarized content-based image filter are used with a high compression rate while keeping the quality above 85% using structural similarity index metric (SSIM) compared to traditional techniques. In addition, a convolutional neural network (CNN) classification model has been utilized as SAEs compression model selector for each image class. Experimental results demonstrate that the proposed SAE image compression algorithm outperforms the JPEG-encoded algorithm in terms of compression rate (CR) and image quality. The CR that the proposed model achieved with an acceptable reconstruction accuracy was about 85%, which is almost 20% higher than the standard JPEG’s compression rate, with an accuracy of 94.63% SSIM score.</p></div>\",\"PeriodicalId\":8417,\"journal\":{\"name\":\"Array\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2023-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Array\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2590005623000395\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Array","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2590005623000395","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
A novel lossy image compression algorithm using multi-models stacked AutoEncoders
The extensive use of images in many fields has increased the demand for image compression algorithms that overcome transfer-bandwidth and storage limitations. Image compression efficiently reduces disk-space requirements and transmission time. Traditional techniques for image compression include the JPEG and ZIP formats. JPEG can achieve a high compression rate (CR), but at the expense of image quality; ZIP achieves a lower compression rate while leaving quality almost unaffected. Machine learning (ML) has become an essential approach to image compression, and the most widely used family of methods is deep learning (DL), which represents image features at different scales through different types of layers. In this research, a deep-learning compression algorithm based on AutoEncoders (AEs) is proposed for lossy image compression and evaluated on three standard dataset types: MNIST, grayscale, and color image datasets. A stacked AE (SAE) for image compression and a binarized content-based image filter achieve a high compression rate while keeping quality above 85% on the structural similarity index metric (SSIM), compared to traditional techniques. In addition, a convolutional neural network (CNN) classification model is used to select the appropriate SAE compression model for each image class. Experimental results demonstrate that the proposed SAE image compression algorithm outperforms JPEG encoding in terms of compression rate and image quality. With acceptable reconstruction accuracy, the proposed model achieved a CR of about 85%, almost 20% higher than standard JPEG's compression rate, with an SSIM score of 94.63%.
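The multi-model pipeline described in the abstract, a CNN classifier routing each image to a class-specific stacked autoencoder whose bottleneck activations are binarized into the compressed code, can be sketched roughly as follows. This is a minimal PyTorch illustration under assumed layer sizes, a 28x28 grayscale (MNIST-like) input, and ten classes; the authors' exact architecture, training procedure, and content-based filter are not reproduced here, and the names StackedAE, ClassSelectorCNN, and compress_image are hypothetical.

```python
# Minimal sketch of the multi-model idea: a CNN classifier picks a
# class-specific stacked autoencoder (SAE), whose encoder output is
# binarized to form the compressed code. All sizes are illustrative
# assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn

class StackedAE(nn.Module):
    """Fully connected stacked autoencoder with a small bottleneck."""
    def __init__(self, in_dim=28 * 28, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, code_dim), nn.Sigmoid(),   # codes in [0, 1]
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, in_dim), nn.Sigmoid(),
        )

    def compress(self, x):
        # Threshold the bottleneck activations to get a compact bit code.
        return (self.encoder(x) > 0.5).float()

    def forward(self, x):
        return self.decoder(self.encoder(x))

class ClassSelectorCNN(nn.Module):
    """Small CNN that predicts the image class used to pick an SAE."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def compress_image(img, selector, saes):
    """Route a 1x28x28 image to its class-specific SAE; return (class, code)."""
    cls = selector(img.unsqueeze(0)).argmax(dim=1).item()
    code = saes[cls].compress(img.flatten().unsqueeze(0))
    return cls, code

# Usage with randomly initialized (untrained) models, just to show the data flow.
selector = ClassSelectorCNN()
saes = [StackedAE() for _ in range(10)]
img = torch.rand(1, 28, 28)
cls, code = compress_image(img, selector, saes)
recon = saes[cls].decoder(code).view(1, 28, 28)
print(cls, code.shape, recon.shape)
```

For an evaluation along the lines reported above, the compression rate could be estimated from the ratio of binarized code bits to raw image bits, and reconstruction quality measured with an SSIM implementation such as skimage.metrics.structural_similarity; the paper's exact measurement protocol is not specified in the abstract.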