{"title":"Lightweight Model for Occlusion Removal from Face Images","authors":"Sincy John, A. Danti","doi":"10.33166/aetic.2024.02.001","DOIUrl":null,"url":null,"abstract":"In the realm of deep learning, the prevalence of models with large number of parameters poses a significant challenge for low computation device. Critical influence of model size, primarily governed by weight parameters in shaping the computational demands of the occlusion removal process. Recognizing the computational burdens associated with existing occlusion removal algorithms, characterized by their propensity for substantial computational resources and large model sizes, we advocate for a paradigm shift towards solutions conducive to low-computation environments. Existing occlusion riddance techniques typically demand substantial computational resources and storage capacity. To support real-time applications, it's imperative to deploy trained models on resource-constrained devices like handheld devices and internet of things (IoT) devices possess limited memory and computational capabilities. There arises a critical need to compress and accelerate these models for deployment on resource-constrained devices, without compromising significantly on model accuracy. Our study introduces a significant contribution in the form of a compressed model designed specifically for addressing occlusion in face images for low computation devices. We perform dynamic quantization technique by reducing the weights of the Pix2pix generator model. The trained model is then compressed, which significantly reduces its size and execution time. The proposed model, is lightweight, due to storage space requirement reduced drastically with significant improvement in the execution time. The performance of the proposed method has been compared with other state of the art methods in terms of PSNR and SSIM. Hence the proposed lightweight model is more suitable for the real time applications with less computational cost.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":"181 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annals of Emerging Technologies in Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.33166/aetic.2024.02.001","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Computer Science","Score":null,"Total":0}
Abstract
In deep learning, the prevalence of models with a large number of parameters poses a significant challenge for low-computation devices. Model size, governed primarily by the weight parameters, strongly influences the computational demands of the occlusion removal process. Existing occlusion removal algorithms typically require substantial computational resources, storage capacity, and large models, which motivates a shift towards solutions suited to low-computation environments. To support real-time applications, trained models must be deployed on resource-constrained platforms such as handheld and Internet of Things (IoT) devices, which possess limited memory and computational capability. There is therefore a critical need to compress and accelerate these models for deployment on resource-constrained devices without significantly compromising accuracy. Our study contributes a compressed model designed specifically for removing occlusions from face images on low-computation devices. We apply dynamic quantization to reduce the precision of the weights of the Pix2pix generator model. The trained model is then compressed, which significantly reduces its size and execution time. The proposed model is lightweight: its storage requirement is reduced drastically and its execution time improves significantly. The performance of the proposed method is compared with other state-of-the-art methods in terms of PSNR and SSIM. The proposed lightweight model is therefore better suited to real-time applications with low computational cost.
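The abstract describes post-training dynamic quantization of a trained generator's weights to shrink model size and speed up inference. The sketch below is an illustrative example only, not the authors' code: it assumes a PyTorch workflow, uses a hypothetical toy generator built from `nn.Linear` layers (PyTorch's dynamic quantization targets Linear/recurrent modules, whereas a real Pix2pix generator is convolutional), and simply compares on-disk size and rough inference latency before and after quantization.

```python
# Illustrative sketch (assumed PyTorch workflow, hypothetical toy generator):
# dynamic post-training quantization plus a size/latency comparison.
import os
import time
import torch
import torch.nn as nn

# Stand-in for a trained generator; a real Pix2pix generator is a convolutional
# U-Net, but Linear layers are used here because torch's dynamic quantization
# applies to Linear/recurrent modules.
generator = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 512),
    nn.ReLU(),
    nn.Linear(512, 3 * 64 * 64),
)

# Dynamic quantization: weights of the listed module types are stored as int8
# and dequantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    generator, {nn.Linear}, dtype=torch.qint8
)

def saved_size_mb(model: nn.Module) -> float:
    """Serialize the model's state dict and report its on-disk size in MB."""
    torch.save(model.state_dict(), "tmp.pt")
    size = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return size

x = torch.randn(1, 3, 64, 64)
for name, model in [("fp32", generator), ("int8-dynamic", quantized)]:
    start = time.perf_counter()
    with torch.no_grad():
        model(x)
    elapsed_ms = 1e3 * (time.perf_counter() - start)
    print(f"{name}: {saved_size_mb(model):.2f} MB, {elapsed_ms:.1f} ms")
```

Under these assumptions, the int8 model's stored weights occupy roughly a quarter of the fp32 footprint, which is the kind of storage and execution-time reduction the paper reports for its compressed Pix2pix generator.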