{"title":"Forgery Detection in Digital Images through Lighting Environment Inconsistencies","authors":"A. Mazumdar, Jefin Jacob, P. Bora","doi":"10.1109/NCC.2018.8600175","DOIUrl":null,"url":null,"abstract":"In image splicing forgery, parts from two or more images are used to create a new composite image. Among the different approaches to expose splicing forgery, lighting environment-based forensics methods are more robust to different post-processing operations. In these methods, the 3D lighting environments are estimated from different parts of the image under investigation. They are later compared with each other to check the authenticity of the image. This paper proposes a novel 3D lighting environment-based image forensics method which can detect splicing forgeries in images containing human faces. The proposed method estimates the lighting environments from facial regions present in the image using shape, illumination, and reflectance from shading or the SIRFS method. SIRFS performs an optimization procedure to get the most likely shape, reflectance and illumination that construct a given image by imposing priors on shape, reflectance and illumination. Once the lighting environments are estimated from all the faces present in the image, they are compared with each other. In case of an authentic image under uniform illumination, the lighting environments estimated from different faces will be similar while there will be at least one pair of faces with different lighting environments in the case of a spliced image. Experimental results on two different datasets show that the proposed method can discriminate different lighting environments better than the state-of-the-art 3D lighting environment-based forensics methods and hence can expose forgeries better.","PeriodicalId":121544,"journal":{"name":"2018 Twenty Fourth National Conference on Communications (NCC)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 Twenty Fourth National Conference on Communications (NCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NCC.2018.8600175","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
In image splicing forgery, parts from two or more images are combined to create a new composite image. Among the different approaches to exposing splicing forgery, lighting-environment-based forensics methods are more robust to post-processing operations. In these methods, the 3D lighting environments are estimated from different parts of the image under investigation and then compared with each other to check the authenticity of the image. This paper proposes a novel 3D lighting-environment-based image forensics method that can detect splicing forgeries in images containing human faces. The proposed method estimates the lighting environments from the facial regions present in the image using the shape, illumination, and reflectance from shading (SIRFS) method. SIRFS performs an optimization procedure to recover the most likely shape, reflectance, and illumination that reconstruct a given image by imposing priors on each of these quantities. Once the lighting environments are estimated from all the faces present in the image, they are compared with each other. For an authentic image captured under uniform illumination, the lighting environments estimated from different faces will be similar, whereas in a spliced image there will be at least one pair of faces with different lighting environments. Experimental results on two different datasets show that the proposed method discriminates between different lighting environments better than the state-of-the-art 3D lighting-environment-based forensics methods and hence exposes forgeries more effectively.
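The pairwise comparison step described above can be illustrated with a minimal sketch. The code below assumes each face's lighting environment is summarized as a 9-dimensional second-order spherical-harmonic coefficient vector (a common representation for 3D lighting environments); the correlation-based similarity measure and the threshold value are illustrative assumptions, not the exact procedure or parameters of the paper.

```python
import itertools
import numpy as np

def lighting_similarity(env_a, env_b):
    """Correlation between two lighting-environment descriptors,
    e.g. 9-D spherical-harmonic coefficient vectors (assumed representation)."""
    a = (env_a - env_a.mean()) / (env_a.std() + 1e-12)
    b = (env_b - env_b.mean()) / (env_b.std() + 1e-12)
    return float(np.dot(a, b) / len(a))

def flag_splicing(face_envs, threshold=0.6):
    """Flag the image as potentially spliced if any pair of faces yields
    lighting environments that are too dissimilar. The threshold is a
    hypothetical value for illustration only."""
    suspicious_pairs = []
    for (i, env_i), (j, env_j) in itertools.combinations(enumerate(face_envs), 2):
        if lighting_similarity(env_i, env_j) < threshold:
            suspicious_pairs.append((i, j))
    return len(suspicious_pairs) > 0, suspicious_pairs

# Toy usage with made-up coefficient vectors for two detected faces.
face_envs = [np.random.randn(9), np.random.randn(9)]
is_forged, pairs = flag_splicing(face_envs)
print(is_forged, pairs)
```

In an authentic, uniformly lit image every pair should score above the threshold; a spliced face whose estimated lighting differs from the rest produces at least one low-scoring pair, which is the decision rule the abstract describes.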