Pub Date: 2020-12-06 | DOI: 10.1109/WIFS49906.2020.9360891
A. Pedrouzo-Ulloa, J. Troncoso-Pastoriza, Nicolas Gama, Mariya Georgieva, F. Pérez-González
The Ring Learning with Errors (RLWE) problem has become one of the most widely used cryptographic assumptions for the construction of modern cryptographic primitives. Most of these solutions use power-of-two cyclotomic rings, mainly due to their simplicity and efficiency. This work explores the possibility of replacing them with multiquadratic rings and shows that the latter can bring important efficiency improvements by reducing the cost of the underlying polynomial operations. We introduce a generalized version of the fast Walsh-Hadamard Transform which enables faster degree-n polynomial multiplications by reducing the required elemental products by a factor of $\mathcal{O}(\log n)$. Finally, we showcase how these rings find immediate application in the implementation of Oblivious Linear Function Evaluation (OLE) primitives, one of the main building blocks used inside Secure Multiparty Computation (MPC) protocols.
Title: Multiquadratic Rings and Walsh-Hadamard Transforms for Oblivious Linear Function Evaluation
Published in: 2020 IEEE International Workshop on Information Forensics and Security (WIFS)
Pub Date: 2020-11-16 | DOI: 10.1109/WIFS49906.2020.9360901
L. Bondi, E. D. Cannas, Paolo Bestagini, S. Tubaro
The fast and continuous growth in the number and quality of deepfake videos calls for the development of reliable detection systems capable of automatically warning users on social media and on the Internet about the potential untruthfulness of such content. While algorithms, software, and smartphone apps are getting better every day at generating manipulated videos and swapping faces, the accuracy of automated systems for face forgery detection in videos is still quite limited and generally biased toward the dataset used to design and train a specific detection system. In this paper we analyze how different training strategies and data augmentation techniques affect CNN-based deepfake detectors when training and testing on the same dataset or across different datasets.
Title: Training Strategies and Data Augmentations in CNN-based DeepFake Video Detection
Published in: 2020 IEEE International Workshop on Information Forensics and Security (WIFS)
Pub Date: 2020-10-08 | DOI: 10.1109/WIFS49906.2020.9360909
D. M. Montserrat, J'anos Horv'ath, S. Yarlagadda, F. Zhu, E. Delp
Satellite imagery is becoming increasingly accessible due to the growing number of orbiting commercial satellites. Many applications make use of such images: agricultural management, meteorological prediction, damage assessment from natural disasters, and cartography are some examples. Unfortunately, these images can be easily tampered with and modified using image manipulation tools, damaging downstream applications. Because the nature of the manipulation applied to the image is typically unknown, unsupervised methods that do not require prior knowledge of the tampering techniques are preferred. In this paper, we use ensembles of generative autoregressive models to model the distribution of the pixels of the image in order to detect potential manipulations. We evaluate the presented approach and obtain accurate localization results compared to previously presented approaches.
Title: Generative Autoregressive Ensembles for Satellite Imagery Manipulation Detection
Published in: 2020 IEEE International Workshop on Information Forensics and Security (WIFS)
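The detection principle can be illustrated with a much simpler stand-in for the paper's deep autoregressive models: fit a linear causal predictor of each pixel from its already-seen neighbors, then flag regions where the prediction residual is anomalously high. Function names below are hypothetical and this linear model is an assumption for illustration only.

```python
import numpy as np

def fit_ar(img):
    """Least-squares linear predictor of each pixel from its left, top,
    and top-left neighbors (a toy stand-in for a learned autoregressive model)."""
    y = img[1:, 1:].ravel()
    X = np.stack([img[1:, :-1].ravel(),    # left neighbor
                  img[:-1, 1:].ravel(),    # top neighbor
                  img[:-1, :-1].ravel(),   # top-left neighbor
                  np.ones(y.size)], axis=1)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def residual_map(img, w):
    """Squared prediction residual per pixel; large values flag regions
    the model cannot explain, i.e. potential manipulations."""
    y = img[1:, 1:]
    X = np.stack([img[1:, :-1], img[:-1, 1:], img[:-1, :-1],
                  np.ones_like(y)], axis=-1)
    return (y - X @ w) ** 2
```

The paper's ensembles aggregate the scores of several such models; here a single linear predictor illustrates the residual-based localization idea.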
Pub Date: 2020-10-08 | DOI: 10.1109/WIFS49906.2020.9360882
Lázaro J. González Soler, J. Patino, M. Gomez-Barrero, M. Todisco, C. Busch, N. Evans
Biometric systems are nowadays employed across a broad range of applications. They provide high security and efficiency and, in many cases, are user friendly. Despite these and other advantages, biometric systems in general, and automatic speaker verification (ASV) systems in particular, can be vulnerable to attack presentations. The most recent ASVspoof 2019 competition showed that most forms of attack can be detected reliably with ensemble-classifier-based presentation attack detection (PAD) approaches. These, though, depend fundamentally upon the complementarity of the systems in the ensemble. Motivated by the goal of increasing the generalisability of PAD solutions, this paper reports our exploration of texture descriptors applied to the analysis of speech spectrogram images. In particular, we propose a common Fisher vector feature space based on a generative model. Experimental results show the soundness of our approach: at most 16 in 100 bona fide presentations are rejected, whereas only one in 100 attack presentations is accepted.
Title: Texture-based Presentation Attack Detection for Automatic Speaker Verification
Published in: 2020 IEEE International Workshop on Information Forensics and Security (WIFS)
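A Fisher vector feature space of the kind proposed can be sketched as follows. This is the standard first-order FV encoding of local descriptors under a diagonal-covariance GMM, with the usual power and L2 normalisations; the paper's exact descriptors, GMM configuration, and normalisation choices are assumptions not reproduced here.

```python
import numpy as np

def fisher_vector(X, means, covs, weights):
    """First-order Fisher vector of local descriptors X (n, d) under a
    diagonal-covariance GMM (means, covs: (k, d); weights: (k,)).
    Second-order terms are dropped for brevity."""
    n, _ = X.shape
    diff = X[:, None, :] - means[None, :, :]                  # (n, k, d)
    log_p = -0.5 * ((diff ** 2 / covs).sum(axis=2)
                    + np.log(2 * np.pi * covs).sum(axis=1))   # log N(x|mu,cov)
    log_w = np.log(weights) + log_p
    log_w -= log_w.max(axis=1, keepdims=True)                 # stable softmax
    gamma = np.exp(log_w)
    gamma /= gamma.sum(axis=1, keepdims=True)                 # posteriors (n, k)
    fv = (gamma[:, :, None] * diff / np.sqrt(covs)).sum(axis=0)  # (k, d)
    fv /= n * np.sqrt(weights)[:, None]
    fv = fv.ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                    # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)                  # L2 normalisation
```

Descriptors extracted from all spectrograms share the one GMM, which is what makes the resulting space "common" across bona fide and attack classes.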
Pub Date: 2020-09-25 | DOI: 10.1109/WIFS49906.2020.9360903
S. Mandelli, Nicolò Bonettini, Paolo Bestagini, S. Tubaro
Convolutional Neural Networks (CNNs) have proved very accurate in multiple computer vision image classification tasks that required visual inspection in the past (e.g., object recognition, face detection, etc.). Motivated by these astonishing results, researchers have also started using CNNs to cope with image forensic problems (e.g., camera model identification, tampering detection, etc.). However, in computer vision, image classification methods typically rely on visual cues easily detectable by the human eye. Conversely, forensic solutions rely on almost invisible traces that are often very subtle and lie in the fine details of the image under analysis. For this reason, training a CNN to solve a forensic task requires special care, as common processing operations (e.g., resampling, compression, etc.) can strongly hinder forensic traces. In this work, we focus on the effect that JPEG compression has on CNN training, considering different computer vision and forensic image classification problems. Specifically, we consider the issues that arise from JPEG compression and misalignment of the JPEG grid. We show that these effects must be taken into account when generating a training dataset in order to train a forensic detector properly without losing generalization capability, whereas they can almost entirely be ignored for computer vision tasks.
Title: Training CNNs in Presence of JPEG Compression: Multimedia Forensics vs Computer Vision
Published in: 2020 IEEE International Workshop on Information Forensics and Security (WIFS)
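The grid-misalignment issue can be reproduced with a toy model: a blockwise 8x8 DCT with uniform quantization stands in for JPEG (real JPEG adds per-frequency quantization tables, chroma subsampling, and entropy coding), and a random crop offset breaks the alignment between a training patch and the compression grid. Helper names are hypothetical.

```python
import numpy as np

# orthonormal 8-point DCT-II matrix
_C = np.array([[np.sqrt((1 if k == 0 else 2) / 8)
                * np.cos(np.pi * k * (2 * n + 1) / 16)
                for n in range(8)] for k in range(8)])

def jpeg_like(img, step):
    """Toy JPEG: blockwise 8x8 DCT + uniform quantization with one step.
    Image dimensions are assumed to be multiples of 8."""
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            coef = _C @ img[i:i + 8, j:j + 8] @ _C.T
            coef = np.round(coef / step) * step     # quantize in DCT domain
            out[i:i + 8, j:j + 8] = _C.T @ coef @ _C
    return out

def misaligned_patch(img, size, rng):
    """Crop at a random non-zero offset so the patch's 8x8 grid no longer
    matches the compression grid, the mismatch studied in the paper."""
    r, c = rng.integers(1, 8, size=2)
    return img[r:r + size, c:c + size]
```

Training only on grid-aligned patches and testing on misaligned ones is the kind of distribution shift this paper measures for forensic detectors.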
Pub Date: 2020-09-09 | DOI: 10.1109/WIFS49906.2020.9360888
Behrooz Razeghi, F. Calmon, D. Gunduz, S. Voloshynovskiy
We consider the problem of privacy-preserving data release for a specific utility task under a perfect obfuscation constraint. We establish the necessary and sufficient conditions to extract features of the original data that carry as much information about a utility attribute as possible while not revealing any information about the sensitive attribute. This problem formulation generalizes both the information bottleneck and privacy funnel problems. We adopt a local information geometry analysis that provides useful insight into information coupling and the trajectory construction of spherical perturbations of probability mass functions. This analysis allows us to construct the modal decomposition of joint distributions, divergence transfer matrices, and mutual information. By decomposing the mutual information into orthogonal modes, we obtain the locally sufficient statistics for inferences about the utility attribute while satisfying the perfect obfuscation constraint. Furthermore, we develop the notion of perfect obfuscation based on the χ²-divergence and Kullback-Leibler divergence in the Euclidean information space.
Title: On Perfect Obfuscation: Local Information Geometry Analysis
Published in: 2020 IEEE International Workshop on Information Forensics and Security (WIFS)
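A central object of this style of analysis, the divergence transfer matrix, is easy to compute for a discrete joint distribution: its entries are P(x,y)/sqrt(P(x)P(y)), and its SVD yields the modal decomposition. A minimal sketch, with a made-up joint distribution; the paper's specific constructions are not reproduced.

```python
import numpy as np

def divergence_transfer_matrix(P_xy):
    """DTM of a joint pmf P_xy (|X| x |Y|): B[x, y] = P(x,y)/sqrt(P(x)P(y)).
    Its top singular value is always 1, with singular vectors sqrt(P_x)
    and sqrt(P_y); the remaining modes carry the local correlation
    structure between X and Y."""
    P_x = P_xy.sum(axis=1)
    P_y = P_xy.sum(axis=0)
    return P_xy / np.sqrt(np.outer(P_x, P_y))
```

Projecting perturbations of the joint pmf onto the non-trivial singular vectors is what "decomposing the mutual information into orthogonal modes" means operationally.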
Pub Date: 2020-07-25 | DOI: 10.1109/WIFS49906.2020.9360905
M. Barni, Kassem Kallas, Ehsan Nowroozi, B. Tondi
Last-generation GAN models can generate synthetic images which are visually indistinguishable from natural ones, raising the need for tools that distinguish fake from natural images and thus help preserve the trustworthiness of digital images. While modern GAN models can generate very high-quality images with no visible spatial artifacts, reconstructing consistent relationships among colour channels is expectedly more difficult. In this paper, we propose a method for distinguishing GAN-generated from natural images by exploiting inconsistencies among spectral bands, with a specific focus on the generation of synthetic face images. Specifically, we use cross-band co-occurrence matrices, in addition to spatial co-occurrence matrices, as input to a CNN model which is trained to distinguish between real and synthetic faces. Our experimental results confirm the validity of the approach, which outperforms a similar detection technique based on intra-band spatial co-occurrences only. The performance gain is particularly significant with regard to robustness against post-processing such as geometric transformations, filtering, and contrast manipulations.
Title: CNN Detection of GAN-Generated Face Images based on Cross-Band Co-occurrences Analysis
Published in: 2020 IEEE International Workshop on Information Forensics and Security (WIFS)
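The feature extraction step can be sketched in a few lines: a co-occurrence matrix counts how often pixel-value pairs occur, either across two bands at the same location (cross-band) or within one band at a spatial offset (intra-band). This is an illustrative numpy version; the paper's exact offsets and truncation settings are not reproduced.

```python
import numpy as np

def cooccurrence(a, b, levels=256, offset=(0, 0)):
    """Co-occurrence matrix of integer band `a` against band `b` shifted
    by a non-negative spatial `offset`. With two different bands and
    offset=(0, 0) this counts cross-band pairs (e.g. R vs G); with a
    single band and offset=(0, 1) it is the usual intra-band matrix."""
    dr, dc = offset
    h, w = a.shape
    a_ = a[:h - dr, :w - dc]
    b_ = b[dr:, dc:]
    m = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(m, (a_.ravel(), b_.ravel()), 1)   # histogram of value pairs
    return m
```

Stacks of such matrices (RG, RB, GB plus intra-band ones) form the CNN input described in the abstract.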
Pub Date: 2020-06-09 | DOI: 10.1109/WIFS49906.2020.9360890
Jin Keong, Xingbo Dong, Zhe Jin, Khawla Mallat, J. Dugelay
Thermal face image analysis is advantageous in certain circumstances, for example in illumination-sensitive applications such as nighttime surveillance, and in access control scenarios that demand privacy preservation. However, thermal face image analysis remains understudied relative to industry requirements. Detecting facial landmark points is important for many face analysis tasks, such as face recognition, 3D face reconstruction, and facial expression recognition. In this paper, we propose a robust neural-network-based facial landmark detector, namely Deep Multi-Spectral Learning (DMSL). Briefly, DMSL consists of two sub-models: face boundary detection and landmark coordinate detection. Such an architecture is capable of detecting facial landmarks on both visible and thermal images. In particular, the proposed DMSL model is robust in facial landmark detection where the face is partially occluded or facing different directions. Experiments conducted on Eurecom's visible and thermal paired database show the superior performance of DMSL over the state of the art for thermal facial landmark detection. In addition, we have annotated a thermal face dataset with the respective facial landmarks for the purpose of experimentation.
Title: Multi-spectral Facial Landmark Detection
Published in: 2020 IEEE International Workshop on Information Forensics and Security (WIFS)
Pub Date: 2020-04-29 | DOI: 10.1109/WIFS49906.2020.9360904
S. Agarwal, Tarek El-Gaaly, H. Farid, Ser-Nam Lim
Synthetically generated audio and video - so-called deep fakes - continue to capture the imagination of the computer-graphics and computer-vision communities. At the same time, the democratization of access to technology that can create sophisticated manipulated video of anybody saying anything continues to be of concern because of its power to disrupt democratic elections, commit small- to large-scale fraud, fuel disinformation campaigns, and create non-consensual pornography. We describe a biometric-based forensic technique for detecting face-swap deep fakes. This technique combines a static biometric based on facial recognition with a temporal, behavioral biometric based on facial expressions and head movements, where the behavioral embedding is learned using a CNN with a metric-learning objective function. We show the efficacy of this approach across several large-scale video datasets, as well as on in-the-wild deep fakes.
Title: Detecting Deep-Fake Videos from Appearance and Behavior
Published in: 2020 IEEE International Workshop on Information Forensics and Security (WIFS)
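The metric-learning objective mentioned for the behavioral embedding belongs to the triplet-loss family: pull clips of the same identity together in embedding space and push other identities apart. A minimal numpy sketch; the paper's actual loss, margin, and embedding dimensionality are assumptions here.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Mean triplet margin loss over batches of embeddings (batch, dim):
    penalize when an anchor is not closer to its positive (same identity)
    than to its negative (different identity) by at least `margin`."""
    d_ap = np.sum((anchor - positive) ** 2, axis=-1)   # anchor-positive distance
    d_an = np.sum((anchor - negative) ** 2, axis=-1)   # anchor-negative distance
    return np.maximum(d_ap - d_an + margin, 0.0).mean()
```

At test time, a clip whose behavioral embedding sits far from the claimed identity's reference embeddings is flagged as a potential face-swap.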
Pub Date: 2020-04-04 | DOI: 10.1109/WIFS49906.2020.9360911
Sharad Joshi, Pawel Korus, N. Khanna, N. Memon
We assess the variability of PRNU-based camera fingerprints with mismatched imaging pipelines (e.g., different camera ISPs or digital darkroom software). We show that camera fingerprints exhibit non-negligible variations in this setup, which may lead to unexpected degradation of detection statistics in real-world use cases. We tested 13 different pipelines, including standard digital darkroom software and recent neural networks. We observed that the correlation between fingerprints from mismatched pipelines drops on average to 0.38 and the PCE detection statistic drops by over 40%. The degradation in error rates is strongest for the small patches commonly used in photo manipulation detection, and when neural networks are used for photo development. At a fixed 0.5% FPR setting, the TPR drops by 17 percentage points for 128 px and 256 px patches.
Title: Empirical Evaluation of PRNU Fingerprint Variation for Mismatched Imaging Pipelines
Published in: 2020 IEEE International Workshop on Information Forensics and Security (WIFS)
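The PCE detection statistic reported above can be sketched as follows: circular cross-correlation between fingerprint and test residual via the FFT, with the statistic defined as the squared correlation peak divided by the mean energy of the correlation surface away from a small neighborhood of the peak. The exclusion radius follows common practice rather than this paper's exact configuration.

```python
import numpy as np

def pce(fingerprint, residual, exclude=5):
    """Peak-to-Correlation Energy between a camera fingerprint and a test
    noise residual (same shape), via FFT-based circular cross-correlation."""
    f = fingerprint - fingerprint.mean()
    r = residual - residual.mean()
    xc = np.real(np.fft.ifft2(np.fft.fft2(f) * np.conj(np.fft.fft2(r))))
    pr, pc = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
    energy = xc ** 2
    # exclude a small (wrapped) neighborhood around the peak from the noise energy
    mask = np.ones(energy.shape, dtype=bool)
    rows = (np.arange(-exclude, exclude + 1) + pr) % energy.shape[0]
    cols = (np.arange(-exclude, exclude + 1) + pc) % energy.shape[1]
    mask[np.ix_(rows, cols)] = False
    return xc[pr, pc] ** 2 / energy[mask].mean()
```

With mismatched pipelines the residual's fingerprint component weakens, lowering the correlation peak and hence the PCE, which is the degradation the paper quantifies.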