
Latest publications: 2020 IEEE International Workshop on Information Forensics and Security (WIFS)

Multiquadratic Rings and Walsh-Hadamard Transforms for Oblivious Linear Function Evaluation
Pub Date : 2020-12-06 DOI: 10.1109/WIFS49906.2020.9360891
A. Pedrouzo-Ulloa, J. Troncoso-Pastoriza, Nicolas Gama, Mariya Georgieva, F. Pérez-González
The Ring Learning with Errors (RLWE) problem has become one of the most widely used cryptographic assumptions for the construction of modern cryptographic primitives. Most of these solutions make use of power-of-two cyclotomic rings, mainly due to their simplicity and efficiency. This work explores the possibility of replacing them with multiquadratic rings and shows that the latter can bring about important efficiency improvements by reducing the cost of the underlying polynomial operations. We introduce a generalized version of the fast Walsh-Hadamard Transform which enables faster degree-n polynomial multiplications by reducing the required elemental products by a factor of $\mathcal{O}(\log n)$. Finally, we showcase how these rings find immediate application in the implementation of OLE (Oblivious Linear Function Evaluation) primitives, which are one of the main building blocks used inside Secure Multiparty Computation (MPC) protocols.
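The $\mathcal{O}(\log n)$ saving comes from the butterfly structure of the fast Walsh-Hadamard Transform. A minimal pure-Python sketch of the standard (ungeneralized) transform — not the paper's multiquadratic variant — illustrates the $\mathcal{O}(n \log n)$ recursion:

```python
def fwht(vec):
    """In-place fast Walsh-Hadamard transform (unnormalized).

    For n a power of two, log2(n) butterfly stages of n/2 add/subtract
    pairs replace the O(n^2) naive matrix-vector product."""
    a = list(vec)
    n = len(a)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y  # one butterfly
        h *= 2
    return a
```

The transform is self-inverse up to a factor of n: applying `fwht` twice returns n times the input.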
Citations: 2
Training Strategies and Data Augmentations in CNN-based DeepFake Video Detection
Pub Date : 2020-11-16 DOI: 10.1109/WIFS49906.2020.9360901
L. Bondi, E. D. Cannas, Paolo Bestagini, S. Tubaro
The fast and continuous growth in the number and quality of deepfake videos calls for the development of reliable detection systems capable of automatically warning users on social media and on the Internet about the potential untruthfulness of such content. While algorithms, software, and smartphone apps are getting better every day at generating manipulated videos and swapping faces, the accuracy of automated systems for face forgery detection in videos is still quite limited and generally biased toward the dataset used to design and train a specific detection system. In this paper we analyze how different training strategies and data augmentation techniques affect CNN-based deepfake detectors when training and testing on the same dataset or across different datasets.
Citations: 29
Generative Autoregressive Ensembles for Satellite Imagery Manipulation Detection
Pub Date : 2020-10-08 DOI: 10.1109/WIFS49906.2020.9360909
D. M. Montserrat, J'anos Horv'ath, S. Yarlagadda, F. Zhu, E. Delp
Satellite imagery is becoming increasingly accessible due to the growing number of orbiting commercial satellites. Many applications make use of such images: agricultural management, meteorological prediction, damage assessment after natural disasters, and cartography are some examples. Unfortunately, these images can easily be tampered with and modified using image manipulation tools, damaging downstream applications. Because the nature of the manipulation applied to the image is typically unknown, unsupervised methods that do not require prior knowledge of the tampering techniques used are preferred. In this paper, we use ensembles of generative autoregressive models to model the distribution of the pixels of the image in order to detect potential manipulations. We evaluate the presented approach and obtain accurate localization results compared to previously presented approaches.
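The underlying idea — flag pixels that an autoregressive model predicts poorly — can be shown with a toy one-dimensional sketch. Here a deliberately simple Laplace predictor (each pixel is modelled as the previous pixel plus Laplace noise; the paper's actual ensembles are far richer) assigns each pixel a negative log-likelihood, and the pixels at a splice boundary score highest:

```python
import math

def nll_scores(pixels, scale=2.0):
    """Toy autoregressive anomaly score: model each pixel as the
    previous pixel plus Laplace(scale) noise and return per-pixel
    negative log-likelihoods (higher = more anomalous)."""
    scores = [0.0]  # the first pixel has no prediction context
    for prev, cur in zip(pixels, pixels[1:]):
        scores.append(abs(cur - prev) / scale + math.log(2 * scale))
    return scores

row = [10, 10, 11, 50, 10]  # a spliced-in bright pixel at index 3
s = nll_scores(row)
```

Both pixels adjacent to the splice (entering and leaving the anomalous value) receive large scores, which is why such detectors localize manipulation boundaries well.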
Citations: 13
Texture-based Presentation Attack Detection for Automatic Speaker Verification
Pub Date : 2020-10-08 DOI: 10.1109/WIFS49906.2020.9360882
Lázaro J. González Soler, J. Patino, M. Gomez-Barrero, M. Todisco, C. Busch, N. Evans
Biometric systems are nowadays employed across a broad range of applications. They provide high security and efficiency and, in many cases, are user friendly. Despite these and other advantages, biometric systems in general, and automatic speaker verification (ASV) systems in particular, can be vulnerable to attack presentations. The most recent ASVspoof 2019 competition showed that most forms of attacks can be detected reliably with ensemble classifier-based presentation attack detection (PAD) approaches. These, though, depend fundamentally upon the complementarity of the systems in the ensemble. Motivated by the goal of increasing the generalisability of PAD solutions, this paper reports our exploration of texture descriptors applied to the analysis of speech spectrogram images. In particular, we propose a common Fisher vector feature space based on a generative model. Experimental results show the soundness of our approach: at most 16 in 100 bona fide presentations are rejected, whereas only one in 100 attack presentations is accepted.
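As one concrete example of the kind of texture descriptor that can be computed on a spectrogram image (the paper's specific descriptors may differ), the classic local binary pattern assigns each pixel an 8-bit code from comparisons with its 8 neighbours:

```python
def lbp_code(patch):
    """8-neighbour local binary pattern code of a 3x3 patch:
    each neighbour >= centre contributes one bit; histograms of
    these codes over an image form a texture descriptor."""
    c = patch[1][1]
    # neighbours clockwise from the top-left corner
    nbrs = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum((1 << i) for i, v in enumerate(nbrs) if v >= c)
```

For example, `lbp_code([[1, 2, 3], [4, 5, 6], [7, 8, 9]])` sets the bits for the four neighbours not smaller than the centre value 5.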
Citations: 0
Training CNNs in Presence of JPEG Compression: Multimedia Forensics vs Computer Vision
Pub Date : 2020-09-25 DOI: 10.1109/WIFS49906.2020.9360903
S. Mandelli, Nicolò Bonettini, Paolo Bestagini, S. Tubaro
Convolutional Neural Networks (CNNs) have proved very accurate in multiple computer vision image classification tasks that previously required visual inspection (e.g., object recognition, face detection, etc.). Motivated by these astonishing results, researchers have also started using CNNs to cope with image forensic problems (e.g., camera model identification, tampering detection, etc.). However, in computer vision, image classification methods typically rely on visual cues easily detectable by human eyes. Conversely, forensic solutions rely on almost invisible traces that are often very subtle and lie in the fine details of the image under analysis. For this reason, training a CNN to solve a forensic task requires special care, as common processing operations (e.g., resampling, compression, etc.) can strongly hinder forensic traces. In this work, we focus on the effect that JPEG compression has on CNN training, considering different computer vision and forensic image classification problems. Specifically, we consider the issues that arise from JPEG compression and misalignment of the JPEG grid. We show that these effects must be taken into account when generating a training dataset in order to properly train a forensic detector without losing generalization capability, whereas they can almost entirely be ignored for computer vision tasks.
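Why JPEG hinders forensic traces follows from its core step: a blockwise 8×8 DCT followed by coefficient quantization, which discards exactly the fine, low-amplitude detail that forensic detectors rely on. A self-contained sketch with a single uniform quantization step `q` (real JPEG uses a per-coefficient quantization table and entropy coding on top):

```python
import math

N = 8  # JPEG block size

def _alpha(u):
    # orthonormal DCT scaling factors
    return math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)

def dct2(block):
    """Orthonormal 2-D DCT-II of an NxN block (direct O(N^4) form)."""
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for i in range(N):
                for j in range(N):
                    s += (block[i][j]
                          * math.cos((2 * i + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * j + 1) * v * math.pi / (2 * N)))
            out[u][v] = _alpha(u) * _alpha(v) * s
    return out

def idct2(coef):
    """Inverse (DCT-III) of the orthonormal 2-D DCT-II above."""
    out = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            s = 0.0
            for u in range(N):
                for v in range(N):
                    s += (_alpha(u) * _alpha(v) * coef[u][v]
                          * math.cos((2 * i + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * j + 1) * v * math.pi / (2 * N)))
            out[i][j] = s
    return out

def jpeg_roundtrip(block, q):
    """Quantize every DCT coefficient with step q and reconstruct."""
    coef = dct2(block)
    quant = [[q * round(c / q) for c in row] for row in coef]
    return idct2(quant)

block = [[float(8 * i + j) for j in range(8)] for i in range(8)]
rec = jpeg_roundtrip(block, q=10)
```

Each coefficient moves by at most q/2, and since the transform is orthonormal the RMS pixel error obeys the same bound; the subtle residuals that forensic CNNs key on live precisely in this discarded band, while the visually dominant structure survives.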
Citations: 22
On Perfect Obfuscation: Local Information Geometry Analysis
Pub Date : 2020-09-09 DOI: 10.1109/WIFS49906.2020.9360888
Behrooz Razeghi, F. Calmon, D. Gunduz, S. Voloshynovskiy
We consider the problem of privacy-preserving data release for a specific utility task under a perfect obfuscation constraint. We establish the necessary and sufficient condition for extracting features of the original data that carry as much information about a utility attribute as possible while not revealing any information about the sensitive attribute. This problem formulation generalizes both the information bottleneck and privacy funnel problems. We adopt a local information geometry analysis that provides useful insight into information coupling and the trajectory construction of spherical perturbations of probability mass functions. This analysis allows us to construct the modal decomposition of the joint distributions, divergence transfer matrices, and mutual information. By decomposing the mutual information into orthogonal modes, we obtain the locally sufficient statistics for inferences about the utility attribute while satisfying the perfect obfuscation constraint. Furthermore, we develop the notion of perfect obfuscation based on the χ²-divergence and Kullback–Leibler divergence in the Euclidean information space.
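Local information geometry analyses of this kind typically rest on the standard second-order fact that, for small perturbations of a distribution, the KL divergence behaves like half the χ²-divergence, so divergence geometry becomes Euclidean geometry on perturbation vectors (stated here as background, not quoted from the paper):

```latex
% For a small perturbation p(x) = q(x) + \epsilon\,\phi(x) with
% \sum_x \phi(x) = 0, both divergences reduce to the same quadratic form:
\chi^2(p \,\|\, q) \;=\; \sum_{x} \frac{\bigl(p(x) - q(x)\bigr)^2}{q(x)},
\qquad
D_{\mathrm{KL}}(p \,\|\, q) \;=\; \tfrac{1}{2}\,\chi^2(p \,\|\, q) + o(\epsilon^2).
```

This quadratic equivalence is what makes the modal (spectral) decomposition of the joint distribution meaningful locally.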
Citations: 9
CNN Detection of GAN-Generated Face Images based on Cross-Band Co-occurrences Analysis
Pub Date : 2020-07-25 DOI: 10.1109/WIFS49906.2020.9360905
M. Barni, Kassem Kallas, Ehsan Nowroozi, B. Tondi
Last-generation GAN models can generate synthetic images that are visually indistinguishable from natural ones, raising the need to develop tools to distinguish fake from natural images and thus help preserve the trustworthiness of digital images. While modern GAN models can generate very high-quality images with no visible spatial artifacts, reconstructing consistent relationships among colour channels is expected to be more difficult. In this paper, we propose a method for distinguishing GAN-generated from natural images by exploiting inconsistencies among spectral bands, with a specific focus on the generation of synthetic face images. Specifically, we use cross-band co-occurrence matrices, in addition to spatial co-occurrence matrices, as input to a CNN model trained to distinguish between real and synthetic faces. The results of our experiments confirm the goodness of our approach, which outperforms a similar detection technique based on intra-band spatial co-occurrences only. The performance gain is particularly significant with regard to robustness against post-processing, such as geometric transformations, filtering, and contrast manipulations.
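A cross-band co-occurrence matrix simply counts how often quantized values co-occur across two colour channels at a fixed spatial offset. A minimal sketch, assuming inputs already quantized to `levels` bins (the paper's matrices may use different quantization, offsets, and normalization):

```python
def cross_band_cooccurrence(band_a, band_b, levels=4, offset=(0, 1)):
    """Count joint occurrences of (band_a[i][j], band_b[i+di][j+dj])
    over a pair of same-size channel arrays whose values lie in
    [0, levels); returns a levels x levels count matrix."""
    di, dj = offset
    h, w = len(band_a), len(band_a[0])
    mat = [[0] * levels for _ in range(levels)]
    for i in range(h):
        for j in range(w):
            ii, jj = i + di, j + dj
            if 0 <= ii < h and 0 <= jj < w:  # skip pairs off the edge
                mat[band_a[i][j]][band_b[ii][jj]] += 1
    return mat
```

For natural images these matrices concentrate near the diagonal (channels are locally correlated); GAN colour inconsistencies perturb that structure, which is the cue the CNN learns.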
Citations: 42
Multi-spectral Facial Landmark Detection
Pub Date : 2020-06-09 DOI: 10.1109/WIFS49906.2020.9360890
Jin Keong, Xingbo Dong, Zhe Jin, Khawla Mallat, J. Dugelay
Thermal face image analysis is favorable in certain circumstances, for example in illumination-sensitive applications such as nighttime surveillance, and in access control where privacy preservation is demanded. However, thermal face image analysis remains understudied relative to industry requirements. Detecting facial landmark points is important for many face analysis tasks, such as face recognition, 3D face reconstruction, and facial expression recognition. In this paper, we propose a robust neural-network-enabled facial landmark detector, namely Deep Multi-Spectral Learning (DMSL). Briefly, DMSL consists of two sub-models, i.e. face boundary detection and landmark coordinate detection. Such an architecture is capable of detecting facial landmarks on both visible and thermal images. In particular, the proposed DMSL model is robust in facial landmark detection when the face is partially occluded or facing different directions. Experiments conducted on Eurecom's visible and thermal paired database show the superior performance of DMSL over the state of the art for thermal facial landmark detection. In addition, we have annotated a thermal face dataset with the respective facial landmarks for the purpose of experimentation.
Citations: 5
Detecting Deep-Fake Videos from Appearance and Behavior
Pub Date : 2020-04-29 DOI: 10.1109/WIFS49906.2020.9360904
S. Agarwal, Tarek El-Gaaly, H. Farid, Ser-Nam Lim
Synthetically generated audio and video - so-called deep fakes - continue to capture the imagination of the computer-graphics and computer-vision communities. At the same time, the democratization of access to technology that can create sophisticated manipulated video of anybody saying anything continues to be of concern because of its power to disrupt democratic elections, commit small- to large-scale fraud, fuel disinformation campaigns, and create non-consensual pornography. We describe a biometric-based forensic technique for detecting face-swap deep fakes. This technique combines a static biometric based on facial recognition with a temporal, behavioral biometric based on facial expressions and head movements, where the behavioral embedding is learned using a CNN with a metric-learning objective function. We show the efficacy of this approach across several large-scale video datasets, as well as on in-the-wild deep fakes.
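The abstract specifies only that the behavioral embedding is trained with a metric-learning objective; one common such objective (assumed here purely for illustration, not necessarily the authors' choice) is the triplet loss, which pulls same-identity embeddings together and pushes different-identity embeddings apart:

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge on squared Euclidean distances: penalize unless the
    positive is at least `margin` closer to the anchor than the
    negative is."""
    d_ap = sum((a - p) ** 2 for a, p in zip(anchor, positive))
    d_an = sum((a - n) ** 2 for a, n in zip(anchor, negative))
    return max(0.0, d_ap - d_an + margin)
```

Under such an objective, a face-swap deep fake shows a mismatch: the static (appearance) embedding matches the target identity while the behavioral embedding still matches the source performer.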
Citations: 100
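The behavioral embedding described in this abstract is learned with a metric-learning objective: embeddings of the same person's mannerisms should lie closer together than embeddings of different people (or of a face-swap impostor). The paper does not publish its exact loss, so the sketch below uses a standard triplet margin loss; the toy 3-dimensional embeddings and the margin of 0.2 are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch of a triplet margin loss for behavioral embeddings.
# NOT the paper's implementation: embeddings, dimension, and margin are
# toy assumptions chosen only to show the mechanics of the objective.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge on the gap between same-identity and different-identity distances."""
    d_pos = np.linalg.norm(anchor - positive)  # distance to matching behavior
    d_neg = np.linalg.norm(anchor - negative)  # distance to impostor behavior
    return max(0.0, d_pos - d_neg + margin)

# Toy embeddings: positive pair is close, negative is far, so the triplet
# is already satisfied and the loss is zero.
anchor = np.array([1.0, 0.0, 0.0])
positive = np.array([0.9, 0.1, 0.0])
negative = np.array([0.0, 1.0, 0.0])
print(triplet_loss(anchor, positive, negative))       # → 0.0 (satisfied)
print(triplet_loss(anchor, negative, positive) > 0.0) # → True (violated)
```

Training drives violated triplets toward zero loss, which is what lets a simple nearest-centroid or threshold rule on embedding distance flag a face-swap whose appearance matches but whose behavior does not.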
Empirical Evaluation of PRNU Fingerprint Variation for Mismatched Imaging Pipelines
Pub Date : 2020-04-04 DOI: 10.1109/WIFS49906.2020.9360911
Sharad Joshi, Pawel Korus, N. Khanna, N. Memon
We assess the variability of PRNU-based camera fingerprints with mismatched imaging pipelines (e.g., different camera ISP or digital darkroom software). We show that camera fingerprints exhibit non-negligible variations in this setup, which may lead to unexpected degradation of detection statistics in real-world use cases. We tested 13 different pipelines, including standard digital darkroom software and recent neural networks. We observed that correlation between fingerprints from mismatched pipelines drops on average to 0.38 and the PCE detection statistic drops by over 40%. The degradation in error rates is strongest for the small patches commonly used in photo manipulation detection, and when neural networks are used for photo development. At a fixed 0.5% FPR setting, the TPR drops by 17 ppt (percentage points) for 128 px and 256 px patches.
{"title":"Empirical Evaluation of PRNU Fingerprint Variation for Mismatched Imaging Pipelines","authors":"Sharad Joshi, Pawel Korus, N. Khanna, N. Memon","doi":"10.1109/WIFS49906.2020.9360911","DOIUrl":"https://doi.org/10.1109/WIFS49906.2020.9360911","url":null,"abstract":"We assess the variability of PRNU-based camera fingerprints with mismatched imaging pipelines (e.g., different camera ISP or digital darkroom software). We show that camera fingerprints exhibit non-negligible variations in this setup, which may lead to unexpected degradation of detection statistics in real-world use-cases. We tested 13 different pipelines, including standard digital darkroom software and recent neural-networks. We observed that correlation between fingerprints from mismatched pipelines drops on average to 0.38 and the PCE detection statistic drops by over 40%. The degradation in error rates is the strongest for small patches commonly used in photo manipulation detection, and when neural networks are used for photo development. At a fixed 0.5% FPR setting, the TPR drops by 17 ppt (percentage points) for 128 px and 256 px patches.","PeriodicalId":354881,"journal":{"name":"2020 IEEE International Workshop on Information Forensics and Security (WIFS)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122765234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
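The abstract compares fingerprints through two standard measures: normalized correlation between fingerprint estimates, and the peak-to-correlation-energy (PCE) ratio of their circular cross-correlation. A minimal sketch of both follows; it is not the paper's code, and the 64×64 toy fingerprints, the injected noise level, and the 11×11 peak-exclusion window are illustrative assumptions.

```python
# Sketch of normalized correlation and PCE between two PRNU fingerprint
# estimates, using FFT-based circular cross-correlation. Toy data only;
# real fingerprints come from averaged denoising residuals of many images.
import numpy as np

def normalized_correlation(f1, f2):
    """Sample correlation between two zero-mean fingerprint estimates."""
    a = f1 - f1.mean()
    b = f2 - f2.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def pce(f1, f2, exclude=5):
    """Peak-to-correlation-energy ratio of the circular cross-correlation."""
    a = f1 - f1.mean()
    b = f2 - f2.mean()
    xc = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    r0, c0 = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
    peak = xc[r0, c0]
    # Correlation energy is estimated outside an (2*exclude+1)^2
    # neighborhood of the peak (wrapping at the borders).
    mask = np.ones_like(xc, dtype=bool)
    rows = (np.arange(-exclude, exclude + 1) + r0) % xc.shape[0]
    cols = (np.arange(-exclude, exclude + 1) + c0) % xc.shape[1]
    mask[np.ix_(rows, cols)] = False
    return float(peak**2 / np.mean(xc[mask] ** 2))

rng = np.random.default_rng(0)
fingerprint = rng.normal(size=(64, 64))
matched = fingerprint + 0.5 * rng.normal(size=(64, 64))   # same "pipeline"
mismatched = rng.normal(size=(64, 64))                    # unrelated camera
print(normalized_correlation(fingerprint, matched) >
      normalized_correlation(fingerprint, mismatched))    # → True
print(pce(fingerprint, matched) > pce(fingerprint, mismatched))  # → True
```

The paper's finding, in these terms, is that re-developing the same raw captures through a different pipeline pushes the "matched" statistics toward the "mismatched" regime: correlation falls to about 0.38 on average and PCE drops by over 40%.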
Journal
2020 IEEE International Workshop on Information Forensics and Security (WIFS)