
Latest publications: 2021 IEEE International Workshop on Information Forensics and Security (WIFS)

Self-embedding watermarking method for G-code used in 3D printing
Pub Date : 2021-12-07 DOI: 10.1109/WIFS53200.2021.9648386
Zhenyu Li, Daofu Gong, Lei Tan, Xiangyang Luo, Fenlin Liu, A. Bors
3D printing faces numerous security issues, such as malicious tampering and intellectual property theft. This work aims to protect the G-code file that controls the 3D printing process by proposing a self-embedding watermarking method for G-code files. The method groups the G-code lines into code blocks and establishes a random mapping relationship between them. Each code block is divided into two parts, carrying the authentication and recovery bits, respectively. Tampered regions are detected by checking the authentication bits in each code block, while the G-code file is restored based on the recovery bits and the geometric information of neighboring code blocks. Experimental results indicate that the proposed method can effectively detect tampered regions and restore the G-code file to a large extent, while limiting the distortion that the watermark causes to the 3D printed object.
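The block-level authentication described in the abstract can be sketched in a few lines: group G-code lines into fixed-size blocks, derive per-block authentication bits from a hash, and flag blocks whose recomputed bits no longer match. This is an illustrative sketch only; the paper's exact bit allocation, random block mapping, and recovery mechanism are not reproduced here.

```python
import hashlib

def block_auth_bits(block_lines, n_bits=16):
    """Derive authentication bits for a code block by hashing its G-code lines.
    (Illustrative; the paper's actual authentication-bit construction may differ.)"""
    digest = hashlib.sha256("\n".join(block_lines).encode()).digest()
    return int.from_bytes(digest[:2], "big") % (1 << n_bits)

def embed(gcode_lines, block_size=4):
    """Group G-code lines into code blocks and record each block's authentication bits."""
    blocks = [gcode_lines[i:i + block_size] for i in range(0, len(gcode_lines), block_size)]
    return [(b, block_auth_bits(b)) for b in blocks]

def detect_tampering(watermarked):
    """Flag blocks whose recomputed authentication bits no longer match."""
    return [i for i, (b, bits) in enumerate(watermarked) if block_auth_bits(b) != bits]

gcode = ["G1 X10 Y10", "G1 X20 Y10", "G1 X20 Y20", "G1 X10 Y20",
         "G1 X10 Y30", "G1 X30 Y30", "G1 X30 Y10", "G1 X10 Y10"]
wm = embed(gcode)
assert detect_tampering(wm) == []   # untouched file verifies cleanly
wm[1][0][2] = "G1 X99 Y99"          # tamper with one line in block 1
assert detect_tampering(wm) == [1]  # only the tampered block is flagged
```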
Citations: 1
Unsupervised JPEG Domain Adaptation for Practical Digital Image Forensics
Pub Date : 2021-12-07 DOI: 10.1109/WIFS53200.2021.9648397
Rony Abecidan, V. Itier, Jérémie Boulanger, P. Bas
Domain adaptation is a major issue for practical forensics. Images under examination are likely to come from a development pipeline different from the one used to train our models, which can disturb the models considerably and degrade their performance. In this paper, inspired by [1], we present a method for making a forgery detector more robust to distributions that differ from, but are related to, its training distribution. The strategy presented here encourages the detector to find a feature-invariant space in which the source and target distributions are close. Our study deals specifically with discrepancies caused by JPEG compression, and our experiments reveal that the proposed adaptation scheme can substantially reduce the mismatch, even with a rather small unlabeled target set, provided the source domain is properly selected. Moreover, when a small portion of labeled target images is available, this method narrows the gap with mixed training while remaining unsupervised.
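The feature-invariant alignment the abstract describes is commonly implemented by minimizing a distribution-distance loss between source and target features. Below is a minimal pure-Python sketch using squared Maximum Mean Discrepancy (MMD) with a Gaussian kernel; this is an illustrative stand-in, not necessarily the alignment criterion used in the paper.

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel between two feature vectors."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    """Squared Maximum Mean Discrepancy between two feature sets, a common
    criterion for pulling source and target feature distributions together."""
    def mean_k(a, b):
        return sum(gaussian_kernel(x, y, sigma) for x in a for y in b) / (len(a) * len(b))
    return mean_k(source, source) + mean_k(target, target) - 2 * mean_k(source, target)

src = [(0.0, 0.1), (0.1, 0.0), (0.05, 0.05)]        # e.g. features of training JPEGs
tgt_near = [(0.0, 0.12), (0.09, 0.01), (0.06, 0.04)]  # similar development pipeline
tgt_far = [(2.0, 2.1), (2.1, 2.0), (2.05, 2.05)]      # very different pipeline
assert mmd2(src, tgt_near) < mmd2(src, tgt_far)  # closer distributions give smaller MMD
```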
Citations: 2
Iteratively Generated Adversarial Perturbation for Audio Stego Post-processing
Pub Date : 2021-12-07 DOI: 10.1109/WIFS53200.2021.9648380
Kaiyu Ying, Rangding Wang, Diqun Yan
Recent studies have shown that adversarial examples can easily deceive neural networks, but ensuring the accuracy of message extraction while introducing perturbations into steganography remains a major difficulty. In this paper, we propose an iterative adversarial stego post-processing model, called IA-SPP, that generates enhanced post-stego audio to resist steganalysis networks while restricting the SPL of the adversarial perturbations. The model decomposes the perturbation to the point level and updates the point-wise perturbations iteratively according to a large-absolute-gradient-first rule. The enhanced post-stego audio, obtained by adding the stego signal and the adversarial perturbation, has a high probability of being judged as a cover by the target network. In particular, we further consider how to attack multiple networks simultaneously. Extensive experiments on TIMIT show that the proposed model generalizes well across different steganography methods.
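One iteration of the large-absolute-gradient-first rule can be sketched as follows: only the sample points with the largest gradient magnitudes are perturbed, stepping against the gradient sign, with each point-wise perturbation clipped to keep its level bounded. The step size, the number of updated points, and the clipping bound below are illustrative assumptions, not the paper's settings.

```python
def update_perturbation(perturbation, gradient, k=2, step=0.01, max_amp=0.05):
    """One iteration of a large-absolute-gradient-first update: the k points with
    the largest |gradient| move against the gradient sign, clipped to +/- max_amp.
    (Hyperparameters are illustrative.)"""
    order = sorted(range(len(gradient)), key=lambda i: abs(gradient[i]), reverse=True)
    for i in order[:k]:
        sign = 1.0 if gradient[i] > 0 else -1.0
        perturbation[i] = max(-max_amp, min(max_amp, perturbation[i] - step * sign))
    return perturbation

# Gradient of the steganalyzer's score w.r.t. each audio sample (toy values):
grad = [0.5, -0.01, -2.0, 0.03]
p = update_perturbation([0.0, 0.0, 0.0, 0.0], grad)
assert p == [-0.01, 0.0, 0.01, 0.0]  # only the two largest-|gradient| points move
```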
Citations: 0
Can Copy Detection Patterns be copied? Evaluating the performance of attacks and highlighting the role of the detector
Pub Date : 2021-12-07 DOI: 10.1109/WIFS53200.2021.9648384
Elyes Khermaza, Iuliia Tkachenko, J. Picard
Copy Detection Patterns (CDP) have received significant attention from academia and industry as a practical means of detecting counterfeits. Their security against sophisticated attacks has been studied both theoretically and practically in several research papers, but for reasons explained below, the results are not fully conclusive. In addition, the publicly available CDP datasets are not practically usable for evaluating the performance of authentication algorithms. In short, the apparently simple question “are copy detection patterns secure against copying?” remains unanswered as of today. The primary contribution of this paper is a publicly available dataset of CDPs covering multiple types of copies and attacks, allowing the performance of CDPs to be systematically compared against the different attacks proposed in the prior art. We also study the specific case in which a single CDP is shared by an entire batch of prints, which is of practical importance because it covers applications using widely deployed industrial printing processes such as offset, flexo and rotogravure. A second contribution is to highlight the role played by the CDP detector and its different processing steps: depending on the specific processing involved, the detection performance can greatly outperform the CDP bit error rate, which has been used as a reference metric in the prior art.
Citations: 8
Robust Image Hashing for Detecting Small Tampering Using a Hyperrectangular Region
Pub Date : 2021-12-07 DOI: 10.1109/WIFS53200.2021.9648383
Toshiki Itagaki, Yuki Funabiki, T. Akishita
In this paper, we propose a robust image hashing method that enables the detection of small tampering. Existing hashing methods are too robust: the trade-off between robustness and sensitivity to visual content changes must be improved before small tampering can be detected. Although adaptive thresholding can improve this trade-off, there is still room for improvement, and it requires a tampered image derived from the original, which limits its applications. To overcome these two drawbacks, we introduce the new concept of a hyperrectangular region in a multi-dimensional hash space, determined at hash-generation time as the region covering a hash cluster, using the maximum and minimum of the cluster along each hash axis. We evaluate our method against existing methods. Our method improves the trade-off, achieving an AUC (Area Under the Curve) of 0.9428 for detecting tampering that occupies about 0.1% of the image area, with JPEG compression and size reduction as the content-preserving operations. Furthermore, unlike the existing method, our method does not require a tampered image derived from the original.
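The hyperrectangular region construction reduces to a per-axis min/max over the hash cluster, followed by an axis-wise containment test at verification time. A minimal sketch (toy integer hash vectors; the paper's hash function itself is not reproduced):

```python
def hyperrectangle(cluster):
    """Axis-aligned bounding region of a hash cluster: per hash axis, the
    minimum and maximum over the cluster's hash vectors."""
    axes = list(zip(*cluster))
    return [(min(a), max(a)) for a in axes]

def inside(region, h):
    """A query hash is accepted as authentic if it falls inside the region."""
    return all(lo <= v <= hi for (lo, hi), v in zip(region, h))

# Hashes of content-preserving variants (e.g. JPEG-compressed, downscaled copies):
cluster = [(3, 7, 2), (4, 6, 3), (2, 8, 2)]
region = hyperrectangle(cluster)
assert region == [(2, 4), (6, 8), (2, 3)]
assert inside(region, (3, 7, 2))      # content-preserving copy: accepted
assert not inside(region, (9, 1, 5))  # tampering moves the hash outside: rejected
```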
Citations: 1
Multi Loss Fusion For Matching Smartphone Captured Contactless Finger Images
Pub Date : 2021-12-07 DOI: 10.1109/WIFS53200.2021.9648393
Bhavin Jawade, Akshay Agarwal, S. Setlur, N. Ratha
Traditional fingerprint authentication requires acquiring data through touch-based specialized sensors. However, hygienic concerns, including the global spread of the COVID-19 virus through surface contact, have led to increased interest in contactless fingerprint image acquisition. Matching fingerprints acquired through contactless imaging against contact-based images raises the problem of cross-modal fingerprint matching for identity verification. In this paper, we propose a cost-effective, highly accurate and secure end-to-end contactless fingerprint recognition solution. The proposed framework first segments the finger region from an image of the hand captured with a mobile phone camera. For this purpose, we developed a cross-platform mobile application for fingerprint enrollment, verification, and authentication, designed with security, robustness, and accessibility in mind. The segmented finger images undergo fingerprint enhancement to highlight discriminative ridge-based features. A novel deep convolutional network is proposed to learn a representation from the enhanced images by optimizing various losses. The proposed algorithms for each stage are evaluated on multiple publicly available contactless databases. Our matching accuracy and the associated security employed in the system establish the strength of the proposed solution framework.
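Multi-loss optimization of the kind described above typically fuses a classification term with a metric-learning term as a weighted sum. The sketch below is a generic illustration under assumed losses (cross-entropy plus a contrastive term) and weights; the paper's actual loss composition is not specified here.

```python
import math

def cross_entropy(p_correct):
    """Classification term: negative log-probability of the true identity."""
    return -math.log(p_correct)

def contrastive(dist, same_identity, margin=1.0):
    """Metric-learning term: pull mated pairs together, push non-mated pairs
    apart up to a margin."""
    return dist ** 2 if same_identity else max(0.0, margin - dist) ** 2

def fused_loss(p_correct, dist, same_identity, w_cls=1.0, w_metric=0.5):
    """Weighted fusion of several losses; the specific terms and weights here
    are illustrative assumptions."""
    return w_cls * cross_entropy(p_correct) + w_metric * contrastive(dist, same_identity)

good = fused_loss(p_correct=0.9, dist=0.2, same_identity=True)
bad = fused_loss(p_correct=0.4, dist=1.5, same_identity=True)
assert good < bad  # a confident match with a compact embedding scores a lower loss
```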
Citations: 5
CNN Steganalyzers Leverage Local Embedding Artifacts
Pub Date : 2021-12-07 DOI: 10.1109/WIFS53200.2021.9648400
Yassine Yousfi, Jan Butora, J. Fridrich
While convolutional neural networks have firmly established themselves as the superior steganography detectors, trained models have so far provided little human-interpretable feedback to the steganographer as to how the network reaches its decision. The folklore has it that, unlike rich models, which rely on global statistics, CNNs can leverage spatially localized signals. In this paper, we adapt existing attribution tools, such as Integrated Gradients and Last Activation Maps, to show that CNNs can indeed find overwhelming evidence for steganography in a few highly localized embedding artifacts. We examine the nature of these artifacts via case studies of both modern content-adaptive and older steganographic algorithms. The main culprit is linked to “content-creating changes” when the magnitude of a DCT coefficient is increased (Jsteg, –F5), which can be especially detectable for high-frequency DCT modes that were originally zeros (J-MiPOD). In contrast, J-UNIWARD introduces the smallest number of locally detectable embedding artifacts among all tested algorithms. Moreover, we find examples of inhibition that facilitate distinguishing between the selection channels of stego algorithms in a multi-class detector. The authors believe that identifying and characterizing local embedding artifacts provides useful feedback for the future design of steganographic schemes.
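Integrated Gradients, one of the attribution tools named above, assigns each input coordinate the product of its displacement from a baseline and the average gradient along the straight path from baseline to input. A self-contained numerical sketch on a toy "detector" (a linear scalar function standing in for a CNN score):

```python
def integrated_gradients(f, x, baseline, steps=100):
    """Riemann approximation of Integrated Gradients for a scalar function f:
    attribution_i = (x_i - x0_i) * mean of df/dx_i along the baseline-to-x path.
    Gradients are taken numerically via central differences."""
    def grad(z, i, eps=1e-5):
        zp = list(z); zp[i] += eps
        zm = list(z); zm[i] -= eps
        return (f(zp) - f(zm)) / (2 * eps)
    n = len(x)
    attr = [0.0] * n
    for s in range(1, steps + 1):
        z = [baseline[i] + (s / steps) * (x[i] - baseline[i]) for i in range(n)]
        for i in range(n):
            attr[i] += grad(z, i)
    return [(x[i] - baseline[i]) * attr[i] / steps for i in range(n)]

def detector(z):
    """Toy stand-in for a CNN score that reacts strongly to one localized input."""
    return 3.0 * z[0] + 0.1 * z[1]

attr = integrated_gradients(detector, x=[1.0, 1.0], baseline=[0.0, 0.0])
assert abs(attr[0] - 3.0) < 1e-3 and abs(attr[1] - 0.1) < 1e-3
assert attr[0] > attr[1]  # attribution concentrates on the influential coordinate
```

For a linear function the attributions equal the coefficients exactly, which makes the sketch easy to check; on a real CNN the gradients would come from backpropagation instead.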
Citations: 1
Secure Collaborative Editing Using Secret Sharing
Pub Date : 2021-12-07 DOI: 10.1109/WIFS53200.2021.9648395
Shashank Arora, P. Atrey
With the advent of cloud-based collaborative editing, there have been security and privacy concerns about user data, since users are no longer the sole owners of the data stored in the cloud. Most secure collaborative editing solutions therefore employ AES to protect user content. In this work, we explore the use of secret sharing to maintain the confidentiality of user data in a collaborative document. We establish that secret sharing provides an average performance increase of 56.01% over AES with a single set of coefficients, and of 30.37% with multiple sets of coefficients, while not requiring the maintenance and distribution of symmetric keys that AES entails. We discuss the incorporation of keyword-based search into the proposed framework and present an operability and security analysis.
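The secret-sharing primitive underlying this line of work can be illustrated with Shamir's classic (k, n) threshold scheme: a random polynomial of degree k-1 hides the secret as its constant term, shares are evaluation points, and any k shares recover the secret by Lagrange interpolation. This is a textbook sketch, not the paper's exact construction or coefficient scheme.

```python
import random

PRIME = 2 ** 61 - 1  # Mersenne prime field modulus (illustrative size)

def make_shares(secret, k, n):
    """Shamir's (k, n) scheme: random degree-(k-1) polynomial with the secret
    as constant term; each share is one evaluation point (x, p(x))."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def eval_poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, eval_poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # Modular inverse of den via Fermat's little theorem
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
assert reconstruct(shares[2:]) == 123456789
```

With collaborative editing, each document fragment would be split into such shares across storage servers, so no single server ever holds the plaintext and no symmetric key needs distributing.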
Citations: 0
Assessment of Synthetically Generated Mated Samples from Single Fingerprint Samples Instances
Pub Date : 2021-12-07 DOI: 10.1109/WIFS53200.2021.9648394
Simon Kirchgasser, Christof Kauba, A. Uhl
The availability of biometric data (here, fingerprint samples) is a crucial requirement in all areas of biometrics. Due to recent changes in cross-border regulations (GDPR), sharing and accessing biometric sample data has become more difficult. An alternative way to obtain a sufficient amount of test data is to synthetically generate biometric samples, which has its limitations: the generated data may not be realistic enough and, more commonly, most free solutions are unable to generate mated samples, especially for fingerprints. In this work, we propose a multi-level methodology for assessing synthetically generated fingerprint data in terms of its similarity to real fingerprint samples. Furthermore, we present a generic approach that extends an existing synthetic fingerprint generator to produce mated samples from single instances of non-mated ones, which is then evaluated using the aforementioned multi-level methodology.
Citations: 3
Source Attribution of Online News Images by Compression Analysis
Pub Date : 2021-12-07 DOI: 10.1109/WIFS53200.2021.9648385
Michael Albright, Nitesh Menon, Kristy Roschke, Arslan Basharat
The rapid increase in the amount of online disinformation warrants new and robust digital forensics methods for validating purported sources of multimodal news articles. We conducted a survey of news photojournalists for insights into their workflows. A high percentage (91%) of respondents reported standardized photo publishing procedures, which we hypothesize facilitates source verification. In this work, we demonstrate that the online news sites leave predictable and discernible patterns in the compression settings of the images they publish. We propose novel, simple, and very efficient algorithms to analyze the image compression profiles for news source verification and identification. We evaluate the algorithms' effectiveness through extensive experiments on a newly-released dataset of over 64K images from over 34K articles collected from 30 news sites. The image compression features are modeled by Naive Bayes variants or XGBoost classifiers for source attribution and verification. For these news sources we are able to achieve very strong performance with the proposed algorithms resulting in 0.92–0.94 average AUC for source verification under a closed set scenario, and compelling open set generalization with only 0.0–0.04 reduction in the average AUC.
{"title":"Source Attribution of Online News Images by Compression Analysis","authors":"Michael Albright, Nitesh Menon, Kristy Roschke, Arslan Basharat","doi":"10.1109/WIFS53200.2021.9648385","DOIUrl":"https://doi.org/10.1109/WIFS53200.2021.9648385","url":null,"abstract":"The rapid increase in the amount of online disinformation warrants new and robust digital forensics methods for validating purported sources of multimodal news articles. We conducted a survey of news photojournalists for insights into their workflows. A high percentage (91%) of respondents reported standardized photo publishing procedures, which we hypothesize facilitates source verification. In this work, we demonstrate that the online news sites leave predictable and discernible patterns in the compression settings of the images they publish. We propose novel, simple, and very efficient algorithms to analyze the image compression profiles for news source verification and identification. We evaluate the algorithms' effectiveness through extensive experiments on a newly-released dataset of over 64K images from over 34K articles collected from 30 news sites. The image compression features are modeled by Naive Bayes variants or XGBoost classifiers for source attribution and verification. For these news sources we are able to achieve very strong performance with the proposed algorithms resulting in 0.92–0.94 average AUC for source verification under a closed set scenario, and compelling open set generalization with only 0.0–0.04 reduction in the average AUC.","PeriodicalId":196985,"journal":{"name":"2021 IEEE International Workshop on Information Forensics and Security (WIFS)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123750763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Journal: 2021 IEEE International Workshop on Information Forensics and Security (WIFS)