
Latest Publications in IEEE Transactions on Information Forensics and Security

DRFormer: A Discriminable and Reliable Feature Transformer for Person Re-Identification
IF 6.3 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-12-19 · DOI: 10.1109/TIFS.2024.3520304
Pingyu Wang;Xingjian Zheng;Linbo Qing;Bonan Li;Fei Su;Zhicheng Zhao;Honggang Chen
As person image variations are likely to cause part misalignment problems, most previous person Re-Identification (ReID) works adopt local feature partition or additional landmark annotations to acquire aligned person features and boost ReID performance. However, such approaches either achieve only coarse-grained part alignments without considering detailed image variations within each part, or require extra annotated landmarks to train a usable pose estimation model. In this work, we propose an effective Discriminable and Reliable Transformer (DRFormer) framework to learn part-aligned person representations with only person identity labels. Specifically, the DRFormer framework consists of Discriminable Feature Transformer (DFT) and Reliable Feature Transformer (RFT) modules, which generate discriminable and reliable high-order features, respectively. To reduce the dimensionality of high-order features, the DFT module utilizes a Self-Attentive Kronecker Product (SAKP) algorithm that enhances the representational capabilities of compressed features via a self-attention strategy. To eliminate background noise, the RFT module mines the foreground regions and adaptively aggregates foreground features via a Gumbel-Softmax strategy. Moreover, the proposed framework derives from an interpretable motivation and elegantly solves part misalignments without using feature partition or pose estimation. This paper theoretically and experimentally demonstrates the superiority of the proposed DRFormer framework, which achieves state-of-the-art performance on various person ReID datasets.
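The RFT module's Gumbel-Softmax aggregation idea can be sketched in a few lines. The following toy NumPy example (not the authors' implementation; `region_logits` and the feature shapes are invented for illustration) samples soft, near-one-hot weights over candidate foreground regions and uses them to aggregate region features:

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    # Sample from the Gumbel-Softmax (Concrete) distribution:
    # softmax((logits + Gumbel noise) / tau). As tau -> 0 the sample
    # approaches a one-hot selection while staying differentiable.
    rng = np.random.default_rng() if rng is None else rng
    g = -np.log(-np.log(rng.uniform(1e-9, 1.0, size=logits.shape)))
    y = (logits + g) / tau
    y = y - y.max()                    # numerical stability
    e = np.exp(y)
    return e / e.sum()

# Toy foreground/background selection over 4 spatial regions:
# high logits mark likely foreground regions.
region_logits = np.array([2.5, -1.0, 3.0, -0.5])
weights = gumbel_softmax(region_logits, tau=0.5)
region_feats = np.ones((4, 8))        # 4 regions, 8-dim features (placeholder)
aggregated = weights @ region_feats   # weighted foreground aggregation
```

As the temperature `tau` approaches zero, the weights approach a hard selection of a single region while remaining usable in gradient-based training.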
IEEE Transactions on Information Forensics and Security, vol. 20, pp. 980–995, 2024. Citations: 0
GroupFace: Imbalanced Age Estimation Based on Multi-Hop Attention Graph Convolutional Network and Group-Aware Margin Optimization
IF 6.3 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-12-18 · DOI: 10.1109/TIFS.2024.3520020
Yiping Zhang;Yuntao Shou;Wei Ai;Tao Meng;Keqin Li
With the recent advances in computer vision, age estimation has significantly improved in overall accuracy. However, because the most common methods do not take into account the class imbalance problem in age estimation datasets, they suffer from a large bias when recognizing long-tailed groups. To achieve high-quality imbalanced learning for long-tailed groups, the dominant solution is for the feature extractor to learn the discriminative features of different groups and for the classifier to provide appropriate and unbiased margins for different groups based on those discriminative features. Therefore, in this paper, we propose an innovative collaborative learning framework (GroupFace) that integrates a multi-hop attention graph convolutional network and a dynamic group-aware margin strategy based on reinforcement learning. Specifically, to extract the discriminative features of different groups, we design an enhanced multi-hop attention graph convolutional network. This network is capable of capturing the interactions of neighboring nodes at different distances, fusing local and global information to model facial deep aging, and exploring diverse representations of different groups. In addition, to further address the class imbalance problem, we design a dynamic group-aware margin strategy based on reinforcement learning to provide appropriate and unbiased margins for different groups. The strategy divides the samples into four age groups and identifies the optimal margins for the various age groups by employing a Markov decision process. Under the guidance of the agent, the feature representation bias and the classification margin deviation between different groups can be reduced simultaneously, balancing inter-class separability and intra-class proximity. After joint optimization, our architecture achieves excellent performance on several age estimation benchmark datasets. It not only achieves large improvements in overall estimation accuracy but also gains balanced performance in long-tailed group estimation.
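The multi-hop aggregation at the core of such a graph network can be illustrated with a minimal sketch, assuming fixed per-hop weights in place of learned attention (the graph, feature matrix, and weights below are invented for the example):

```python
import numpy as np

def multi_hop_aggregate(A, X, hop_weights):
    # Combine neighborhood information at multiple hop distances:
    # Z = sum_k w_k * A_norm^k @ X, where A_norm is the row-normalized
    # adjacency with self-loops. Learned attention over hops is reduced
    # here to fixed scalar weights w_k.
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    A_norm = A_hat / A_hat.sum(1, keepdims=True)
    Z = np.zeros_like(X, dtype=float)
    P = np.eye(A.shape[0])
    for w in hop_weights:
        P = P @ A_norm                          # k-th power = k-hop reach
        Z += w * (P @ X)
    return Z

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # path graph 0-1-2
X = np.array([[1., 0.], [0., 1.], [1., 1.]])            # 2-dim node features
Z = multi_hop_aggregate(A, X, hop_weights=[0.6, 0.4])
```

Mixing 1-hop and 2-hop terms this way fuses local and more global neighborhood information in a single representation, which is the intuition behind the multi-hop design.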
IEEE Transactions on Information Forensics and Security, vol. 20, pp. 605–619, 2024. Citations: 0
Efficient and Secure Post-Quantum Certificateless Signcryption With Linkability for IoMT
IF 6.3 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-12-18 · DOI: 10.1109/TIFS.2024.3520007
Shiyuan Xu;Xue Chen;Yu Guo;Siu-Ming Yiu;Shang Gao;Bin Xiao
The Internet of Medical Things (IoMT) has gained significant research focus in both academic and medical institutions. Nevertheless, the sensitive data involved in IoMT raise concerns regarding user validation and data privacy. To address these concerns, certificateless signcryption (CLSC) has emerged as a promising solution, offering authenticity, confidentiality, and unforgeability. Unfortunately, most existing CLSC schemes are impractical for IoMT due to their heavy computational and storage requirements. Additionally, these schemes are vulnerable to quantum computing attacks. Designing an efficient post-quantum CLSC scheme therefore remains an open problem. In this work, we propose PQ-CLSCL, a novel post-quantum CLSC scheme with linkability for IoMT. Our design facilitates secure transmission of medical data between physicians and patients, effectively validating user legitimacy and minimizing the risk of private information leakage. To achieve this, we leverage lattice sampling algorithms and hash functions to generate the partial secret key, then employ the sign-then-encrypt method and design a link label. We also formalize and prove the security of our design, including indistinguishability against chosen-ciphertext attacks (IND-CCA2), existential unforgeability against chosen-message attacks (EU-CMA), and linkability. Finally, comprehensive performance evaluation shows that our computation overhead is just 5% of that of other existing schemes. The evaluation results demonstrate that our solution is practical and efficient.
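The sign-then-encrypt composition with a link label can be shown schematically. In the sketch below the paper's lattice-based primitives are deliberately replaced by HMAC and a SHAKE-based keystream as stand-ins, purely to show the data flow (sign the message, attach the signature, encrypt the whole payload, emit a deterministic link label); this is not a secure or faithful implementation of PQ-CLSCL:

```python
import hashlib
import hmac

def sign_then_encrypt(message: bytes, sign_key: bytes, enc_key: bytes, link_seed: bytes):
    # 1) Sign: HMAC-SHA256 stands in for the lattice-based signature.
    sig = hmac.new(sign_key, message, hashlib.sha256).digest()
    # 2) Link label: deterministic tag derived from a session seed and the
    #    signer's key, so two signcryptions by the same signer are linkable.
    link_label = hashlib.sha256(link_seed + sign_key).digest()[:8]
    # 3) Encrypt signature || message with a toy XOR stream cipher
    #    (SHAKE-256 as keystream generator).
    payload = sig + message
    stream = hashlib.shake_256(enc_key).digest(len(payload))
    ciphertext = bytes(a ^ b for a, b in zip(payload, stream))
    return ciphertext, link_label

ct, label = sign_then_encrypt(b"patient record", b"sk", b"ek", b"session-1")
```

The receiver would reverse the steps: decrypt, split off the signature, and verify it before accepting the message, which is what gives the construction both confidentiality and unforgeability.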
IEEE Transactions on Information Forensics and Security, vol. 20, pp. 1119–1134, 2024. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10806671. Citations: 0
Privacy-Preserving Localization for Underwater Acoustic Sensor Networks: A Differential Privacy-Based Deep Learning Approach
IF 6.3 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-12-18 · DOI: 10.1109/TIFS.2024.3518069
Jing Yan;Yuhan Zheng;Xian Yang;Cailian Chen;Xinping Guan
Localization is a key premise for implementing the applications of underwater acoustic sensor networks (UASNs). However, the inhomogeneous medium and the open nature of the underwater environment make this task challenging. This paper studies the privacy-preserving localization issue of UASNs with consideration of both direct and indirect data threats. To handle the direct data threat, a privacy-preserving localization protocol is designed for sensor nodes, where mutual information is adopted to determine the optimal noise added to anchor nodes. With the range information collected from anchor nodes, a ray tracing model is employed at sensor nodes to compensate for the range bias caused by assuming straight-line propagation. Then, a differential privacy (DP) based deep learning localization estimator is designed to calculate the positions of sensor nodes, with perturbations added to the forward propagation of the deep learning framework so that indirect data leakage is avoided. In addition, theoretical analyses are provided, including the Cramér-Rao Lower Bound (CRLB), the privacy budget, and the complexity. The main innovations of this paper are: 1) the mutual information-based localization protocol acquires better noise than traditional noise-adding mechanisms; 2) the DP-based deep learning estimator avoids the leakage of training data caused by overfitting in traditional deep learning-based solutions. Finally, simulation and experimental results are both conducted to verify the effectiveness of our approach.
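Adding calibrated perturbations inside the forward pass can be sketched as follows. This is a generic Gaussian-mechanism-style illustration, not the paper's estimator; the layer, clipping bound, and noise scale are invented for the example:

```python
import numpy as np

def dp_forward(x, W, clip=1.0, sigma=0.5, rng=None):
    # Differentially-private forward step: compute the activation, clip its
    # norm to bound sensitivity, then add Gaussian noise calibrated to the
    # clipping bound (the classic Gaussian mechanism, applied inside the
    # forward propagation rather than to gradients).
    rng = np.random.default_rng(0) if rng is None else rng
    h = np.maximum(x @ W, 0.0)           # ReLU layer
    norm = np.linalg.norm(h)
    if norm > clip:
        h = h * (clip / norm)            # bound the activation norm
    return h + rng.normal(0.0, sigma * clip, size=h.shape)

x = np.array([0.2, -0.4, 0.9])           # toy sensor measurement vector
W = np.ones((3, 4)) * 0.1                # toy layer weights
h_priv = dp_forward(x, W)
```

Because the noise is injected into intermediate representations rather than the raw measurements, downstream layers never see exact activations, which is the mechanism by which indirect leakage (e.g. via overfitting) is limited.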
IEEE Transactions on Information Forensics and Security, vol. 20, pp. 737–752, 2024. Citations: 0
Robust AI-Synthesized Speech Detection Using Feature Decomposition Learning and Synthesizer Feature Augmentation
IF 6.3 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-12-18 · DOI: 10.1109/TIFS.2024.3520001
Kuiyuan Zhang;Zhongyun Hua;Yushu Zhang;Yifang Guo;Tao Xiang
AI-synthesized speech, also known as deepfake speech, has recently raised significant concerns due to the rapid advancement of speech synthesis and speech conversion techniques. Previous works often rely on distinguishing synthesizer artifacts to identify deepfake speech. However, excessive reliance on these specific synthesizer artifacts may result in unsatisfactory performance when addressing speech signals created by unseen synthesizers. In this paper, we propose a robust deepfake speech detection method that employs feature decomposition to learn synthesizer-independent content features as a complement for detection. Specifically, we propose a dual-stream feature decomposition learning strategy that decomposes the learned speech representation using a synthesizer stream and a content stream. The synthesizer stream specializes in learning synthesizer features through supervised training with synthesizer labels. Meanwhile, the content stream focuses on learning synthesizer-independent content features, enabled by a pseudo-labeling-based supervised learning method. This method randomly transforms speech to generate speed and compression labels for training. Additionally, we employ an adversarial learning technique to reduce the synthesizer-related components in the content stream. The final classification is determined by concatenating the synthesizer and content features. To enhance the model's robustness to different synthesizer characteristics, we further propose a synthesizer feature augmentation strategy that randomly blends the characteristic styles within real and fake audio features and randomly shuffles the synthesizer features with the content features. This strategy effectively enhances the feature diversity and simulates more feature combinations. Experimental results on four deepfake speech benchmark datasets demonstrate that our model achieves state-of-the-art robust detection performance across various evaluation scenarios, including cross-method, cross-dataset, and cross-language evaluations.
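The two augmentations described above, blending feature "styles" within a batch and shuffling synthesizer features against content features, can be sketched in feature space. The example below is a hypothetical NumPy illustration with invented shapes and parameters, not the authors' code:

```python
import numpy as np

def augment_features(content, synth, alpha=0.3, rng=None):
    # Two toy augmentations: (1) blend per-sample feature statistics
    # (mean/std, a simple notion of "style") between randomly paired
    # samples in the batch, and (2) shuffle synthesizer features across
    # the batch before concatenating them with content features, so the
    # classifier sees novel synthesizer/content combinations.
    rng = np.random.default_rng(1) if rng is None else rng
    n = content.shape[0]
    perm = rng.permutation(n)
    mu = content.mean(1, keepdims=True)
    sd = content.std(1, keepdims=True) + 1e-6
    lam = rng.uniform(0, alpha)                      # blending strength
    mixed_mu = (1 - lam) * mu + lam * mu[perm]
    mixed_sd = (1 - lam) * sd + lam * sd[perm]
    styled = (content - mu) / sd * mixed_sd + mixed_mu
    shuffled_synth = synth[rng.permutation(n)]       # decouple the streams
    return np.concatenate([styled, shuffled_synth], axis=1)

content = np.random.default_rng(2).normal(size=(4, 16))  # 4 samples, 16-dim
synth = np.random.default_rng(3).normal(size=(4, 8))     # 4 samples, 8-dim
fused = augment_features(content, synth)
```

Recombining streams this way is what "simulates more feature combinations": the classifier cannot shortcut by memorizing which synthesizer fingerprint co-occurs with which content.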
IEEE Transactions on Information Forensics and Security, vol. 20, pp. 871–885, 2024. Citations: 0
Multi-Level Resource-Coherented Graph Learning for Website Fingerprinting Attacks
IF 6.3 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-12-18 · DOI: 10.1109/TIFS.2024.3520014
Bo Gao;Weiwei Liu;Guangjie Liu;Fengyuan Nie;Jianan Huang
Deep learning-based website fingerprinting (WF) attacks dominate website traffic classification. In real-world settings, two main challenges limit their effectiveness: on the one hand, it is difficult to counter the effect of content updates unless page features are accurately described in the traffic representation; on the other hand, model accuracy relies on training with numerous samples, which requires constant manual labeling. The key to solving these problems is to find a website traffic representation that stably and accurately captures page features, together with self-supervised learning that does not rely on manual labeling. This study introduces the multi-level resource-coherented graph convolutional neural network (MRCGCN), a self-supervised learning-based WF attack. It analyzes website traffic using resources as the basic unit, which are coarser-grained than packets, preserving the page's unique resource layout while improving the robustness of the representations. Then, we utilize an echelon-ordered graph kernel function to extract the graph topology as the label for website traffic. Finally, a two-channel graph convolutional neural network is designed to construct a self-supervised learning-based traffic classifier. We evaluated the WF attacks using real data in both closed- and open-world scenarios. The results demonstrate that the proposed WF attack has superior and more comprehensive performance compared to state-of-the-art methods.
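Extracting a topology label from a resource graph can be approximated with a Weisfeiler-Lehman-style relabeling, used here only as a stand-in for the paper's echelon-ordered graph kernel (the resource graphs and hash truncation are toy choices):

```python
import hashlib

def wl_label(adj, iters=2):
    # Weisfeiler-Lehman-style relabeling: for a few rounds, replace each
    # node's label with a hash of its own label plus the sorted multiset
    # of its neighbors' labels; finally hash the sorted multiset of all
    # node labels into one topology fingerprint for the whole graph.
    labels = {v: "0" for v in adj}
    for _ in range(iters):
        labels = {
            v: hashlib.sha256(
                (labels[v] + "|" + ",".join(sorted(labels[u] for u in adj[v])))
                .encode()).hexdigest()[:8]
            for v in adj
        }
    return hashlib.sha256(",".join(sorted(labels.values())).encode()).hexdigest()[:16]

# Toy resource graph of a page: HTML pulls CSS and JS; JS pulls an image.
g1 = {"html": ["css", "js"], "css": [], "js": ["img"], "img": []}
g2 = {"html": ["css", "js"], "css": [], "js": ["img"], "img": []}
```

Two visits to the same page yield the same resource topology and therefore the same label, which is what makes such a fingerprint usable as a self-supervised training signal without manual labeling.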
IEEE Transactions on Information Forensics and Security, vol. 20, pp. 693–708, 2024. Citations: 0
Query-Efficient Model Inversion Attacks: An Information Flow View
IF 6.3 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2024-12-18 · DOI: 10.1109/TIFS.2024.3518779
Yixiao Xu;Binxing Fang;Mohan Li;Xiaolei Liu;Zhihong Tian
Model Inversion Attacks (MIAs) pose a threat to the data privacy of learning-based systems, as they enable adversaries to reconstruct identifiable features of the training distribution with only query access to the victim model. In the context of deep learning, the primary challenges associated with MIAs are suboptimal attack success rates and the corresponding high computational costs. Prior efforts assumed that the expansive search space caused these limitations, employing generative models to constrain the dimensions of the search space. Despite the initial success of these generative-based solutions, recent experiments have cast doubt on this fundamental assumption, leaving two open questions about the influential factors determining MIA performance and how to manipulate these factors to improve MIAs. To answer these questions, we reframe MIAs from the perspective of information flow. This new formulation allows us to establish a lower bound on the error probability of MIAs, determined by two critical factors: (1) the size of the search space and (2) the mutual information between input and output random variables. Through a detailed analysis of generative-based MIAs within this theoretical framework, we uncover a trade-off between the size of the search space and the generation capability of generative models. Based on the theoretical conclusions, we introduce the Query-Efficient Model Inversion Approach (QE-MIA). By strategically selecting an appropriate search space and introducing additional mutual information, QE-MIA reduces query overhead by 60% to 70% while concurrently enhancing the attack success rate by 5% to 25%.
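The flavor of such a bound can be illustrated with Fano's inequality, which lower-bounds the error probability in terms of the mutual information and the (log) size of the search space. This is a generic information-theoretic illustration, not the paper's exact bound:

```python
import math

def fano_error_lower_bound(mutual_info_bits, search_space_size):
    # Fano-style lower bound on the reconstruction error probability:
    #   P_e >= 1 - (I(X;Y) + 1) / log2(|X|)
    # A larger search space |X| raises the bound; more mutual information
    # between inputs and outputs lowers it.
    return max(0.0, 1.0 - (mutual_info_bits + 1.0) / math.log2(search_space_size))

# Shrinking the search space (e.g. with a generative prior) relaxes the bound
# at fixed mutual information:
loose = fano_error_lower_bound(mutual_info_bits=4.0, search_space_size=2**20)
tight = fano_error_lower_bound(mutual_info_bits=4.0, search_space_size=2**6)
```

This captures the trade-off the abstract describes: constraining the search space or extracting more mutual information per query both push the achievable error probability down.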
IEEE Transactions on Information Forensics and Security, vol. 20, pp. 1023-1036.
Citations: 0
IFViT: Interpretable Fixed-Length Representation for Fingerprint Matching via Vision Transformer
IF 6.3, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-12-18. DOI: 10.1109/TIFS.2024.3520015
Yuhang Qiu;Honghui Chen;Xingbo Dong;Zheng Lin;Iman Yi Liao;Massimo Tistarelli;Zhe Jin
Determining dense feature points on fingerprints used in constructing deep fixed-length representations for accurate matching, particularly at the pixel level, is of significant interest. To explore the interpretability of fingerprint matching, we propose a multi-stage interpretable fingerprint matching network, namely Interpretable Fixed-length Representation for Fingerprint Matching via Vision Transformer (IFViT), which consists of two primary modules. The first module, an interpretable dense registration module, establishes a Vision Transformer (ViT)-based Siamese Network to capture long-range dependencies and the global context in fingerprint pairs. It provides interpretable dense pixel-wise correspondences of feature points for fingerprint alignment and enhances the interpretability in the subsequent matching stage. The second module takes into account both local and global representations of the aligned fingerprint pair to achieve an interpretable fixed-length representation extraction and matching. It employs the ViTs trained in the first module with the additional fully connected layer and retrains them to simultaneously produce the discriminative fixed-length representation and interpretable dense pixel-wise correspondences of feature points. Extensive experimental results on diverse publicly available fingerprint databases demonstrate that the proposed framework not only exhibits superior performance on dense registration and matching but also significantly promotes the interpretability in deep fixed-length representations-based fingerprint matching.
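Dense pixel-wise correspondence between two feature maps can be pictured as nearest-neighbor matching in embedding space. A minimal NumPy sketch, purely illustrative (the function name and the cosine-similarity matching rule are our assumptions, not IFViT's actual implementation, which learns the correspondences with a ViT-based Siamese network):

```python
import numpy as np

def dense_correspondences(feat_a, feat_b):
    """Match each spatial location in feat_a to its most similar
    location in feat_b by cosine similarity.

    feat_a, feat_b: (H, W, C) feature maps (e.g. patch embeddings
    reshaped onto the patch grid). Returns an (H, W) array of flat
    indices into feat_b's H*W grid.
    """
    H, W, C = feat_a.shape
    a = feat_a.reshape(-1, C)
    b = feat_b.reshape(-1, C)
    # L2-normalize so the dot product equals cosine similarity.
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-8)
    sim = a @ b.T  # (H*W, H*W) pairwise cosine similarities
    return sim.argmax(axis=1).reshape(H, W)

# Sanity check: identical maps should match each location to itself.
rng = np.random.default_rng(0)
f = rng.normal(size=(4, 4, 8))
idx = dense_correspondences(f, f)
assert (idx == np.arange(16).reshape(4, 4)).all()
```

In the paper's pipeline such correspondences drive both the alignment step and the interpretability of the final matching decision; the sketch only conveys the matching idea.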
IEEE Transactions on Information Forensics and Security, vol. 20, pp. 559-573.
Citations: 0
Stealthiness Assessment of Adversarial Perturbation: From a Visual Perspective
IF 6.3, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-12-18. DOI: 10.1109/TIFS.2024.3520016
Hangcheng Liu;Yuan Zhou;Ying Yang;Qingchuan Zhao;Tianwei Zhang;Tao Xiang
Assessing the stealthiness of adversarial perturbations is challenging due to the lack of appropriate evaluation metrics. Existing evaluation metrics, e.g., $L_{p}$ norms or Image Quality Assessment (IQA), fall short of assessing the pixel-level stealthiness of subtle adversarial perturbations since these metrics are primarily designed for traditional distortions. To bridge this gap, we present the first comprehensive study on the subjective and objective assessment of the stealthiness of adversarial perturbations from a visual perspective at a pixel level. Specifically, we propose new subjective assessment criteria for human observers to score adversarial stealthiness in a fine-grained manner. Then, we create a large-scale adversarial example dataset comprising 10586 pairs of clean and adversarial samples encompassing twelve state-of-the-art adversarial attacks. To obtain the subjective scores according to the proposed criterion, we recruit 60 human observers, and each adversarial example is evaluated by at least 15 observers. The mean opinion score of each adversarial example is utilized for labeling. Finally, we develop a three-stage objective scoring model that mimics human scoring habits to predict adversarial perturbation’s stealthiness. Experimental results demonstrate that our objective model exhibits superior consistency with the human visual system, surpassing commonly employed metrics like PSNR and SSIM.
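For context, PSNR, one of the commonly employed metrics the proposed objective model is shown to surpass, reduces to a single MSE-based formula and is therefore blind to where in the image a perturbation sits, which is why it struggles with the pixel-level stealthiness of subtle adversarial noise. A minimal sketch (the helper name is ours):

```python
import numpy as np

def psnr(clean, adv, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a clean image and its
    adversarial counterpart (higher = less visible distortion)."""
    mse = np.mean((clean.astype(np.float64) - adv.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform +1 perturbation on an 8-bit image gives MSE = 1,
# i.e. PSNR = 20*log10(255) ≈ 48.13 dB, regardless of whether the
# perturbation hides in texture or sits on a flat background.
clean = np.zeros((8, 8), dtype=np.uint8)
adv = clean + 1
print(round(psnr(clean, adv), 2))  # 48.13
```

That location-blindness is exactly the gap the paper's subjective study and three-stage objective scoring model are designed to close.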
IEEE Transactions on Information Forensics and Security, vol. 20, pp. 898-913.
Citations: 0
Learnability of Optical Physical Unclonable Functions Through the Lens of Learning With Errors
IF 6.3, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-12-16. DOI: 10.1109/TIFS.2024.3518065
Apollo Albright;Boris Gelfand;Michael Dixon
We show that a class of optical physical unclonable functions (PUFs) can be efficiently PAC-learned to arbitrary precision with arbitrarily high probability, even in the presence of intentionally injected noise, given access to polynomially many challenge-response pairs, under mild and practical assumptions about the distributions of the noise and challenge vectors. We motivate our analysis by identifying similarities between the integrated version of Pappu’s original optical PUF design and the post-quantum Learning with Errors (LWE) cryptosystem. We derive polynomial bounds for the required number of samples and the computational complexity of a linear regression algorithm, based on size parameters of the PUF, the distributions of the challenge and noise vectors, and the desired accuracy and probability of success of the regression algorithm. We use a similar analysis to that done by Bootle et al. [“LWE without modular reduction and improved side-channel attacks against BLISS,” in Advances in Cryptology – ASIACRYPT 2018], who demonstrated a learning attack on poorly implemented versions of LWE cryptosystems. This extends the results of Rührmair et al. [“Optical PUFs reloaded,” Cryptology ePrint Archive, 2013], who presented a theoretical framework showing that a subset of this class of PUFs is learnable in polynomial time in the absence of injected noise, under the assumption that the optics of the PUF were either linear or had negligible nonlinear effects. (Rührmair et al. also included an experimental validation of this technique, which of course included measurement uncertainty, demonstrating robustness to the presence of natural noise.) We recommend that the design of strong PUFs should be treated as a cryptographic engineering problem in physics, as PUF designs would benefit greatly from basing their physics and security on standard cryptographic assumptions. Finally, we identify future research directions, including suggestions for how to modify an LWE-based optical PUF design to better defend against cryptanalytic attacks.
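Under the linear-optics assumption the abstract describes, the learning attack amounts to ordinary least squares on observed challenge-response pairs. A hedged NumPy sketch (every size parameter and the noise scale below are illustrative choices of ours, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n_feat = 64    # hypothetical size parameter of the PUF
n_train = 4000  # polynomially many challenge-response pairs

# Hypothetical linearized optical PUF: response = <w, challenge> + noise,
# where the injected noise plays the role of the LWE error term.
w_true = rng.normal(size=n_feat)
X = rng.normal(size=(n_train, n_feat))       # challenge vectors
noise = rng.normal(scale=0.5, size=n_train)  # intentionally injected noise
y = X @ w_true + noise

# Ordinary least squares recovers the PUF's linear model to high
# accuracy despite the noise, since the error shrinks with more samples.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
err = np.linalg.norm(w_hat - w_true) / np.linalg.norm(w_true)
assert err < 0.1  # relative error is small at this sample count
```

This is the sense in which noise without modular reduction fails to protect the PUF: the regression averages the noise away, mirroring the Bootle et al. observation about poorly implemented LWE.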
IEEE Transactions on Information Forensics and Security, vol. 20, pp. 886-897 (Open Access).
Citations: 0