
Latest Publications: IEEE Transactions on Information Forensics and Security

Adaptive Generation of Privileged Intermediate Information for Visible-Infrared Person Re-Identification
IF 6.8 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-02-14 | DOI: 10.1109/tifs.2025.3541969
Mahdi Alehdaghi, Arthur Josi, Rafael M. O. Cruz, Pourya Shamsolameli, Eric Granger
{"title":"Adaptive Generation of Privileged Intermediate Information for Visible-Infrared Person Re-Identification","authors":"Mahdi Alehdaghi, Arthur Josi, Rafael M. O. Cruz, Pourya Shamsolameli, Eric Granger","doi":"10.1109/tifs.2025.3541969","DOIUrl":"https://doi.org/10.1109/tifs.2025.3541969","url":null,"abstract":"","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"10 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143417577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Low-Cost First-Order Secure Boolean Masking in Glitchy Hardware - full version*
IF 6.8 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-02-14 | DOI: 10.1109/tifs.2025.3541442
Dilip Kumar S.V., Josep Balasch, Benedikt Gierlichs, Ingrid Verbauwhede
{"title":"Low-Cost First-Order Secure Boolean Masking in Glitchy Hardware - full version*","authors":"Dilip Kumar S.V., Josep Balasch, Benedikt Gierlichs, Ingrid Verbauwhede","doi":"10.1109/tifs.2025.3541442","DOIUrl":"https://doi.org/10.1109/tifs.2025.3541442","url":null,"abstract":"","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143417575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PEAFOWL: Private Entity Alignment in Multi-Party Privacy-Preserving Machine Learning
IF 6.8 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-02-14 | DOI: 10.1109/tifs.2025.3542244
Ying Gao, Huanghao Deng, Zukun Zhu, Xiaofeng Chen, Yuxin Xie, Pei Duan, Peixuan Chen
{"title":"PEAFOWL: Private Entity Alignment in Multi-Party Privacy-Preserving Machine Learning","authors":"Ying Gao, Huanghao Deng, Zukun Zhu, Xiaofeng Chen, Yuxin Xie, Pei Duan, Peixuan Chen","doi":"10.1109/tifs.2025.3542244","DOIUrl":"https://doi.org/10.1109/tifs.2025.3542244","url":null,"abstract":"","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"13 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143417576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MUFTI: Multi-Domain Distillation-based Heterogeneous Federated Continuous Learning
IF 6.8 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-02-14 | DOI: 10.1109/tifs.2025.3542246
Keke Gai, Zijun Wang, Jing Yu, Liehuang Zhu
{"title":"MUFTI: Multi-Domain Distillation-based Heterogeneous Federated Continuous Learning","authors":"Keke Gai, Zijun Wang, Jing Yu, Liehuang Zhu","doi":"10.1109/tifs.2025.3542246","DOIUrl":"https://doi.org/10.1109/tifs.2025.3542246","url":null,"abstract":"","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"23 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143417574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
QUEEN: Query Unlearning Against Model Extraction
IF 6.3 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-02-13 | DOI: 10.1109/TIFS.2025.3538266
Huajie Chen;Tianqing Zhu;Lefeng Zhang;Bo Liu;Derui Wang;Wanlei Zhou;Minhui Xue
Model extraction attacks currently pose a non-negligible threat to the security and privacy of deep learning models. By querying the model with a small dataset and using the query results as the ground-truth labels, an adversary can steal a piracy model with performance comparable to the original model. Two key issues cause the threat: on the one hand, the adversary can obtain accurate and unlimited queries; on the other hand, the adversary can aggregate the query results to train the piracy model step by step. Existing defenses usually employ model watermarking or fingerprinting to protect ownership, but these methods cannot proactively prevent the violation from happening. To mitigate the threat, we propose QUEEN (QUEry unlEarNing), which proactively launches counterattacks on potential model extraction attacks from the very beginning. To limit the potential threat, QUEEN combines sensitivity measurement with output perturbation to prevent the adversary from training a high-performance piracy model. In sensitivity measurement, QUEEN measures the sensitivity of a single query by its distance from the center of its cluster in the feature space. To reduce the learning accuracy of attacks, for a highly sensitive query batch, QUEEN applies query unlearning, implemented by gradient reversal, to perturb the softmax output so that the piracy model unconsciously generates reversed gradients that worsen its own performance. Experiments show that QUEEN outperforms state-of-the-art defenses against various model extraction attacks at a relatively low cost to model accuracy. The artifact is publicly available at https://github.com/MaraPapMann/QUEEN.
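The two defensive steps described above (scoring each query by its distance from its cluster center, and perturbing the softmax output of highly sensitive batches) can be illustrated with a minimal sketch. This is not the authors' implementation: the feature extractor, cluster count, sensitivity threshold, and perturbation rule are all assumptions, and the perturbation below simply shifts probability mass away from the predicted class as a stand-in for the paper's gradient-reversal step.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Assumed setup: the defender's feature extractor outputs 128-D embeddings;
# cluster centers are fit offline on the defender's own training features.
train_feats = rng.normal(size=(1000, 128))
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(train_feats)

def query_sensitivity(feats: np.ndarray) -> np.ndarray:
    """Sensitivity of each query = distance to the center of its cluster."""
    labels = kmeans.predict(feats)
    centers = kmeans.cluster_centers_[labels]
    return np.linalg.norm(feats - centers, axis=1)

def answer_queries(feats: np.ndarray, probs: np.ndarray,
                   threshold: float = 12.0, eps: float = 0.2) -> np.ndarray:
    """Return softmax outputs, perturbing them for highly sensitive batches."""
    if query_sensitivity(feats).mean() < threshold:
        return probs  # low-sensitivity batch: answer honestly
    perturbed = probs.copy()
    top = probs.argmax(axis=1)
    perturbed[np.arange(len(probs)), top] -= eps  # push mass off the top class
    perturbed = np.clip(perturbed, 1e-6, None)
    return perturbed / perturbed.sum(axis=1, keepdims=True)

# Example: a batch of 32 queries with softmax outputs over 5 classes.
batch_feats = rng.normal(size=(32, 128))
logits = rng.normal(size=(32, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(answer_queries(batch_feats, probs).shape)  # (32, 5)
```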
{"title":"QUEEN: Query Unlearning Against Model Extraction","authors":"Huajie Chen;Tianqing Zhu;Lefeng Zhang;Bo Liu;Derui Wang;Wanlei Zhou;Minhui Xue","doi":"10.1109/TIFS.2025.3538266","DOIUrl":"https://doi.org/10.1109/TIFS.2025.3538266","url":null,"abstract":"Model extraction attacks currently pose a non-negligible threat to the security and privacy of deep learning models. By querying the model with a small dataset and using the query results as the ground-truth labels, an adversary can steal a piracy model with performance comparable to the original model. Two key issues that cause the threat are, on the one hand, accurate and unlimited queries can be obtained by the adversary; on the other hand, the adversary can aggregate the query results to train the model step by step. The existing defenses usually employ model watermarking or fingerprinting to protect the ownership. However, these methods cannot proactively prevent the violation from happening. To mitigate the threat, we propose QUEEN (QUEry unlEarNing) that proactively launches counterattacks on potential model extraction attacks from the very beginning. To limit the potential threat, QUEEN has sensitivity measurement and outputs perturbation that prevents the adversary from training a piracy model with high performance. In sensitivity measurement, QUEEN measures the single query sensitivity by its distance from the center of its cluster in the feature space. To reduce the learning accuracy of attacks, for the highly sensitive query batch, QUEEN applies query unlearning, which is implemented by gradient reverse to perturb the softmax output such that the piracy model will generate reverse gradients to worsen its performance unconsciously. Experiments show that QUEEN outperforms the state-of-the-art defenses against various model extraction attacks with a relatively low cost to the model accuracy. The artifact is publicly available at <uri>https://github.com/MaraPapMann/QUEEN</uri>.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"2143-2156"},"PeriodicalIF":6.3,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143480766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ASDroid: Resisting Evolving Android Malware With API Clusters Derived From Source Code
IF 6.3 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-02-13 | DOI: 10.1109/TIFS.2025.3536280
Qihua Hu;Weiping Wang;Hong Song;Song Guo;Jian Zhang;Shigeng Zhang
Machine learning-based Android malware detection has consistently demonstrated superior results. However, with the continual evolution of the Android framework, the efficacy of deployed models declines markedly. Existing solutions necessitate frequent and expensive model retraining to keep up with the constant evolution of malware that accompanies Android framework updates. To address this, we introduce a solution called ASDroid, which generalizes specific APIs into clusters of similar APIs to counteract evolving Android malware threats. One primary challenge lies in identifying the analogous API clusters that correspond to specific APIs. Our approach extracts semantic information from open-source API source code to construct a heterogeneous information graph and utilizes embedding algorithms to obtain semantic vector representations of APIs; APIs that are close in embedding distance are presumed to have similar semantics. Our dataset encompasses Android applications spanning nine years, from 2011 to 2019. In comparison to existing solutions for mitigating model aging in Android malware detection, such as APIGraph, SDAC and MaMaDroid, ASDroid demonstrates greater accuracy and is more effective at resisting continuously evolving malware.
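The generalization step at the heart of this idea, mapping individual APIs to clusters of semantically similar APIs and representing an app by its cluster histogram, can be sketched as follows. The embeddings, API names, and cluster count are placeholders; in the paper the embeddings come from a heterogeneous information graph built over API source code.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Assumed inputs: one semantic embedding per framework API, e.g. produced by
# a graph-embedding algorithm over the heterogeneous information graph.
api_names = [f"android.api.Method{i}" for i in range(500)]  # hypothetical names
api_embeddings = rng.normal(size=(500, 64))

# Group APIs whose embeddings are close into the same cluster.
n_clusters = 50
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(api_embeddings)
api_to_cluster = dict(zip(api_names, km.labels_))

def app_feature_vector(called_apis: list[str]) -> np.ndarray:
    """Represent an app by the histogram of API *clusters* it calls.

    New or renamed APIs that land in an existing cluster still contribute to
    the same feature dimension, which is what gives resistance to framework
    evolution.
    """
    vec = np.zeros(n_clusters)
    for api in called_apis:
        if api in api_to_cluster:
            vec[api_to_cluster[api]] += 1
    return vec / max(len(called_apis), 1)

print(app_feature_vector(api_names[:20]).shape)  # (50,)
```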
{"title":"ASDroid: Resisting Evolving Android Malware With API Clusters Derived From Source Code","authors":"Qihua Hu;Weiping Wang;Hong Song;Song Guo;Jian Zhang;Shigeng Zhang","doi":"10.1109/TIFS.2025.3536280","DOIUrl":"https://doi.org/10.1109/TIFS.2025.3536280","url":null,"abstract":"Machine learning-based Android malware detection has consistently demonstrated superior results. However, with the continual evolution of the Android framework, the efficacy of the deployed models declines markedly. Existing solutions necessitate frequent and expensive model retraining to resist the constant evolution of malware accompanying Android framework updates. To address this, we introduce a solution called ASDroid, which generalizes specific APIs into similar API clusters to counteract evolving Android malware threats. One primary challenge lies in identifying analogous API clusters that correspond to specific APIs. Our approach involves extracting semantic information from open-source API source code to construct a heterogeneous information graph, and utilizing embedding algorithms to obtain semantic vector representations of APIs. APIs that are close in embedding distance are presumed to have similar semantics. Our dataset encompasses Android applications spanning nine years from 2011 to 2019. In comparison to existing Android malware detection model aging mitigation solutions like APIGraph, SDAC and MaMaDroid, ASDroid demonstrates greater accuracy and more effective at resisting continuously evolving malware.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"1822-1835"},"PeriodicalIF":6.3,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143403883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Bi-Stream Coteaching Network for Weakly-Supervised Deepfake Localization in Videos
IF 6.3 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-02-11 | DOI: 10.1109/TIFS.2025.3533906
Zhaoyang Li;Zhu Teng;Baopeng Zhang;Jianping Fan
With the rapid evolution of deepfake technologies, attackers can arbitrarily alter the intended message of a video by modifying just a few frames. As a result, simplistic binary judgments on entire videos appear increasingly unconvincing and hard to interpret. Although numerous efforts have been made to develop fine-grained interpretations, these typically depend on elaborate annotations, which are both costly and challenging to obtain in real-world scenarios. To push this research frontier, we introduce a novel task called Weakly-Supervised Deepfake Localization (WSDL), which aims to identify manipulated frames using only cheap video-level labels. Meanwhile, we propose a new framework named Bi-stream coteaching Deepfake Localization (CoDL), which advances the WSDL task through a progressive mutual refinement strategy across complementary spatial and temporal modalities. The CoDL framework incorporates an inconsistency perception module that discerns subtle forgeries by assessing spatial and temporal incoherence, and a prototype-based enhancement module that mitigates frame noise and amplifies discrepancies to create a robust feature space. Additionally, a progressive coteaching mechanism facilitates the exchange of valuable knowledge between modalities, enhancing the detection of subtle frame-level forgery features and thereby improving the model's generalization capabilities. Extensive experiments demonstrate the superiority of our approach, which achieves an impressive 8.83% improvement in AUC on highly compressed datasets when learning from weak supervision.
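The weak-supervision setting itself, producing frame-level localization scores while training only on video-level real/fake labels, is commonly handled with multiple-instance-style pooling. The sketch below shows that generic pattern in PyTorch; it is not the CoDL framework, which additionally includes the inconsistency perception, prototype-based enhancement, and coteaching modules described above, and the shapes and top-k pooling rule are assumptions.

```python
import torch
import torch.nn as nn

class FrameScorer(nn.Module):
    """Scores each frame feature; the video label supervises a pooled score."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                  nn.Linear(128, 1))

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, num_frames, feat_dim) -> (batch, num_frames)
        return self.head(frame_feats).squeeze(-1)

def video_loss(frame_scores: torch.Tensor, video_labels: torch.Tensor,
               k: int = 4) -> torch.Tensor:
    """Top-k mean pooling: a video is fake if its k most suspicious frames are."""
    topk = frame_scores.topk(k, dim=1).values.mean(dim=1)
    return nn.functional.binary_cross_entropy_with_logits(topk, video_labels)

# Example with assumed shapes: 8 videos, 64 frames, 512-D per-frame features.
model = FrameScorer()
feats = torch.randn(8, 64, 512)
labels = torch.randint(0, 2, (8,)).float()
scores = model(feats)            # frame-level scores usable for localization
loss = video_loss(scores, labels)
loss.backward()
print(scores.shape, loss.item())
```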
{"title":"Bi-Stream Coteaching Network for Weakly-Supervised Deepfake Localization in Videos","authors":"Zhaoyang Li;Zhu Teng;Baopeng Zhang;Jianping Fan","doi":"10.1109/TIFS.2025.3533906","DOIUrl":"10.1109/TIFS.2025.3533906","url":null,"abstract":"With the rapid evolution of deepfake technologies, attackers can arbitrarily alter the intended message of a video by modifying just a few frames. To this extent, simplistic binary judgments of entire videos increasingly seem less convincing and interpretable. Although numerous efforts have been made to develop fine-grained interpretations, these typically depend on elaborate annotations, which are both costly and challenging to obtain in real-world scenarios. To push the related frontier research, we introduce a novel task called Weakly-Supervised Deepfake Localization (WSDL), which aims to identify manipulated frames only with cushy video-level labels. Meanwhile, we propose a new framework named Bi-stream coteaching Deepfake Localization (CoDL), which advances the WSDL task through a progressive mutual refinement strategy across complementary spatial and temporal modalities. The CoDL framework incorporates an inconsistency perception module that discerns subtle forgeries by assessing spatial and temporal incoherence, and a prototype-based enhancement module that mitigates frame noise and amplifies discrepancies to create a robust feature space. Additionally, a progressive coteaching mechanism is implemented to facilitate the exchange of valuable knowledge between modalities, enhancing the detection of subtle frame-level forgery features and thereby improving the model’s generalization capabilities. Extensive experiments are conducted to demonstrate the superiority of our approach, particularly achieving an impressive 8.83% improvement in AUC on highly compressed datasets when learning from weak supervision.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"1724-1738"},"PeriodicalIF":6.3,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143393042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Advancing Visible-Infrared Person Re-Identification: Synergizing Visual-Textual Reasoning and Cross-Modal Feature Alignment
IF 6.3 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-02-11 | DOI: 10.1109/TIFS.2025.3539946
Yuxuan Qiu;Liyang Wang;Wei Song;Jiawei Liu;Zhiping Shi;Na Jiang
Visible-infrared person re-identification (VI-ReID) is a critical cross-modality fine-grained classification task with significant implications for public safety and security applications. Existing VI-ReID methods primarily focus on extracting modality-invariant features for person retrieval. However, due to the inherent lack of texture information in infrared images, these modality-invariant features tend to emphasize global contexts. Consequently, individuals with similar silhouettes are often misidentified, posing potential risks to security systems and forensic investigations. To address this problem, this paper innovatively introduces natural language descriptions to learn global-local contexts for VI-ReID. Specifically, we design a framework that jointly optimizes visible-infrared alignment plus (VIAP) and visual-textual reasoning (VTR), introduce a local-global joint measure (LJM) to enhance the matching metric, and propose a human-LLM collaborative approach to incorporate textual descriptions into existing cross-modal person re-identification datasets. VIAP achieves cross-modal alignment between RGB and IR: it explicitly utilizes the designed frequency-aware modality alignment and relationship-reinforced fusion to explore the potential of local cues in global features and modality-invariant information. VTR employs pooling selection and dual-level reasoning mechanisms to force the image encoder to attend to significant regions indicated by the textual descriptions. LJM introduces local feature distances into the matching-stage metric to enhance the relevance of matching with fine-grained information. Extensive experimental results on the popular SYSU-MM01 and RegDB datasets show that the proposed method significantly outperforms state-of-the-art approaches. The dataset is publicly available at https://github.com/qyx596/vireid-caption.
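The local-global joint measure (LJM) idea, augmenting a global embedding distance with part-level local distances at matching time, can be illustrated with a small sketch. The cosine distance, the fusion weight lam, and the feature shapes are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def joint_distance(query_global: torch.Tensor, gallery_global: torch.Tensor,
                   query_local: torch.Tensor, gallery_local: torch.Tensor,
                   lam: float = 0.5) -> torch.Tensor:
    """Combine global and averaged local cosine distances for ranking.

    query_global:  (Q, D)    gallery_global:  (G, D)
    query_local:   (Q, P, D) gallery_local:   (G, P, D)  with P local parts.
    Returns a (Q, G) distance matrix; smaller means more likely the same person.
    """
    d_global = 1 - F.normalize(query_global, dim=-1) @ F.normalize(gallery_global, dim=-1).T
    qn = F.normalize(query_local, dim=-1)    # (Q, P, D)
    gn = F.normalize(gallery_local, dim=-1)  # (G, P, D)
    # Part-wise cosine similarity, averaged over the P aligned parts.
    sim_local = torch.einsum('qpd,gpd->qgp', qn, gn).mean(dim=-1)
    d_local = 1 - sim_local
    return d_global + lam * d_local

# Example with assumed sizes: 4 queries, 10 gallery images, 6 parts, 256-D.
dist = joint_distance(torch.randn(4, 256), torch.randn(10, 256),
                      torch.randn(4, 6, 256), torch.randn(10, 6, 256))
print(dist.shape)  # torch.Size([4, 10])
```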
{"title":"Advancing Visible-Infrared Person Re-Identification: Synergizing Visual-Textual Reasoning and Cross-Modal Feature Alignment","authors":"Yuxuan Qiu;Liyang Wang;Wei Song;Jiawei Liu;Zhiping Shi;Na Jiang","doi":"10.1109/TIFS.2025.3539946","DOIUrl":"10.1109/TIFS.2025.3539946","url":null,"abstract":"Visible-infrared person re-identification (VI-ReID) is a critical cross-modality fine-grained classification task with significant implications for public safety and security applications. Existing VI-ReID methods primarily focus on extracting modality-invariant features for person retrieval. However, due to the inherent lack of texture information in infrared images, these modality-invariant features tend to emphasize global contexts. Consequently, individuals with similar silhouettes are often misidentified, posing potential risks to security systems and forensic investigations. To address this problem, this paper innovatively introduces natural language descriptions to learn the global-local contexts for VI-ReID. Specifically, we design a framework that jointly optimizes visible-infrared alignment plus (VIAP) and visual-textual reasoning (VTR), and introduces local-global joint measure (LJM) to enhance the metric, while proposing a human-LLM collaborative approach to incorporate textual descriptions into existing cross-modal person re-identification datasets. VIAP achieves cross-modal alignment between RGB and IR. It can explicitly utilize designed frequency-aware modality alignment and relationship-reinforced fusion to explore the potential of local cues in global features and modality-invariant information. VTR proposes pooling selection and dual-level reasoning mechanisms to force the image encoder to pay attention to significant regions based on textual descriptions. LJM proposes introducing local feature distances into the measure stage metric to enhance the relevance of matching using fine-grained information. Extensive experimental results on the popular SYSU-MM01 and RegDB datasets show that the proposed method significantly outperforms state-of-the-art approaches. The dataset is publicly available at <uri>https://github.com/qyx596/vireid-caption</uri>.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"2184-2196"},"PeriodicalIF":6.3,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143393041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
No Time for Remodulation: A PHY Steganographic Symbiotic Channel Over Constant Envelope
IF 6.3 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-02-10 | DOI: 10.1109/TIFS.2025.3540290
Jiahao Liu;Caihui Du;Jihong Yu;Jiangchuan Liu;Huan Qi
Physical layer steganography plays a key role in physical layer security. Yet most existing schemes are strongly modulation-sensitive and must modify the modulation at the baseband, so they cannot work with wireless devices whose baseband modulation is not software-defined. To overcome these drawbacks, we propose an analog solution built around a purpose-designed symbiotic hardware component, called Pluggable Cloak, which connects to the radio frequency front end (RFFE) to establish a steganographic symbiotic channel (SSC) over the constant-envelope physical layer (CE-PHY) in the 2.4 GHz ISM band, such as Bluetooth, ZigBee and 802.11b Wi-Fi, to hide information. The advantage is that this pluggable hardware enables secure transmission on already-deployed devices that are not software-defined. Specifically, Pluggable Cloak modulates the amplitude of the CE-PHY signal in the analog domain, so that sensitive information can be securely sent to a customized receiver without being detected by regular CE receivers. To further protect the hidden information from detection by a malicious adversary, we propose methods to randomize the SSC. We develop a lightweight prototype to evaluate symbiosis, undetectability, and throughput. The results show that the symbol error rates (SERs) of the received sensitive data and of the regular CE data are lower than $10^{-5}$ at the customized receiver. In contrast, the SER of the sensitive data is close to 1 at the adversary, confirming the effectiveness of the SSC technique.
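The principle behind the symbiotic channel, that a constant-envelope receiver decides on phase only while a small analog amplitude modulation can carry hidden bits on top, can be checked with a toy baseband simulation. The sketch below assumes a 10% modulation depth and BPSK-like phase symbols; it is not a model of the Pluggable Cloak hardware or of any specific CE-PHY standard.

```python
import numpy as np

rng = np.random.default_rng(2)

# Regular constant-envelope signal: unit-amplitude carrier with BPSK-like
# phase modulation, a stand-in for Bluetooth/ZigBee-style CE-PHY symbols.
regular_bits = rng.integers(0, 2, 1000)
phase = np.pi * regular_bits
ce_signal = np.exp(1j * phase)              # |ce_signal| == 1 everywhere

# Hidden bits ride on a small amplitude scaling (assumed 10% modulation depth).
hidden_bits = rng.integers(0, 2, 1000)
depth = 0.1
tx = ce_signal * (1 + depth * (2 * hidden_bits - 1))

# Additive channel noise.
rx = tx + 0.02 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))

# Regular CE receiver: decides on phase only, so it is unaffected.
regular_decoded = (np.angle(rx) > np.pi / 2) | (np.angle(rx) < -np.pi / 2)
# Customized receiver: decides on amplitude, recovering the hidden bits.
hidden_decoded = np.abs(rx) > 1.0

print("regular SER:", np.mean(regular_decoded != regular_bits))
print("hidden  SER:", np.mean(hidden_decoded != hidden_bits))
```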
{"title":"No Time for Remodulation: A PHY Steganographic Symbiotic Channel Over Constant Envelope","authors":"Jiahao Liu;Caihui Du;Jihong Yu;Jiangchuan Liu;Huan Qi","doi":"10.1109/TIFS.2025.3540290","DOIUrl":"10.1109/TIFS.2025.3540290","url":null,"abstract":"Physical layer steganography plays a key role in physical layer security. Yet most works are strongly modulation-sensitive and have to modify the modulation at the baseband. However, these methods cannot work with wireless devices whose baseband modulations cannot be software-defined. To overcome these drawbacks, we propose an analog solution that uses a symbiotic hardware component designed, called Pluggable Cloak, connecting to the radio frequency front end (RFFE) to establish a steganographic symbiotic channel (SSC) over constant envelope physical layer (CE-PHY) in 2.4GHz ISM band, such as Bluetooth, ZigBee and 802.11b Wi-Fi, to hide information. The advantage lies in enabling secure transmission of the deployed devices that are not software-defined with this pluggable hardware. Specifically, Pluggable Cloak analogously modulates the amplitude of CE-PHY, so that sensitive information can be securely sent to a customized receiver without being detected by regular CE receivers. To further protect hidden information from the detection of a malicious adversary, we propose methods to randomize the SSC. We develop a lightweight prototype to evaluate symbiosis, undetectability, and throughput. The results show that the symbol error rates (SERs) of the sensitive data received and regular CE data are lower than <inline-formula> <tex-math>$10^{-5}$ </tex-math></inline-formula> at the customized receiver. In contrast, the SER of the sensitive data is close to 1 in the adversary, confirming the effectiveness of the SSC technique.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"2197-2211"},"PeriodicalIF":6.3,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143385650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dual Consistency Regularization for Generalized Face Anti-Spoofing
IF 6.3 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-02-10 | DOI: 10.1109/TIFS.2025.3540659
Yongluo Liu;Zun Li;Lifang Wu
Recent Face Anti-Spoofing (FAS) methods have improved generalization to unseen domains by leveraging domain generalization techniques. However, they overlook the semantic relationships between local features, resulting in suboptimal feature alignment and limited performance. To this end, pixel-wise supervision has been introduced to offer contextual guidance for better feature alignment. Unfortunately, the semantic ambiguity in coarsely designed pixel-wise supervision often leads to misalignment. This paper proposes a novel Dual Consistency Regularization Network (DCRN) that promotes fine-grained alignment of local features with dense semantic correspondence for FAS. Specifically, a Dual Consistency Learning (DCL) module is devised to capture the inter- and intra-similarity between the regions of sample pairs. In this module, a dual consistency regularization learning objective enhances the semantic consistency of local features by minimizing both the variance of the inter-similarity and the distance between the inter- and intra-similarity. Further, a weight matrix is estimated from the inter-similarity, representing the likelihood that each region belongs to the live class. Based on this weight matrix, a weighted MSE (WMSE) loss is designed to keep the model from mapping live regions to the spoofing class, thus alleviating the semantic ambiguity in pixel-wise supervision. Extensive experiments on four widely used datasets clearly demonstrate the superiority and strong generalization of the proposed DCRN.
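One simplified reading of the dual consistency objective, keeping the matched-region (inter) similarities low-variance and close to the within-sample (intra) similarities, might look like the sketch below. The region count, the use of cosine similarity, and the exact reduction are assumptions, not the paper's loss.

```python
import torch
import torch.nn.functional as F

def dual_consistency_loss(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    """feat_a, feat_b: (P, D) region features of a pair of same-class samples.

    Inter-similarity: region-wise cosine similarity between the two samples.
    Intra-similarity: similarity among regions within each sample.
    """
    a = F.normalize(feat_a, dim=-1)
    b = F.normalize(feat_b, dim=-1)
    inter = (a * b).sum(dim=-1)                 # (P,) matched-region similarity
    intra = 0.5 * ((a @ a.T).mean() + (b @ b.T).mean())
    var_term = inter.var()                      # regions should agree consistently
    gap_term = (inter.mean() - intra).abs()     # inter should track intra similarity
    return var_term + gap_term

# Example with assumed shapes: 16 regions, 128-D features per region.
loss = dual_consistency_loss(torch.randn(16, 128), torch.randn(16, 128))
print(loss.item())
```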
{"title":"Dual Consistency Regularization for Generalized Face Anti-Spoofing","authors":"Yongluo Liu;Zun Li;Lifang Wu","doi":"10.1109/TIFS.2025.3540659","DOIUrl":"10.1109/TIFS.2025.3540659","url":null,"abstract":"Recent Face Anti-Spoofing (FAS) methods have improved generalization to unseen domains by leveraging domain generalization techniques. However, they overlooked the semantic relationships between local features, resulting in suboptimal feature alignment and limited performance. To this end, pixel-wise supervision has been introduced to offer contextual guidance for better feature alignment. Unfortunately, the semantic ambiguity in coarsely designed pixel-wise supervision often leads to misalignment. This paper proposes a novel Dual Consistency Regularization Network (DCRN). It promotes the fine-grained alignment of local features with dense semantic correspondence for FAS. Specifically, a Dual Consistency Learning module (DCL) is devised to capture the inter- and intra-similarity between each region of sample pairs. In this module, a dual consistency regularization learning objective enhances the semantic consistency of local features by minimizing both the variance of inter-similarity and the distance between inter- and intra-similarity. Further, a weight matrix is estimated based on the inter-similarity, representing the possibility that each region belongs to the living class. Based on this weight matrix, WMSE loss is designed to guide the model in avoiding mapping the live regions to the spoofing class, thus alleviating semantic ambiguity in pixel-wise supervision. Extensive experiments on four widely used datasets clearly demonstrate the superiority and high generalization of the proposed DCRN.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"2171-2183"},"PeriodicalIF":6.3,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143385781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0