MACHANet: Memory-Augmented Cross Modal Hybrid Alignment Network for Unsupervised Visible-Infrared Person Re-Identification
Tingyu Yang; Weiqing Yan; Guanghui Yue; Wujie Zhou; Chang Tang
IEEE Transactions on Information Forensics and Security, vol. 21, pp. 1914-1925. DOI: 10.1109/TIFS.2026.3660597. Published 2026-02-03.
Unsupervised Visible-Infrared Person Re-Identification (USL-VI-ReID) aims to match person images across visible and infrared modalities without identity annotations, addressing challenges such as cross-modal discrepancy and unlabeled data. Existing methods, however, often suffer from excessive sub-clusters, identity mixing, and unreliable cross-modal associations, which degrade matching performance. To overcome these issues, we propose MACHANet, a novel framework built from three modules. The Memory Learning via Progressive Hybrid Clustering (MLPHC) module reduces excessive sub-clustering and enhances memory representations by first applying Harmonic Discrepancy Clustering with harmonic constraints and a core-edge mechanism, then gradually transitioning to DBSCAN as features become more discriminative. The Global Cross-Modal Positive Sample Alignment (GCPSA) module constructs a global set of cross-modal positive pairs, selecting the most similar visible-infrared samples of the same identity and computing alignment losses both within and across modalities. By maximizing mutual information and minimizing cross-modal distribution gaps, GCPSA effectively reduces modality discrepancies and suppresses noisy identity associations. Finally, the Multi-Modal Support Sample Expansion Alignment (MSSEA) module dynamically expands multi-modal support samples and incorporates residual-based representations to refine clusters, separate mixed identities, and progressively merge sub-identities. Extensive experiments on SYSU-MM01 and RegDB show that MACHANet outperforms existing state-of-the-art methods, including some supervised approaches. The source code will be publicly released.
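The pair-selection idea behind GCPSA — for each visible sample, take the most similar infrared sample as its cross-modal positive — can be illustrated with a minimal sketch. This is a simplified stand-in, not the paper's module: the actual method further restricts pairs to the same pseudo-identity and feeds them into intra- and inter-modality alignment losses, and the function name here is hypothetical.

```python
def best_cross_modal_pairs(vis_feats, ir_feats):
    """For each visible feature vector, return the index of the most
    similar infrared feature vector under cosine similarity.

    Illustrative sketch only; real pipelines would also filter pairs
    by pseudo-identity labels before computing alignment losses.
    """
    def cos(a, b):
        num = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return num / (na * nb)

    # Greedy nearest-neighbor match from visible to infrared.
    return [max(range(len(ir_feats)), key=lambda j: cos(v, ir_feats[j]))
            for v in vis_feats]
```

For example, two visible features aligned with the two coordinate axes pair up with whichever infrared features lean toward the same axis.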
DriftTrace: Combating Concept Drift in Security Applications Through Detection and Explanation
Yuedong Pan; Lixin Zhao; Tao Leng; Zhexi Luo; Lijun Cai; Aimin Yu; Dan Meng
IEEE Transactions on Information Forensics and Security, vol. 21, pp. 1957-1972. DOI: 10.1109/TIFS.2026.3659398. Published 2026-01-29.
Concept drift refers to the deviation in data distribution over time, driven by dynamic changes in attackers or environments. This phenomenon poses a significant challenge for deploying machine learning models in cybersecurity. Existing approaches rely heavily on frequent retraining or distribution-level analyses, which are costly, labor-intensive, and often lack interpretability. To address these limitations, we propose DriftTrace, a novel system designed to detect, explain, and adapt to concept drift in security applications. Through comprehensive analysis, we uncover associations, consistencies, and diversities in security application features. Inspired by these findings, we detect drift at the sample level using a contrastive learning-based autoencoder, enabling fine-grained detection without requiring extensive labeling. For explanation, we employ a greedy feature selection strategy that links detection decisions to semantically relevant input features. To address data imbalance during adaptation, DriftTrace leverages sample interpolation techniques. We evaluate DriftTrace on Android malware datasets (Drebin and MalDroid2020) and a network intrusion dataset (IDS2018). Our system achieves an average detection $F_{1}$ score of more than 0.94, which is superior to the advanced baseline TRANSCENDENT, and improves the explanation fidelity by an average of 76% compared with CADE. These results highlight the practicality of DriftTrace for security scenarios.
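The sample-level detection idea — flag individual samples whose reconstruction error under a model fitted to the old distribution is anomalously high — can be sketched as follows. This is a generic illustration, not DriftTrace's contrastive autoencoder; the quantile threshold `q` and both function names are hypothetical choices.

```python
def drift_threshold(calib_errors, q=0.95):
    """Return the q-quantile of reconstruction errors measured on
    calibration (in-distribution) samples. Hypothetical stand-in for
    a threshold learned alongside the paper's detector."""
    s = sorted(calib_errors)
    idx = min(len(s) - 1, int(q * len(s)))
    return s[idx]

def detect_drift(errors, threshold):
    """Flag each incoming sample whose reconstruction error exceeds
    the calibration threshold as potentially drifted."""
    return [e > threshold for e in errors]
```

With 100 calibration errors and q=0.95, roughly the top 5% of future in-distribution samples would be flagged, so q trades false alarms against sensitivity to drift.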
Safeguarding Federated Learning From Data Reconstruction Attacks via Gradient Dropout
Ekanut Sotthiwat; Chi Zhang; Xiaokui Xiao; Liangli Zhen
IEEE Transactions on Information Forensics and Security, vol. 21, pp. 1874-1888. DOI: 10.1109/TIFS.2026.3659401. Published 2026-01-29.
Federated Learning (FL) enables collaborative model training across distributed participants without sharing raw data, offering a privacy-preserving paradigm. However, recent studies on gradient inversion attacks have demonstrated the vulnerability of FL to adversaries who can reconstruct sensitive local training data from shared gradients. To mitigate this threat, we propose Gradient Dropout, a novel defense mechanism that disrupts reconstruction attempts while preserving model utility. Specifically, Gradient Dropout perturbs gradients by randomly scaling a subset of components and replacing the remainder with Gaussian noise, thereby creating a transformed gradient space that significantly impedes reconstruction attempts. Moreover, this mechanism is applied across all layers of the model, ensuring that attackers cannot exploit any unperturbed gradients. Theoretical analysis reveals that the perturbed gradients can be kept sufficiently distant from their true values, thereby providing safety guarantees for the proposed algorithm. Furthermore, we demonstrate that this protection mechanism minimally impacts model performance, as the deviation between Gradient Dropout and the original training dynamics remains effectively bounded under certain convexity conditions. These findings are substantiated through experimental evaluations, where we show that various attack methods yield low-quality reconstructed images while model performance is largely preserved, with less than 2% accuracy reduction relative to the baseline. As such, Gradient Dropout is presented as an effective solution for safeguarding privacy in FL, providing a balanced trade-off between privacy protection, computational efficiency, and model accuracy.
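The core perturbation described above — scale a random subset of gradient components and replace the rest with Gaussian noise — can be sketched in a few lines. This is a minimal illustration of the mechanism as stated in the abstract; `keep_ratio`, `scale`, and `noise_std` are hypothetical parameters, not the paper's actual settings.

```python
import random

def gradient_dropout(grad, keep_ratio=0.5, scale=2.0, noise_std=0.01, rng=None):
    """Perturb a flat gradient vector: randomly scale a subset of
    components and replace all remaining components with Gaussian noise.

    Illustrative sketch only; a real deployment would apply this to
    every layer's gradient tensor before sharing it with the server.
    """
    rng = rng or random.Random()
    n = len(grad)
    # Indices of components that are kept (scaled) rather than replaced.
    kept = set(rng.sample(range(n), int(n * keep_ratio)))
    return [
        g * scale if i in kept else rng.gauss(0.0, noise_std)
        for i, g in enumerate(grad)
    ]
```

Because the server never learns which components were kept, an attacker running gradient inversion optimizes against a vector in which half the coordinates are pure noise.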
Model-Driven Learning-Based Physical Layer Authentication for Mobile Wi-Fi Devices
Yijia Guo, Junqing Zhang, Y.-W. Peter Hong, Stefano Tomasin
IEEE Transactions on Information Forensics and Security. DOI: 10.1109/tifs.2026.3657184. Published 2026-01-29.
A Novel Quantum-Based Mutual Authentication and Key Agreement Scheme for Smart Grid
Xiaoping Lou, Zidong Wang
IEEE Transactions on Information Forensics and Security. DOI: 10.1109/tifs.2026.3659003. Published 2026-01-29.
ISFL-AE: Insider-Specific Feature Learning Autoencoder for Lightweight Insider Threat Detection
Yujun Kim, Young-Gab Kim
IEEE Transactions on Information Forensics and Security. DOI: 10.1109/tifs.2026.3659000. Published 2026-01-28.
Consensus Labelling: Prompt-Guided Clustering Refinement for Weakly Supervised Text-based Person Re-Identification
Chengji Wang, Weizhi Nie, Hongbo Zhang, Hao Sun, Mang Ye
IEEE Transactions on Information Forensics and Security. DOI: 10.1109/tifs.2026.3658987. Published 2026-01-28.
Sparse VMamba: Robust Spatio-Temporal Information Modeling for Event Camera Person Re-Identification
Wenjiao Dong; Xi Yang; Nannan Wang
IEEE Transactions on Information Forensics and Security, vol. 21, pp. 1889-1901. DOI: 10.1109/TIFS.2026.3658994. Published 2026-01-28.
Event camera-based person re-identification (Re-ID) effectively addresses the challenges faced by traditional Re-ID systems, such as privacy leakage, low-light imaging degradation, and motion blur. However, traditional Convolutional Neural Networks (CNNs) struggle to model long-range spatio-temporal dependencies, while the Transformer architecture suffers a fundamental conflict between its quadratic computational complexity and the high temporal resolution of event streams. Additionally, sparse data leads to wasted computational resources and diluted effective data. In contrast, the Mamba architecture, with its long-term modeling capability and linear complexity, is better suited for event stream data. Therefore, we innovatively explore the potential of VMamba in event camera-based person Re-ID; however, directly using VMamba does not fully leverage the temporal asynchronicity and spatial sparsity inherent in event data. To address this, we design a novel Sparse VMamba framework to construct a more robust spatio-temporal information extraction mechanism. First, we develop a Spatio-Temporal Information Modeling (STIM) module that simultaneously employs CNNs and Gated Recurrent Units (GRUs) for modeling spatial and temporal information. Then, we enhance the robustness of sparse data feature extraction using two strategies: on one hand, we utilize an Anti-Noise Contour Enhancement (ANCE) module to improve motion contour features and mitigate sensor pulse noise; on the other hand, we implement a Direction-Aware Sparse Perception (DASP) module to encourage the model to extract robust person descriptors. Results on the Event-ReID-v1 and Event-ReID-v2 datasets validate the effectiveness of our approach.
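The temporal asynchronicity mentioned above is what distinguishes event streams from frame-based video: the sensor emits sparse (t, x, y, polarity) tuples rather than dense frames. A common preprocessing step, shown here purely for illustration (the paper's actual input representation is not described in the abstract), groups such a stream into fixed temporal bins before feeding it to a spatial backbone.

```python
from collections import defaultdict

def events_to_bins(events, t_start, t_end, n_bins):
    """Group an asynchronous event stream into n_bins equal temporal bins.

    Each event is a (t, x, y, polarity) tuple; events outside
    [t_start, t_end) are dropped. Generic event-camera preprocessing
    for illustration, not the paper's method.
    """
    width = (t_end - t_start) / n_bins
    bins = defaultdict(list)
    for t, x, y, p in events:
        if t_start <= t < t_end:
            idx = int((t - t_start) / width)
            bins[idx].append((x, y, p))
    # Return a dense list so empty bins are explicit.
    return [bins.get(i, []) for i in range(n_bins)]
```

Even after binning, most spatial locations in each bin carry no events, which is the sparsity that architectures like the one above try to exploit rather than pad away.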
From Gradient Analysis to Norm Control: Rethinking Triplet Loss for Gait Recognition
Guozhen Peng, Yunhong Wang, Zhuguanyu Wu, Shaoxiong Zhang, Yuwei Zhao, Ruiyi Zhan, Annan Li
IEEE Transactions on Information Forensics and Security. DOI: 10.1109/tifs.2026.3658989. Published 2026-01-28.