
Latest Articles in IEEE Transactions on Information Forensics and Security

Privacy-Preserving Generative Modeling With Sliced Wasserstein Distance
IF 6.3, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-12-12. DOI: 10.1109/TIFS.2024.3516549
Ziniu Liu;Han Yu;Kai Chen;Aiping Li
Large models require larger datasets. While people benefit from using massive amounts of data to train large models, they must also be concerned about privacy. To address this issue, we propose a novel approach to private generative modeling that uses the Sliced Wasserstein Distance (SWD) metric in a Differentially Private (DP) manner. We propose Normalized Clipping, a parameter-free clipping technique that generates higher-quality images. Through experiments, we demonstrate the advantages of Normalized Clipping over the traditional clipping method in parameter tuning and model performance. Moreover, experimental results indicate that our model outperforms previous methods on differentially private image generation tasks.
Citations: 0
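The SWD metric at the core of the abstract above reduces a high-dimensional optimal-transport problem to averaged 1-D Wasserstein distances along random projections. A minimal sketch of plain SWD estimation (function and parameter names are invented here; the paper's DP training and Normalized Clipping are not shown):

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=100, rng=None):
    """Monte Carlo estimate of the sliced Wasserstein distance (p = 1)
    between two equal-size sample sets X, Y of shape (n, d)."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)          # random unit direction
        # in 1-D, Wasserstein-1 reduces to comparing sorted projections
        px, py = np.sort(X @ theta), np.sort(Y @ theta)
        total += np.abs(px - py).mean()
    return total / n_proj
```

In the paper's setting, such a distance would be computed between generated and private samples, with per-sample gradients clipped and noised for DP; none of that machinery appears in this sketch.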
Hard Adversarial Example Mining for Improving Robust Fairness
IF 6.3, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-12-12. DOI: 10.1109/TIFS.2024.3516554
Chenhao Lin;Xiang Ji;Yulong Yang;Qian Li;Zhengyu Zhao;Zhe Peng;Run Wang;Liming Fang;Chao Shen
Adversarial training (AT) is widely considered the state-of-the-art technique for improving the robustness of deep neural networks (DNNs) against adversarial examples (AEs). Nevertheless, recent studies have revealed that adversarially trained models are prone to unfairness problems. Recent works in this field usually apply class-wise regularization methods to enhance the fairness of AT. However, this paper finds that these paradigms can be sub-optimal for improving robust fairness. Specifically, we empirically observe that AEs that are already robust (referred to as “easy AEs” in this paper) are useless and even harmful for improving robust fairness. To this end, we propose the hard adversarial example mining (HAM) technique, which concentrates on mining hard AEs while discarding the easy AEs during AT. Specifically, HAM separates the easy AEs from the hard AEs with a fast adversarial attack method. By discarding the easy AEs and reweighting the hard AEs, the robust fairness of the model can be efficiently and effectively improved. Extensive experimental results on four image classification datasets demonstrate that HAM improves robust fairness and training efficiency compared with several state-of-the-art fair adversarial training methods. Our code is available at https://github.com/yyl-github-1896/HAM.
Citations: 0
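The discard-and-reweight step can be caricatured in a few lines. A toy sketch (hypothetical helper, not the authors' implementation): AEs that fail to fool the model under a fast attack are dropped, and the batch loss is renormalized over the remaining hard AEs:

```python
import numpy as np

def ham_batch_loss(losses, is_easy):
    """Hard-example-mining style loss: discard easy AEs (still correctly
    classified under a fast attack) and reweight over hard AEs only.

    losses  : per-example adversarial losses, shape (B,)
    is_easy : boolean mask, True where the AE failed to fool the model
    """
    losses = np.asarray(losses, dtype=float)
    hard = ~np.asarray(is_easy)
    if not hard.any():              # degenerate batch: everything is easy
        return 0.0
    weights = hard / hard.sum()     # uniform reweighting over hard AEs
    return float((weights * losses).sum())
```

In practice the paper's reweighting would sit inside the AT loop, with the fast attack producing the easy/hard split each step.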
Attention Consistency Refined Masked Frequency Forgery Representation for Generalizing Face Forgery Detection
IF 6.3, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-12-12. DOI: 10.1109/TIFS.2024.3516561
Decheng Liu;Tao Chen;Chunlei Peng;Nannan Wang;Ruimin Hu;Xinbo Gao
Due to the rapid development of deep image generation technology, visual data forgery detection plays an increasingly important role in social and economic security. Existing forgery detection methods suffer from unsatisfactory generalization when determining authenticity in unseen domains. In this paper, we propose a novel Attention Consistency Refined masked frequency forgery representation model toward a generalizing face forgery detection algorithm (ACMF). Most forgery technologies introduce high-frequency cues, which make it easy to distinguish source authenticity but difficult to generalize to unseen artifact types. The masked frequency forgery representation module is designed to explore robust forgery cues by randomly discarding high-frequency information. In addition, we find that inconsistency of the forgery saliency maps produced by the detection network can hurt generalizability. Thus, forgery attention consistency is introduced to force detectors to focus on similar attention regions for better generalization ability. Experimental results on several public face forgery datasets (FaceForensics++, DFD, Celeb-DF, WDF, and DFDC) demonstrate the superior performance of the proposed method compared with state-of-the-art methods. The source code and models are publicly available at https://github.com/chenboluo/ACMF.
Citations: 0
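The "randomly discarding high-frequency information" idea can be sketched with a 2-D FFT. A minimal illustration under assumed details (the disk-shaped cutoff, the drop probability, and all names here are inventions, not the paper's module):

```python
import numpy as np

def mask_high_freq(img, radius_frac=0.25, drop_prob=0.5, rng=None):
    """Randomly zero out high-frequency FFT coefficients of a grayscale
    image (H, W), keeping a centered low-frequency disk of radius
    radius_frac * min(H, W)."""
    rng = np.random.default_rng(rng)
    H, W = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))          # DC moved to the center
    yy, xx = np.mgrid[:H, :W]
    r = np.hypot(yy - H / 2, xx - W / 2)           # distance from DC
    high = r > radius_frac * min(H, W)
    drop = high & (rng.random((H, W)) < drop_prob) # random subset of high freqs
    F[drop] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```

A detector trained on such masked inputs is pushed away from relying on high-frequency artifacts, which is the robustness argument the abstract makes.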
Information Leakage Measures for Imperfect Statistical Information: Application to Non-Bayesian Framework
IF 6.3, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-12-12. DOI: 10.1109/TIFS.2024.3516585
Shahnewaz Karim Sakib;George T. Amariucai;Yong Guan
This paper analyzes the problem of estimating information leakage when the complete statistics of the privacy mechanism are not known, and the only available information consists of several input-output pairs obtained through interaction with the system or through some side channel. Several metrics, such as subjective leakage, objective leakage, and confidence boost, were previously introduced for this purpose, but by design they only work in a Bayesian framework. However, Bayesian inference can quickly become intractable when the domains of the involved variables are large. In this paper, we focus on exactly this problem and propose a novel machine-learning approach to estimating the leakage measures in a non-Bayesian framework, where true knowledge of the privacy mechanism is beyond the reach of the user. Initially, we adapt the definitions of the leakage metrics to a non-Bayesian framework and derive their statistical bounds; afterward, we evaluate the performance of these metrics via various experiments using Neural Networks, Random Forest Classifiers, and Support Vector Machines. We have also evaluated their performance on an image dataset to demonstrate the versatility of the metrics. Finally, we provide a comparative analysis between our proposed metrics and those of the Bayesian framework.
Citations: 0
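Non-parametric thresholding with KDE, as mentioned in the abstract, amounts to estimating the score densities under each hypothesis and placing the decision boundary where they cross. A hedged pure-NumPy sketch (the Gaussian kernel, Silverman bandwidth, and crossing-point search are illustrative assumptions, not the paper's procedure):

```python
import numpy as np

def kde_pdf(samples, x, bandwidth=None):
    """Gaussian kernel density estimate of `samples`, evaluated at points `x`."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    if bandwidth is None:                       # Silverman's rule of thumb
        bandwidth = 1.06 * samples.std() * n ** (-1 / 5)
    diff = (x[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * diff ** 2).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))

def kde_threshold(scores_h0, scores_h1, grid_size=512):
    """Pick a decision threshold where the two estimated densities cross,
    searching on a grid between the two sample means."""
    grid = np.linspace(scores_h0.mean(), scores_h1.mean(), grid_size)
    p0, p1 = kde_pdf(scores_h0, grid), kde_pdf(scores_h1, grid)
    return grid[np.argmin(np.abs(p0 - p1))]
```

Because no parametric form is assumed for either score distribution, this kind of threshold works even when the mechanism's statistics are only known through samples, which is the setting the paper targets.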
Decaf: Data Distribution Decompose Attack Against Federated Learning
IF 6.3, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-12-12. DOI: 10.1109/TIFS.2024.3516545
Zhiyang Dai;Yansong Gao;Chunyi Zhou;Anmin Fu;Zhi Zhang;Minhui Xue;Yifeng Zheng;Yuqing Zhang
In contrast to prevalent Federated Learning (FL) privacy inference techniques such as generative adversarial network attacks, membership inference attacks, property inference attacks, and model inversion attacks, we devise an innovative privacy threat: the Data Distribution Decompose Attack on FL, termed Decaf. This attack enables an honest-but-curious FL server to meticulously profile the proportion of each class owned by a victim FL user, divulging sensitive information such as local market item distribution and business competitiveness. The crux of Decaf lies in the observation that the magnitude of local model gradient changes closely mirrors the underlying data distribution, including the proportion of each class. Decaf addresses two crucial challenges: accurately identifying the missing/null class(es) of any victim user as a premise, and then quantifying the precise relationship between gradient changes and each remaining non-null class. Notably, Decaf operates stealthily, rendering it entirely passive and undetectable to victim users regarding the infringement of their data distribution privacy. Experimental validation on five benchmark datasets (MNIST, FASHION-MNIST, CIFAR-10, FER-2013, and SkinCancer), employing diverse model architectures including customized convolutional networks, standardized VGG16, and ResNet18, demonstrates Decaf's efficacy. Results indicate its ability to accurately decompose local user data distribution, regardless of whether it is IID or non-IID distributed. Specifically, the dissimilarity, measured as the $L_{\infty}$ distance between the distribution decomposed by Decaf and the ground truth, is consistently below 5% when no null classes exist. Moreover, Decaf achieves 100% accuracy in determining any victim user's null classes, validated through formal proof.
Citations: 0
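The core observation — gradient-change magnitudes mirror class proportions — can be caricatured as a linear decomposition problem. The following toy (definitely not the paper's attack: the signature matrix `G`, the projected-gradient solver, and all names are invented for illustration) recovers mixing proportions from an aggregate gradient, assuming each class contributes a known gradient signature:

```python
import numpy as np

def decompose_proportions(G, g_obs, iters=5000, lr=0.01):
    """Toy linear decomposition: columns of G are per-class gradient
    signatures, g_obs is an observed aggregate gradient. Recover
    non-negative mixing proportions by projected gradient descent
    on ||G p - g_obs||^2, then report them normalized to sum to 1."""
    k = G.shape[1]
    p = np.full(k, 1.0 / k)
    for _ in range(iters):
        p = p - lr * G.T @ (G @ p - g_obs)   # gradient step on the residual
        p = np.clip(p, 0.0, None)            # enforce non-negativity
    return p / p.sum()                       # interpret as proportions
```

In the real attack the server has no clean per-class signatures; the abstract's null-class identification and gradient-change quantification replace this idealized linear model.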
Dualistic Disentangled Meta-Learning Model for Generalizable Person Re-Identification
IF 6.3, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-12-12. DOI: 10.1109/TIFS.2024.3516540
Jia Sun;Yanfeng Li;Luyifu Chen;Houjin Chen;Minjun Wang
Person re-identification (re-ID) is a research hotspot in the field of intelligent monitoring and security. Domain generalizable (DG) person re-identification transfers the trained model directly to an unseen target domain for testing, which is closer to practical application than supervised or unsupervised person re-ID. The meta-learning strategy is an effective way to solve the DG problem; nevertheless, existing meta-learning-based DG re-ID methods mainly simulate the test process in a single aspect such as identity or style, while ignoring the completely different person identities and styles in the unseen target domain. To address this problem, we consider a double disentangling at the two levels of training strategy and feature learning, and propose a novel dualistic disentangled meta-learning (D²ML) model. D²ML is composed of two disentangling stages. One concerns the learning strategy: it spreads the one-stage meta-test into two stages, an identity meta-test stage and a style meta-test stage. The other concerns feature representation: it decouples the shallow-layer features into identity-related features and style-related features. Specifically, we first conduct the identity meta-test stage on different person identities in the images, and then employ a feature-level style perturbation module (SPM) based on Fourier spectrum transformation to conduct the style meta-test stage on images with diversified styles. With these two stages, abundant changes in the unseen domain can be simulated during the meta-test phase. Besides, to learn more identity-related features, a feature disentangling module (FDM) is inserted at each stage of meta-learning and a disentangled triplet loss is developed. By constraining the relationship between identity-related features and style-related features, the generalization ability of the model can be further improved. Experimental results on four public datasets show that our D²ML model achieves superior generalization performance compared to state-of-the-art methods.
Citations: 0
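Fourier-spectrum style perturbation, as in the SPM above, is commonly realized by mixing low-frequency amplitude spectra (style) between images while keeping the phase (content). A hedged sketch under that assumption — the paper's exact SPM may differ, and the window size, mixing ratio, and names here are invented:

```python
import numpy as np

def fourier_style_perturb(img, ref, alpha=0.5, beta_frac=0.1):
    """Mix the low-frequency amplitude spectrum of `img` with that of `ref`
    while keeping img's phase, producing a style-perturbed image."""
    F_img = np.fft.fftshift(np.fft.fft2(img))
    F_ref = np.fft.fftshift(np.fft.fft2(ref))
    amp, pha = np.abs(F_img), np.angle(F_img)
    amp_ref = np.abs(F_ref)
    H, W = img.shape
    b = int(min(H, W) * beta_frac)
    cy, cx = H // 2, W // 2
    win = (slice(cy - b, cy + b), slice(cx - b, cx + b))
    # mix amplitudes only in a low-frequency window around the DC component
    amp[win] = (1 - alpha) * amp[win] + alpha * amp_ref[win]
    F_new = amp * np.exp(1j * pha)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_new)))
```

Applied at the feature level with randomly drawn references, such perturbations diversify styles during the style meta-test stage without changing identity content.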
MalFSCIL: A Few-Shot Class-Incremental Learning Approach for Malware Detection
IF 6.8, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-12-12. DOI: 10.1109/tifs.2024.3516565
Yuhan Chai, Ximing Chen, Jing Qiu, Lei Du, Yanjun Xiao, Qiying Feng, Shouling Ji, Zhihong Tian
Citations: 0
Online Two-Stage Channel-Based Lightweight Authentication Method for Time-Varying Scenarios
IF 6.3, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-12-12. DOI: 10.1109/TIFS.2024.3516575
Yuhong Xue;Zhutian Yang;Zhilu Wu;Hu Wang;Guan Gui
Physical Layer Authentication (PLA) emerges as a promising security solution, offering efficient identity verification for the Internet of Things (IoT). The advent of 5G/6G technologies has ushered in an era of extensive device connectivity, diverse networks, and complex application scenarios within IoT ecosystems. These advancements necessitate PLA systems that are highly secure, robust, capable of online processing, and adaptable to unknown channel conditions. In this paper, we introduce a novel two-stage PLA framework that synergizes channel prediction with power-delay attributes, ensuring superior performance in mobile and time-varying channel environments. Specifically, our approach employs Sparse Variational Gaussian Processes (SVGP) to accurately model and track real-time channel variations, leveraging historical data for online predictions without incurring significant computational or storage overhead. The second stage of our framework enhances the robustness of the authentication process by incorporating power-delay features, which are inherently resistant to temporal fluctuations, thereby eliminating the need for additional feature extraction in noisy settings. Moreover, our authentication scheme is designed to be distribution-agnostic, utilizing Kernel Density Estimation (KDE) for non-parametric threshold determination in hypothesis testing. Theoretical analysis underpins the generalization capabilities of our proposed method. Simulation results in mobile scenarios reveal that our two-stage PLA framework reduces complexity and significantly improves identity authentication performance, particularly in scenarios with low signal-to-noise ratios.
Citations: 0
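The distribution-agnostic thresholding described in the abstract above (fit a KDE to test statistics collected under the legitimate-user hypothesis, then choose the decision threshold whose right-tail mass equals a target false-alarm rate) can be sketched in a few lines. This is a minimal illustration using scipy's `gaussian_kde`, not the authors' implementation; the gamma-distributed scores and the 5% false-alarm target are stand-ins.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_threshold(legit_scores, false_alarm_rate=0.05, grid_size=2048):
    """Non-parametric decision threshold from legitimate-user test statistics.

    Fits a Gaussian KDE to the scores observed under the legitimate
    hypothesis and returns the threshold whose right-tail probability mass
    equals the target false-alarm rate; no distributional assumption on
    the channel is needed.
    """
    kde = gaussian_kde(legit_scores)
    lo, hi = legit_scores.min(), legit_scores.max()
    span = hi - lo
    # evaluate the density on a grid extended beyond the observed range
    grid = np.linspace(lo - 0.5 * span, hi + 0.5 * span, grid_size)
    pdf = kde(grid)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]  # uniform grid, so normalizing the cumsum approximates the CDF
    # smallest grid point whose CDF reaches 1 - false_alarm_rate
    return grid[np.searchsorted(cdf, 1.0 - false_alarm_rate)]

rng = np.random.default_rng(0)
scores = rng.gamma(shape=2.0, scale=1.0, size=5000)  # stand-in test statistic
tau = kde_threshold(scores, false_alarm_rate=0.05)
print(f"threshold = {tau:.3f}")
```

The empirical false-alarm rate on the training scores then lands near the 5% target, up to KDE smoothing bias.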
Joint Identity Verification and Pose Alignment for Partial Fingerprints
IF 6.3 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2024-12-12 DOI: 10.1109/TIFS.2024.3516566 Vol. 20, pp. 249-263
Xiongjun Guan;Zhiyu Pan;Jianjiang Feng;Jie Zhou
Portable electronic devices are increasingly popular, and for lightweight designs their fingerprint recognition modules usually rely on limited-size sensors. However, partial fingerprints offer few matchable features, especially when finger pressing posture or image quality differs between captures, which makes partial fingerprint verification challenging. Most existing methods treat fingerprint pose rectification and identity verification as independent tasks, ignoring the coupling between them: relative pose estimation typically relies on paired features as anchors, and authentication accuracy tends to improve with more precise pose alignment. In this paper, we propose a novel framework for joint identity verification and pose alignment of partial fingerprint pairs, aiming to leverage their inherent correlation so that each task improves the other. To achieve this, we present a multi-task hybrid CNN (Convolutional Neural Network)-Transformer network and design a pre-training task to enhance its feature extraction capability. Experiments on multiple public datasets (NIST SD14, FVC2002 DB1_A & DB3_A, FVC2004 DB1_A & DB2_A, FVC2006 DB1_A) and an in-house dataset demonstrate that our method achieves state-of-the-art performance in both partial fingerprint verification and relative pose estimation, while being more efficient than previous methods. Code is available at: https://github.com/XiongjunGuan/JIPNet.
Citations: 0
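The relative pose between two partial fingerprints, as discussed in the abstract above, is a 2D rigid transform (rotation plus translation). The paper estimates it with a learned network, but the classical least-squares baseline from paired anchor points is the Kabsch/Procrustes solution, sketched here; the synthetic point set and the 30-degree pose are illustrative, not from the paper.

```python
import numpy as np

def rigid_align_2d(src, dst):
    """Least-squares 2D rigid transform (R, t) such that dst ~= src @ R.T + t.

    Classical Kabsch/Procrustes solution from paired anchor points
    (e.g. matched minutiae between two fingerprint patches).
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t

# synthetic check: rotate a point set by 30 degrees and shift it
rng = np.random.default_rng(1)
pts = rng.normal(size=(40, 2))
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
moved = pts @ R_true.T + np.array([5.0, -2.0])
R_est, t_est = rigid_align_2d(pts, moved)
print(np.allclose(R_est, R_true))
```

With noiseless correspondences the recovered rotation and translation match the ground truth to machine precision; with noisy or partially wrong matches, the same solver is typically wrapped in RANSAC.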
Provable Privacy Advantages of Decentralized Federated Learning via Distributed Optimization
IF 6.3 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2024-12-12 DOI: 10.1109/TIFS.2024.3516564 Vol. 20, pp. 822-838
Wenrui Yu;Qiongxiu Li;Milan Lopuhaä-Zwakenberg;Mads Græsbøll Christensen;Richard Heusdens
Federated learning (FL) emerged as a paradigm designed to improve data privacy by enabling data to reside at its source, thus embedding privacy as a core consideration in FL architectures, whether centralized or decentralized. Contrasting with recent findings by Pasquini et al., which suggest that decentralized FL does not empirically offer any additional privacy or security benefits over centralized models, our study provides compelling evidence to the contrary. We demonstrate that decentralized FL, when deploying distributed optimization, provides enhanced privacy protection - both theoretically and empirically - compared to centralized approaches. The challenge of quantifying privacy loss through iterative processes has traditionally constrained the theoretical exploration of FL protocols. We overcome this by conducting a pioneering in-depth information-theoretical privacy analysis for both frameworks. Our analysis, considering both eavesdropping and passive adversary models, successfully establishes bounds on privacy leakage. In particular, we show information theoretically that the privacy loss in decentralized FL is upper bounded by the loss in centralized FL. Compared to the centralized case where local gradients of individual participants are directly revealed, a key distinction of optimization-based decentralized FL is that the relevant information includes differences of local gradients over successive iterations and the aggregated sum of different nodes’ gradients over the network. This information complicates the adversary’s attempt to infer private data. To bridge our theoretical insights with practical applications, we present detailed case studies involving logistic regression and deep neural networks. These examples demonstrate that while privacy leakage remains comparable in simpler models, complex models like deep neural networks exhibit lower privacy risks under decentralized FL. Extensive numerical tests further validate that decentralized FL is more resistant to privacy attacks, aligning with our theoretical findings.
Citations: 0
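The information flow the abstract above attributes to optimization-based decentralized FL (no central aggregator; each node only sees mixtures of its neighbors' values) can be illustrated with a toy gossip-averaging simulation. The ring topology, node count, and mixing weights below are illustrative assumptions, not the paper's protocol or its privacy analysis.

```python
import numpy as np

def gossip_average(grads, W, iters=300):
    """Decentralized averaging: each node repeatedly mixes its vector with
    its neighbors' via a doubly stochastic weight matrix W. All nodes
    converge to the global mean without any central server ever observing
    an individual node's gradient."""
    x = grads.copy()
    for _ in range(iters):
        x = W @ x
    return x

# ring of 5 nodes with uniform self/neighbor mixing weights (doubly stochastic)
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3

rng = np.random.default_rng(2)
local_grads = rng.normal(size=(n, 4))  # one local gradient vector per node
mixed = gossip_average(local_grads, W)
print(np.allclose(mixed, local_grads.mean(axis=0)))
```

After enough mixing rounds every node holds the network-wide average; an eavesdropper on a single link only observes successive neighbor mixtures rather than a direct dump of each participant's local gradient, which is the structural difference the paper's leakage bounds build on.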