
Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia: Latest Publications

Comparative Study of Adversarial Training Methods for Cold-Start Recommendation
Haokai Ma, Xiangxian Li, Lei Meng, Xiangxu Meng
Adversarial training in recommendation originated as a way to improve the robustness of recommenders to attack signals, and it has recently shown promising results in alleviating cold-start recommendation. However, existing methods usually have to trade off model robustness against performance, and the underlying reasons why training with adversarial samples works have not been sufficiently verified. To address this issue, this paper identifies the key components of existing adversarial training methods and presents a taxonomy that defines these methods using three levels of components for perturbation generation, perturbation incorporation, and model optimization. Based on this taxonomy, different variants of existing methods are created, and a comparative study is conducted to verify the influence of each component in cold-start recommendation. Experimental results on two benchmark datasets show that existing state-of-the-art algorithms can be further improved by a proper pairing of the key components listed in the taxonomy. Moreover, case studies and visualization are used to analyze the influence of item content information on cold-start recommendation and to explain the working mechanisms of the components proposed in the taxonomy. These findings verify the effectiveness of the proposed taxonomy as a design paradigm for adversarial training.
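As a concrete illustration of the three taxonomy levels, the sketch below writes one adversarial training step for a BPR-style matrix-factorization recommender in PyTorch: FGSM-style gradients generate the perturbation, it is incorporated additively into the item embeddings, and the model is optimized on a weighted clean-plus-adversarial loss. The embedding sizes, `eps`, and `adv_weight` are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn.functional as F

def adversarial_bpr_step(user_emb, item_emb, users, pos_items, neg_items,
                         eps=0.5, adv_weight=1.0):
    """One adversarial training step for a BPR matrix-factorization recommender.

    Illustrates the three taxonomy levels:
      1) perturbation generation    -- gradient (FGSM-style) noise on item embeddings
      2) perturbation incorporation -- additive noise on the embedding vectors
      3) model optimization         -- joint clean + adversarial BPR loss
    """
    u = user_emb(users)                                   # (B, d)
    pi, ni = item_emb(pos_items), item_emb(neg_items)     # (B, d) each

    # Clean BPR loss
    clean_loss = -F.logsigmoid((u * pi).sum(-1) - (u * ni).sum(-1)).mean()

    # 1) Perturbation generation: gradient of the clean loss w.r.t. item embeddings
    grad_pi, grad_ni = torch.autograd.grad(clean_loss, [pi, ni], retain_graph=True)
    delta_pi = eps * F.normalize(grad_pi, dim=-1)
    delta_ni = eps * F.normalize(grad_ni, dim=-1)

    # 2) Perturbation incorporation: additive perturbation on the embeddings
    adv_loss = -F.logsigmoid((u * (pi + delta_pi)).sum(-1)
                             - (u * (ni + delta_ni)).sum(-1)).mean()

    # 3) Model optimization: weighted sum of clean and adversarial objectives
    return clean_loss + adv_weight * adv_loss

# Usage sketch with toy sizes (all values are placeholders)
user_emb = torch.nn.Embedding(1000, 64)
item_emb = torch.nn.Embedding(5000, 64)
users = torch.randint(0, 1000, (32,))
pos = torch.randint(0, 5000, (32,))
neg = torch.randint(0, 5000, (32,))
loss = adversarial_bpr_step(user_emb, item_emb, users, pos, neg)
loss.backward()
```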
{"title":"Comparative Study of Adversarial Training Methods for Cold-Start Recommendation","authors":"Haokai Ma, Xiangxian Li, Lei Meng, Xiangxu Meng","doi":"10.1145/3475724.3483600","DOIUrl":"https://doi.org/10.1145/3475724.3483600","url":null,"abstract":"Adversarial training in recommendation is originated to improve the robustness of recommenders to attack signals and has recently shown promising results to alleviate cold-start recommendation. However, existing methods usually should make a trade-off between model robustness and performance, and the underlying reasons why using adversarial samples for training works has not been sufficiently verified. To address this issue, this paper identifies the key components of existing adversarial training methods and presents a taxonomy that defines these methods using three levels of components for perturbation generation, perturbation incorporation, and model optimization. Based on this taxonomy, different variants of existing methods are created, and a comparative study is conducted to verify the influence of each component in cold-start recommendation. Experimental results on two benchmarking datasets show that existing state-of-the-art algorithms can be further improved by a proper pairing of the key components as listed in the taxonomy. Moreover, using case studies and visualization, the influence of the content information of items on cold-start recommendation has been analyzed, and the explanations for the working mechanism of different components as proposed in the taxonomy have been offered. These verify the effectiveness of the proposed taxonomy as a design paradigm for adversarial training.","PeriodicalId":279202,"journal":{"name":"Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia","volume":"258 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116490167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
Comparative Study of Adversarial Training Methods for Long-tailed Classification
Xiangxian Li, Haokai Ma, Lei Meng, Xiangxu Meng
Adversarial training originated in image classification to address the problem of adversarial attacks, where an invisible perturbation in an image leads to a significant change in the model's decision. It has recently been observed to be effective in alleviating the long-tailed classification problem, where imbalanced class sizes cause the model to perform much worse on small classes. However, existing methods typically focus on how to generate perturbations for the data, while the contributions of different perturbations to long-tailed classification have not been well analyzed. To this end, this paper investigates the perturbation generation and incorporation components of existing adversarial training methods and proposes a taxonomy that defines these methods using three levels of components, in terms of information, methodology, and optimization. This taxonomy may serve as a design paradigm in which an adversarial training algorithm can be created by combining different components of the taxonomy. A comparative study is conducted to verify the influence of each component in long-tailed classification. Experimental results on two benchmark datasets show that a combination of statistical perturbations and hybrid optimization achieves promising performance, and that the gradient-based method typically improves the performance of both head and tail classes. More importantly, it is verified that a reasonable combination of the components in our taxonomy can create an algorithm that outperforms the state-of-the-art.
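A minimal sketch of the kinds of components compared here, assuming a standard PyTorch image classifier: a gradient-based (FGSM-style) perturbation, a statistical (Gaussian) perturbation, and a hybrid clean-plus-adversarial objective. The `eps`, `sigma`, and `alpha` values are placeholders, not the settings used in the paper's experiments.

```python
import torch
import torch.nn.functional as F

def perturb_gradient(model, x, y, eps=8 / 255):
    """Gradient-based (FGSM-style) perturbation in input space."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def perturb_statistical(x, sigma=0.05):
    """Statistical perturbation: data-independent Gaussian noise."""
    return (x + sigma * torch.randn_like(x)).clamp(0, 1)

def hybrid_loss(model, x, y, alpha=0.5):
    """Hybrid optimization: weighted clean + adversarial cross-entropy."""
    x_adv = perturb_gradient(model, x, y)
    return (alpha * F.cross_entropy(model(x), y)
            + (1 - alpha) * F.cross_entropy(model(x_adv), y))
```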
{"title":"Comparative Study of Adversarial Training Methods for Long-tailed Classification","authors":"Xiangxian Li, Haokai Ma, Lei Meng, Xiangxu Meng","doi":"10.1145/3475724.3483601","DOIUrl":"https://doi.org/10.1145/3475724.3483601","url":null,"abstract":"Adversarial training is originated in image classification to address the problem of adversarial attacks, where an invisible perturbation in an image leads to a significant change in model decision. It recently has been observed to be effective in alleviating the long-tailed classification problem, where an imbalanced size of classes makes the model has much lower performance on small classes. However, existing methods typically focus on the methods to generate perturbations for data, while the contributions of different perturbations to long-tailed classification have not been well analyzed. To this end, this paper presents an investigation on the perturbation generation and incorporation components of existing adversarial training methods and proposes a taxonomy that defines these methods using three levels of components, in terms of information, methodology, and optimization. This taxonomy may serve as a design paradigm where an adversarial training algorithm can be created by combining different components in the taxonomy. A comparative study is conducted to verify the influence of each component in long-tailed classification. Experimental results on two benchmarking datasets show that a combination of statistical perturbations and hybrid optimization achieves a promising performance, and the gradient-based method typically improves the performance of both the head and tail classes. More importantly, it is verified that a reasonable combination of the components in our taxonomy may create an algorithm that outperforms the state-of-the-art.","PeriodicalId":279202,"journal":{"name":"Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia","volume":"180 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122005389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 12
Enhancing Adversarial Examples Transferability via Ensemble Feature Manifolds
Dongdong Yang, Wenjie Li, R. Ni, Yao Zhao
An adversarial attack is a technique that causes intended misclassification by adding imperceptible perturbations to benign inputs, and it provides a way to evaluate the robustness of models. Many existing adversarial attacks achieve good performance in white-box settings. However, the adversarial examples generated by these attacks typically overfit the particular architecture of the source model, resulting in low transferability in black-box scenarios. In this work, we propose a novel feature attack method called Features-Ensemble Generative Adversarial Network (FEGAN), which ensembles multiple feature manifolds to capture the intrinsic adversarial information that is most likely to cause misclassification across many models, thereby improving the transferability of adversarial examples. Accordingly, a generator trained on various latent feature vectors of benign inputs can produce adversarial examples containing this adversarial information. Extensive experiments on the MNIST and CIFAR10 datasets demonstrate that the proposed method improves the transferability of adversarial examples while maintaining the attack success rate in the white-box scenario. In addition, the generated adversarial examples are more realistic, with a distribution close to that of the actual data.
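The abstract does not spell out FEGAN's losses, so the following is only a hedged sketch of the ensemble-feature idea: a single perturbation generator is trained against the feature manifolds and prediction heads of several source models at once, so the perturbation captures adversarial directions shared across the ensemble. `generator`, `feature_extractors`, and `classifiers` are assumed callables, and the loss forms are illustrative, not the paper's.

```python
import torch
import torch.nn.functional as F

def ensemble_feature_attack_loss(generator, feature_extractors, classifiers,
                                 x, y, eps=8 / 255):
    """Train a generator so one perturbation fools several source models at once.

    feature_extractors[i] and classifiers[i] split the i-th source model into a
    feature (manifold) part and a prediction head; the losses are averaged over
    the ensemble so the perturbation captures shared adversarial directions.
    """
    delta = eps * torch.tanh(generator(x))        # bounded perturbation
    x_adv = (x + delta).clamp(0, 1)

    feat_loss, cls_loss = 0.0, 0.0
    for f, c in zip(feature_extractors, classifiers):
        clean_feat = f(x).detach()
        adv_feat = f(x_adv)
        # minimizing the negative terms pushes adversarial features away from the
        # clean manifold and away from the true label for every source model
        feat_loss = feat_loss - F.mse_loss(adv_feat, clean_feat)
        cls_loss = cls_loss - F.cross_entropy(c(adv_feat), y)
    n = len(feature_extractors)
    return (feat_loss + cls_loss) / n
```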
{"title":"Enhancing Adversarial Examples Transferability via Ensemble Feature Manifolds","authors":"Dongdong Yang, Wenjie Li, R. Ni, Yao Zhao","doi":"10.1145/3475724.3483608","DOIUrl":"https://doi.org/10.1145/3475724.3483608","url":null,"abstract":"The adversarial attack is a technique that causes intended misclassification by adding imperceptible perturbations to benign inputs. It provides a way to evaluate the robustness of models. Many existing adversarial attacks have achieved good performance in the white-box settings. However, these adversarial examples generated by various attacks typically overfit the particular architecture of the source model, resulting in low transferability in the black-box scenarios. In this work, we propose a novel feature attack method called Features-Ensemble Generative Adversarial Network (FEGAN), which ensembles multiple feature manifolds to capture intrinsic adversarial information that is most likely to cause misclassification of many models, thereby improving the transferability of adversarial examples. Accordingly, a generator trained based on various latent feature vectors of benign inputs can produce adversarial examples containing this adversarial information. Extensive experiments on the MNIST and CIFAR10 datasets demonstrate that the proposed method improves the transferability of adversarial examples while ensuring the attack success rate in the white-box scenario. In addition, the generated adversarial examples are more realistic with distribution close to that of the actual data.","PeriodicalId":279202,"journal":{"name":"Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia","volume":"2 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113932037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
An Investigation on Sparsity of CapsNets for Adversarial Robustness
Lei Zhao, Lei Huang
The routing-by-agreement mechanism in capsule networks (CapsNets) builds visual hierarchical relationships by assigning parts to wholes. The connections between capsules of different layers become sparser with more iterations of routing. This paper proposes techniques for measuring, controlling, and visualizing the sparsity of CapsNets. One essential observation of this paper is that sparser CapsNets are possibly more robust to adversarial attacks. We believe this observation will provide insights for designing more robust models.
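The abstract does not define the sparsity measures, so the sketch below shows two common proxies one might apply to a CapsNet's routing (coupling) coefficients: the fraction of near-zero part-to-whole connections, and the normalized entropy of each lower capsule's assignment distribution. Both the threshold and the metric choice are assumptions.

```python
import torch

def routing_sparsity(coupling, thresh=1e-2):
    """Two simple sparsity measures for CapsNet coupling coefficients.

    coupling: tensor of shape (batch, in_capsules, out_capsules) whose rows sum
    to 1 (the softmax output of routing-by-agreement).
    """
    # 1) fraction of near-zero connections between consecutive capsule layers
    frac_zero = (coupling < thresh).float().mean()

    # 2) mean normalized entropy of each lower capsule's assignment distribution
    #    (0 = one-hot / maximally sparse, 1 = uniform / dense)
    p = coupling.clamp_min(1e-12)
    entropy = -(p * p.log()).sum(-1) / torch.log(torch.tensor(float(coupling.shape[-1])))
    return frac_zero.item(), entropy.mean().item()
```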
{"title":"An Investigation on Sparsity of CapsNets for Adversarial Robustness","authors":"Lei Zhao, Lei Huang","doi":"10.1145/3475724.3483609","DOIUrl":"https://doi.org/10.1145/3475724.3483609","url":null,"abstract":"The routing-by-agreement mechanism in capsule networks (CapsNets) is used to build visual hierarchical relationships with a characteristic of assigning parts to wholes. The connections between capsules of different layers become sparser with more iterations of routing. This paper proposes techniques in measuring, controlling, and visualizing the sparsity of CapsNets. One essential observation in this paper is that the sparser CapsNets are possibly more robust to the adversarial attacks. We believe this observation will provide insights into designing more robust models.","PeriodicalId":279202,"journal":{"name":"Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130806518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
Improving Generalization of Deepfake Detection with Domain Adaptive Batch Normalization
Zixin Yin, Jiakai Wang, Yifu Ding, Yisong Xiao, Jun Guo, Renshuai Tao, Haotong Qin
Deepfake, a well-known face forgery technique, has raised serious concerns about personal privacy and social media security. As a result, many deepfake detection methods have emerged and achieve outstanding performance in the single-dataset case. However, current deepfake detection methods fail to generalize well in the cross-dataset case due to the domain gap. To tackle this issue, we propose a Domain Adaptive Batch Normalization (DABN) strategy to mitigate the domain distribution gap across datasets. Specifically, DABN replaces the original distribution statistics with those of the testing dataset so as to avoid distribution mismatch and restore the effectiveness of the BN layers. Equipped with DABN, a detection method becomes more robust when generalized to broader usage. Note that our method is flexible and can be applied to most existing deepfake detection methods at test time, which shows great practical value. Extensive experiments on multiple datasets and models demonstrate the effectiveness of DABN. The proposed method achieves an average accuracy improvement of nearly 20% over existing strategies on the Celeb-DF dataset under black-box settings, indicating a strong enhancement of the generalization ability of deepfake detection models.
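A minimal PyTorch sketch of the statistics-replacement idea described above: keep all learned weights, but re-estimate every BatchNorm layer's running mean and variance on the (unlabeled) target dataset. DABN's exact adaptation procedure may differ; this shows only the core mechanism.

```python
import torch

@torch.no_grad()
def adapt_bn_statistics(model, target_loader, device="cpu"):
    """Re-estimate BatchNorm running statistics on the target-domain data.

    All trained weights stay fixed; only the per-layer mean/variance used at
    inference time are replaced with target-domain statistics.
    """
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
            m.momentum = None          # None => cumulative moving average in PyTorch
    model.train()                      # BN updates running stats only in train mode
    for x, _ in target_loader:
        model(x.to(device))
    model.eval()
    return model
```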
{"title":"Improving Generalization of Deepfake Detection with Domain Adaptive Batch Normalization","authors":"Zixin Yin, Jiakai Wang, Yifu Ding, Yisong Xiao, Jun Guo, Renshuai Tao, Haotong Qin","doi":"10.1145/3475724.3483603","DOIUrl":"https://doi.org/10.1145/3475724.3483603","url":null,"abstract":"Deepfake, a well-known face forgery technique, has raised serious concerns about personal privacy and social media security. Therefore, a plenty of deepfake detection methods come out and achieve outstanding performance in the single dataset case. However, current deepfake detection methods fail to perform strong generalization ability in cross-dataset case due to the domain gap. To tackle this issue, we propose Domain Adaptive Batch Normalization (DABN) strategy to mitigate the domain distribution gap on different datasets. Specifically, DABN utilizes the distribution statistics of the testing dataset in replace of the original counterparts so as to avoid distribution mismatch and restore the effectiveness of BN layers. Equipped with our DABN, detection method can be more robust when generalized into a broader usage. Note that our method is flexible and can be further employed on most existing deepfake detection methods during testing, which shows a great practical value. Extensive experiments on multiple datasets and models demonstrate the effectiveness of DABN. The proposed method achieves an average accuracy improvement by nearly 20% of existing strategies on Celeb-DF dataset under black-box settings, indicating strong enhancement of generalization ability of the deepfake detection models.","PeriodicalId":279202,"journal":{"name":"Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia","volume":"289 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114568959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
Detecting Adversarial Patch Attacks through Global-local Consistency
Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, Jilin Li, Feiyue Huang
Recent works have clearly demonstrated the threat of adversarial patch attacks to real-world vision media systems. By arbitrarily modifying pixels within a small restricted area of an image, adversarial patches can mislead neural-network-based image classifiers. In this paper, we propose a simple but very effective approach to detect adversarial patches, based on an interesting observation called global-local consistency. We verify this insight and propose a Random-Local-Ensemble (RLE) strategy to further enhance it in detection. The proposed method is trivial to implement and can be applied to protect any image classification model. Experiments on two popular datasets show that our algorithm can accurately detect adversarial patches while maintaining high clean accuracy. Moreover, unlike prior detection approaches that can easily be broken by adaptive attacks, our method proves highly robust when facing adaptive attacks.
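A hedged sketch of the global-local consistency check: the prediction on the full image is compared with predictions on several randomly sampled local crops, and low agreement flags a suspected patch. The crop size, number of crops, and threshold below are illustrative and not the paper's RLE configuration.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def global_local_consistency(model, x, num_crops=8, crop_frac=0.5, agree_thresh=0.5):
    """Flag inputs whose random local views disagree with the global prediction."""
    B, C, H, W = x.shape
    global_pred = model(x).argmax(dim=1)                       # (B,)

    ch, cw = int(H * crop_frac), int(W * crop_frac)
    agree = torch.zeros(B, device=x.device)
    for _ in range(num_crops):
        top = torch.randint(0, H - ch + 1, (1,)).item()
        left = torch.randint(0, W - cw + 1, (1,)).item()
        crop = x[:, :, top:top + ch, left:left + cw]
        crop = F.interpolate(crop, size=(H, W), mode="bilinear", align_corners=False)
        agree += (model(crop).argmax(dim=1) == global_pred).float()

    # low agreement between local views and the global label => suspected patch
    return (agree / num_crops) < agree_thresh                  # (B,) boolean mask
```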
{"title":"Detecting Adversarial Patch Attacks through Global-local Consistency","authors":"Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, Jilin Li, Feiyue Huang","doi":"10.1145/3475724.3483606","DOIUrl":"https://doi.org/10.1145/3475724.3483606","url":null,"abstract":"Recent works have well-demonstrated the threat of adversarial patch attacks to real-world vision media systems. By arbitrarily modifying pixels within a small restricted area in the image, adversarial patches can mislead neural-network-based image classifiers. In this paper, we propose a simple but very effective approach to detect adversarial patches based on an interesting observation called global-local consistency. We verify this insight and propose to use Random-Local-Ensemble (RLE) strategy to further enhance it in the detection. The proposed method is trivial to implement and can be applied to protect any image classification models. Experiments on two popular datasets show that our algorithm can accurately detect the adversarial patches while maintaining high clean accuracy. Moreover, unlike the prior detection approaches which can be easily broken by adaptive attacks, our method is proved to have high robustness when facing adaptive attacks.","PeriodicalId":279202,"journal":{"name":"Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125405182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
Frequency Centric Defense Mechanisms against Adversarial Examples
Sanket B. Shah, Param Raval, Harin Khakhi, M. Raval
An adversarial example (AE) aims to fool a convolutional neural network by introducing small perturbations into the input image. The proposed work uses the magnitude and phase of the Fourier spectrum and the entropy of the image to defend against AEs. We demonstrate the defense in two ways: by training an adversarial detector and by denoising the adversarial effect. Experiments were conducted on the low-resolution CIFAR-10 and high-resolution ImageNet datasets. The adversarial detector has 99% accuracy for FGSM and PGD attacks on the CIFAR-10 dataset. However, the detection accuracy falls to 50% for the more sophisticated DeepFool and Carlini & Wagner attacks on ImageNet. We overcome this limitation by using an autoencoder and show that 70% of AEs are correctly classified after denoising.
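A small sketch of the three cues the abstract names as the detector's input, namely the Fourier magnitude, the Fourier phase, and the image entropy; the authors' actual feature construction and detector architecture may differ.

```python
import numpy as np

def frequency_features(img_gray, bins=256):
    """Build a feature vector from the Fourier magnitude, phase, and image entropy.

    img_gray: 2-D float array with values in [0, 1].
    """
    spec = np.fft.fftshift(np.fft.fft2(img_gray))
    magnitude = np.log1p(np.abs(spec))          # log-magnitude spectrum
    phase = np.angle(spec)                      # phase spectrum

    hist, _ = np.histogram(img_gray, bins=bins, range=(0.0, 1.0))
    hist = hist / (hist.sum() + 1e-12)
    entropy = -np.sum(hist * np.log2(hist + 1e-12))   # Shannon entropy of intensities

    return np.concatenate([magnitude.ravel(), phase.ravel(), [entropy]])
```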
{"title":"Frequency Centric Defense Mechanisms against Adversarial Examples","authors":"Sanket B. Shah, Param Raval, Harin Khakhi, M. Raval","doi":"10.1145/3475724.3483610","DOIUrl":"https://doi.org/10.1145/3475724.3483610","url":null,"abstract":"Adversarial example(AE) aims at fooling a Convolution Neural Network by introducing small perturbations in the input image. The proposed work uses the magnitude and phase of the Fourier Spectrum and the entropy of the image to defend against AE. We demonstrate the defense in two ways: by training an adversarial detector and denoising the adversarial effect. Experiments were conducted on the low-resolution CIFAR-10 and high-resolution ImageNet datasets. The adversarial detector has 99% accuracy for FGSM and PGD attacks on the CIFAR-10 dataset. However, the detection accuracy falls to 50% for sophisticated DeepFool and Carlini & Wagner attacks on ImageNet. We overcome the limitation by using autoencoder and show that 70% of AEs are correctly classified after denoising.","PeriodicalId":279202,"journal":{"name":"Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125357767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
Generating Adversarial Remote Sensing Images via Pan-Sharpening Technique
Maoxun Yuan, Xingxing Wei
Pan-sharpening is one of the most commonly used techniques in remote sensing; it fuses panchromatic (PAN) and multispectral (MS) images to obtain images with both high spectral and high spatial resolution. Because of these advantages, researchers usually apply object detectors to pan-sharpened images to achieve reliable detection results. However, recent studies have shown that deep learning-based object detection methods are vulnerable to adversarial examples, i.e., adding imperceptible noise to clean images can fool well-trained deep neural networks. It is therefore interesting to combine the pan-sharpening technique with adversarial examples to attack object detectors in remote sensing. In this paper, we propose a method to generate adversarial pan-sharpened images. We utilize a generative network to produce the pan-sharpened images and then propose a shape loss and a label loss to perform the attack. To guarantee the quality of the pan-sharpened images, a perceptual loss is utilized to balance spectral preservation against attack performance. The proposed method is applied to attack two object detectors: Faster R-CNN and Feature Pyramid Networks (FPN). Experimental results on GaoFen-1 satellite images demonstrate that the proposed method can generate effective adversarial images: the mAP of Faster R-CNN with VGG16 drops significantly from 0.870 to 0.014.
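For background on the fusion step itself, the sketch below implements the classical Brovey-transform pan-sharpening baseline, which injects the PAN band's spatial detail into upsampled MS bands. The paper instead learns this fusion with a generative network and adds the shape, label, and perceptual losses on top; the baseline here is only a reference point, not the authors' method.

```python
import numpy as np

def brovey_pansharpen(pan, ms_up, eps=1e-6):
    """Classical Brovey-transform pan-sharpening.

    pan:   (H, W)     high-resolution panchromatic band
    ms_up: (H, W, B)  multispectral bands upsampled to the PAN grid
    Returns the fused (H, W, B) image: each MS band is rescaled by the ratio of
    the PAN band to a crude intensity component, injecting PAN spatial detail.
    """
    intensity = ms_up.mean(axis=-1, keepdims=True)
    return ms_up * (pan[..., None] / (intensity + eps))
```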
{"title":"Generating Adversarial Remote Sensing Images via Pan-Sharpening Technique","authors":"Maoxun Yuan, Xingxing Wei","doi":"10.1145/3475724.3483602","DOIUrl":"https://doi.org/10.1145/3475724.3483602","url":null,"abstract":"Pan-sharpening is one of the most commonly used techniques in remote sensing, which fuses panchromatic (PAN) and multispectral (MS) images to obtain both the high spectral and high spatial resolution images. Due to these advantages, researchers usually apply object detectors on these pan-sharpened images to achieve reliable detection results. However, recent studies have shown that deep learning-based object detection methods are vulnerable to adversarial examples, i.e., adding imperceptible noises on clean images can fool well-trained deep neural networks. It is interesting to combine the pan-sharpening technique and adversarial examples to attack object detectors in remote sensing. In this paper, we propose a method to generate adversarial pan-sharpened images. We utilize a generative network to generate the pan-sharpened images, and then propose the shape loss and label loss to perform the attack task. To guarantee the quality of pan-sharpened images, a perceptual loss is utilized to balance spectral preserving and attacking performance. The proposed method is applied to attack two object detectors: Faster R-CNN and Feature Pyramid Networks (FPN). Experimental results on GaoFen-1 satellite images demonstrate that the proposed method can generate effective adversarial images. The mAP of Faster R-CNN with VGG16 drops significantly from 0.870 to 0.014.","PeriodicalId":279202,"journal":{"name":"Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123186134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
Real World Robustness from Systematic Noise
Yan Wang, Yuhang Li, Ruihao Gong
Systematic error, which is not determined by chance, often refers to the inaccuracy (in either the observation or the measurement process) inherent to a system. In this paper, we exhibit some long-neglected but frequently occurring adversarial examples caused by systematic error. More specifically, we find that a trained neural network classifier can be fooled by inconsistent implementations of image decoding and resizing. These tiny differences between implementations often cause an accuracy drop from training to deployment. To benchmark these real-world adversarial examples, we propose the ImageNet-S dataset, which enables researchers to measure a classifier's robustness to systematic error. For example, we find that a normal ResNet-50 trained on ImageNet can show a 1%-5% accuracy difference due to this systematic error. Together, our evaluation and dataset may aid future work toward real-world robustness and practical generalization.
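The kind of systematic noise described above is easy to reproduce: the sketch below decodes one image and resizes it with two common preprocessing stacks (PIL and OpenCV bilinear), then measures the per-pixel gap. The file path and target size are placeholders.

```python
import numpy as np
import cv2
from PIL import Image

def resize_mismatch(path, size=(224, 224)):
    """Measure the systematic difference between PIL and OpenCV bilinear resizing
    applied to the same decoded image."""
    img = np.array(Image.open(path).convert("RGB"))

    pil_resized = np.array(Image.fromarray(img).resize(size, Image.BILINEAR))
    cv_resized = cv2.resize(img, size, interpolation=cv2.INTER_LINEAR)

    diff = np.abs(pil_resized.astype(np.int16) - cv_resized.astype(np.int16))
    # small per-pixel gaps, yet enough to flip some predictions at deployment
    return diff.mean(), diff.max()
```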
{"title":"Real World Robustness from Systematic Noise","authors":"Yan Wang, Yuhang Li, Ruihao Gong","doi":"10.1145/3475724.3483607","DOIUrl":"https://doi.org/10.1145/3475724.3483607","url":null,"abstract":"Systematic error, which is not determined by chance, often refers to the inaccuracy (involving either the observation or measurement process) inherent to a system. In this paper, we exhibit some long-neglected but frequent-happening adversarial examples caused by systematic error. More specifically, we find the trained neural network classifier can be fooled by inconsistent implementations of image decoding and resize. This tiny difference between these implementations often causes an accuracy drop from training to deployment. To benchmark these real-world adversarial examples, we propose ImageNet-S dataset, which enables researchers to measure a classifier's robustness to systematic error. For example, we find a normal ResNet-50 trained on ImageNet can have 1%$sim$5% accuracy difference due to the systematic error. Together our evaluation and dataset may aid future work toward real-world robustness and practical generalization.","PeriodicalId":279202,"journal":{"name":"Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132650734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
Imperceptible Adversarial Examples by Spatial Chroma-Shift
A. Aydin, Deniz Sen, Berat Tuna Karli, Oguz Hanoglu, A. Temi̇zel
Deep neural networks have been shown to be vulnerable to various kinds of adversarial perturbations. In addition to the widely studied additive-noise-based perturbations, adversarial examples can also be created by applying a per-pixel spatial drift to input images. While spatial-transformation-based adversarial examples look more natural to human observers due to the absence of additive noise, they still possess visible distortions caused by the spatial transformations. Since human vision is more sensitive to distortions in the luminance channel than in the chrominance channels (one of the main ideas behind lossy visual multimedia compression standards), we propose a spatial-transformation-based perturbation method that creates adversarial examples by modifying only the color components of an input image. While achieving competitive fooling rates on the CIFAR-10 and NIPS 2017 Adversarial Learning Challenge datasets, examples created with the proposed method score better on various perceptual quality metrics. Human visual perception studies validate that the examples are more natural-looking and often indistinguishable from their original counterparts.
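A sketch of the chroma-shift idea under stated assumptions: the image is converted to YCbCr with the standard BT.601 matrices, a per-pixel flow (assumed to be given in normalized grid units) warps only the Cb/Cr channels via `grid_sample`, and the luminance channel is left untouched. This models only the perturbation, not the authors' optimization of the flow.

```python
import torch
import torch.nn.functional as F

def chroma_shift(img_rgb, flow):
    """Warp only the chroma (Cb/Cr) channels with a per-pixel spatial flow.

    img_rgb: (B, 3, H, W) tensor in [0, 1]
    flow:    (B, H, W, 2) offsets in normalized [-1, 1] grid coordinates
    """
    B, C, H, W = img_rgb.shape
    r, g, b = img_rgb[:, 0], img_rgb[:, 1], img_rgb[:, 2]

    # RGB -> YCbCr (BT.601, values in [0, 1])
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b

    # identity sampling grid in normalized coordinates, plus the adversarial flow
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H, device=img_rgb.device),
                            torch.linspace(-1, 1, W, device=img_rgb.device),
                            indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).expand(B, H, W, 2) + flow

    chroma = torch.stack([cb, cr], dim=1)                       # (B, 2, H, W)
    chroma_w = F.grid_sample(chroma, grid, mode="bilinear",
                             padding_mode="border", align_corners=True)
    cb_w, cr_w = chroma_w[:, 0], chroma_w[:, 1]

    # YCbCr -> RGB with the warped chroma, untouched luminance
    r2 = y + 1.402 * (cr_w - 0.5)
    g2 = y - 0.344136 * (cb_w - 0.5) - 0.714136 * (cr_w - 0.5)
    b2 = y + 1.772 * (cb_w - 0.5)
    return torch.stack([r2, g2, b2], dim=1).clamp(0, 1)
```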
{"title":"Imperceptible Adversarial Examples by Spatial Chroma-Shift","authors":"A. Aydin, Deniz Sen, Berat Tuna Karli, Oguz Hanoglu, A. Temi̇zel","doi":"10.1145/3475724.3483604","DOIUrl":"https://doi.org/10.1145/3475724.3483604","url":null,"abstract":"Deep Neural Networks have been shown to be vulnerable to various kinds of adversarial perturbations. In addition to widely studied additive noise based perturbations, adversarial examples can also be created by applying a per pixel spatial drift on input images. While spatial transformation based adversarial examples look more natural to human observers due to absence of additive noise, they still possess visible distortions caused by spatial transformations. Since the human vision is more sensitive to the distortions in the luminance compared to those in chrominance channels, which is one of the main ideas behind the lossy visual multimedia compression standards, we propose a spatial transformation based perturbation method to create adversarial examples by only modifying the color components of an input image. While having competitive fooling rates on CIFAR-10 and NIPS2017 Adversarial Learning Challenge datasets, examples created with the proposed method have better scores with regards to various perceptual quality metrics. Human visual perception studies validate that the examples are more natural looking and often indistinguishable from their original counterparts.","PeriodicalId":279202,"journal":{"name":"Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124457345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 12