Deep learning is widely used in many fields, but the emergence of adversarial examples threatens its deployment. Various methods have been proposed to defend against adversarial attacks. However, existing defenses either detect adversarial examples without restoring their original classes, or attempt to recover the classes of adversarial examples without determining whether the input has been perturbed at all. To achieve both detection and correction simultaneously, this paper proposes a heterogeneous model combinatorial defense framework (HMCDF) against adversarial attacks. In particular, we first summarize the fundamental operations, block structures, and compositional patterns that constitute a model, and analyze how these factors influence both its functionality and robustness. Based on these structural differences, models can be divided into isomorphic models and heterogeneous models. We then combine heterogeneous models to construct the defense framework. Within this framework, as long as a majority of models can detect adversarial examples and restore their original labels, the voting mechanism determines whether the input has been perturbed and outputs a legitimate label through collective decision-making. To validate the performance, we conduct extensive experiments on three public datasets: CIFAR-10, SVHN, and Mini-ImageNet. The experimental results show that the proposed method outperforms existing defenses in detecting adversarial examples generated by the considered attack methods and in recovering their original classes.
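The abstract describes a two-part voting mechanism: each heterogeneous model votes on whether the input is perturbed and on its restored label, and the framework adopts the majority verdict. The sketch below illustrates that decision rule only; the function name `hmcdf_vote` and the `(is_adversarial, label)` interface are assumptions for illustration, not the paper's actual implementation.

```python
from collections import Counter

def hmcdf_vote(model_outputs):
    """Combine per-model verdicts by majority vote.

    model_outputs: list of (is_adversarial, restored_label) pairs, one per
    heterogeneous model in the ensemble. Returns (input_flagged, final_label).
    """
    n = len(model_outputs)
    # Flag the input as perturbed if a strict majority of models detect an
    # adversarial example.
    flagged = sum(is_adv for is_adv, _ in model_outputs) > n // 2
    # Output the label most models agree on; when a majority of models restore
    # the original class, this is the legitimate label.
    label_votes = Counter(label for _, label in model_outputs)
    final_label = label_votes.most_common(1)[0][0]
    return flagged, final_label

# Two of three models detect the perturbation and restore class 3.
print(hmcdf_vote([(True, 3), (True, 3), (False, 5)]))  # (True, 3)
```

A strict majority (`> n // 2`) matches the abstract's condition that the framework succeeds whenever most models detect and correct the input; ties on an even-sized ensemble default to "not perturbed" under this choice.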
{"title":"Heterogeneous Model Combinatorial Defense Framework (HMCDF) for Adversarial Attacks","authors":"Yiqin Lu, Xiong Shen, Zhe Cheng, Zhongshu Mao, Yang Zhang, Jiancheng Qin","doi":"10.1155/int/7868904","DOIUrl":"https://doi.org/10.1155/int/7868904","url":null,"abstract":"<p>Deep learning is widely used in many fields, but the emergence of adversarial examples threatens the application of deep learning. Various methods have been proposed to defend against adversarial attacks. However, existing defense methods either can only detect adversarial examples without restoring their original classes or merely focus on verifying the input category and attempting to recover the classes of adversarial examples while lacking awareness of whether the input has been perturbed. To develop defense approaches that simultaneously achieve both detection and correction capabilities, a heterogeneous model combinatorial defense framework (HMCDF) is proposed for adversarial attacks in this paper. In particular, we first summarize the fundamental operations, block structures, and compositional patterns that constitute the model, while analyzing how these factors influence both the functionality and robustness of the model. According to the differences in the structure of the models, the models can be divided into isomorphic models and heterogeneous models. Then, we combine heterogeneous models to construct a heterogeneous model defense framework. Within this framework, as long as a majority of models can detect adversarial examples and restore their original labels, the voting mechanism used in the framework can determine whether the input has been perturbed, ultimately outputting legitimate labels through collective decision-making. To validate the performance, we conduct extensive experiments on three public datasets: CIFAR-10, SVHN, and Mini-ImageNet. 
After sufficient analysis of the simulation results, we find that our proposed method outperforms the others for the detection of adversarial attacks generated by the considered attack methods and can recover the classes of the adversarial examples.</p>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2025-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/7868904","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145909204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}