Generative Collision Attack on Deep Image Hashing
Authors: Luyang Ying; Cheng Xiong; Chuan Qin; Xiangyang Luo; Zhenxing Qian; Xinpeng Zhang
DOI: 10.1109/TIFS.2025.3547566
Journal: IEEE Transactions on Information Forensics and Security, vol. 20, pp. 2748-2762 (JCR Q1, Computer Science, Theory & Methods)
Published: 2025-03-04
URL: https://ieeexplore.ieee.org/document/10909298/
Citations: 0
Abstract
Due to the powerful feature extraction capabilities of deep neural networks (DNNs), deep image hashing has extensive applications in fields such as image authentication, copy detection, and content retrieval, making its security a critical concern. Among various security metrics, collision resistance serves as a crucial indicator for deep image hashing methods. Research on collision attacks not only reveals the potential vulnerabilities of deep image hashing but can also promote the development of more robust and secure hashing methods. In this paper, we propose a novel generative collision attack scheme, which achieves several advantages over existing attack schemes based on adversarial examples. Our scheme requires no additional perturbations added to the image, and can simultaneously generate multiple hash collision images of different classes specified by the attacker. To the best of our knowledge, this is the first generative collision attack scheme effective across various deep image hashing methods. Specifically, our attack framework consists of three parts: a Hash-to-Noise Network (HTNN), a pretrained BigGAN generator, and a conditional discriminator. The designed HTNN embeds the hash code of the target image and the attacker-specified generation class information into a "noise" vector. By optimizing various hash distance loss functions between the generated and target images, this "noise" guides the generator to directly generate images that meet the collision requirement. At the same time, the discriminator ensures that the generated images are visually realistic. Extensive experimental results verify that our scheme can effectively generate multiple high-quality images with attacker-specified classes, achieving a high hash-collision success rate and applicability across state-of-the-art deep hashing methods.
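The abstract's core idea is to optimize a hash distance loss between the generated image's code and the target code. As a hedged illustration only (the paper's actual loss functions and networks are not given here), the sketch below shows a common differentiable surrogate for Hamming distance used in deep-hashing work, where binary codes live in {-1, +1}^K and the network's continuous output is compared to the target code via an inner product:

```python
import numpy as np

# Hypothetical sketch of a relaxed Hamming-distance objective often used in
# deep-hashing attacks; the paper's actual hash distance losses may differ.

def relaxed_hamming(h_gen, h_target):
    """Differentiable surrogate: (K - <h_gen, h_target>) / 2.

    h_target is a binary code in {-1, +1}^K; h_gen is the hashing network's
    continuous (e.g. tanh) output for the generated image. The surrogate
    equals the true Hamming distance whenever h_gen is also binary.
    """
    K = len(h_target)
    return 0.5 * (K - float(np.dot(h_gen, h_target)))

rng = np.random.default_rng(0)
K = 64
h_target = np.sign(rng.standard_normal(K))  # stand-in for the target image's hash code

# A perfectly colliding code has distance 0; the fully opposite code has distance K.
print(relaxed_hamming(h_target, h_target))   # -> 0.0
print(relaxed_hamming(-h_target, h_target))  # -> 64.0
```

Minimizing such a surrogate over the generator's input "noise" is what lets gradient-based optimization steer the generated image toward a hash collision with the target, while a discriminator term keeps the image realistic.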
About the journal:
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.