{"title":"Local imperceptible adversarial attacks against human pose estimation networks.","authors":"Fuchang Liu, Shen Zhang, Hao Wang, Caiping Yan, Yongwei Miao","doi":"10.1186/s42492-023-00148-1","DOIUrl":null,"url":null,"abstract":"<p><p>Deep neural networks are vulnerable to attacks from adversarial inputs. Corresponding attack research on human pose estimation (HPE), particularly for body joint detection, has been largely unexplored. Transferring classification-based attack methods to body joint regression tasks is not straightforward. Another issue is that the attack effectiveness and imperceptibility contradict each other. To solve these issues, we propose local imperceptible attacks on HPE networks. In particular, we reformulate imperceptible attacks on body joint regression into a constrained maximum allowable attack. Furthermore, we approximate the solution using iterative gradient-based strength refinement and greedy-based pixel selection. Our method crafts effective perceptual adversarial attacks that consider both human perception and attack effectiveness. We conducted a series of imperceptible attacks against state-of-the-art HPE methods, including HigherHRNet, DEKR, and ViTPose. The experimental results demonstrate that the proposed method achieves excellent imperceptibility while maintaining attack effectiveness by significantly reducing the number of perturbed pixels. Approximately 4% of the pixels can achieve sufficient attacks on HPE.</p>","PeriodicalId":3,"journal":{"name":"ACS Applied Electronic Materials","volume":null,"pages":null},"PeriodicalIF":4.3000,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10661673/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Electronic Materials","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1186/s42492-023-00148-1","RegionNum":3,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Deep neural networks are vulnerable to adversarial inputs, yet adversarial attacks on human pose estimation (HPE), particularly on body joint detection, remain largely unexplored. Transferring classification-based attack methods to body joint regression tasks is not straightforward, and attack effectiveness and imperceptibility conflict with each other. To address these issues, we propose local imperceptible attacks on HPE networks. In particular, we reformulate imperceptible attacks on body joint regression as a constrained maximum-allowable-attack problem and approximate its solution using iterative gradient-based strength refinement and greedy pixel selection. Our method crafts effective perceptual adversarial attacks that account for both human perception and attack effectiveness. We conducted a series of imperceptible attacks against state-of-the-art HPE methods, including HigherHRNet, DEKR, and ViTPose. The experimental results demonstrate that the proposed method achieves excellent imperceptibility while maintaining attack effectiveness by significantly reducing the number of perturbed pixels: perturbing approximately 4% of the pixels is sufficient to attack HPE.
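To make the general idea concrete, the following is a minimal PyTorch sketch of a sparse, iterative gradient-based attack on a heatmap-style HPE model. It is not the paper's exact formulation: the function name `sparse_pose_attack`, the L-infinity budget `eps`, the negative-MSE objective against the clean heatmaps, and the one-shot top-k pixel-selection heuristic are all illustrative assumptions standing in for the paper's constrained maximum-allowable attack and its greedy selection procedure.

```python
# Illustrative sketch (not the paper's algorithm): a sparse, iterative
# gradient-based attack on a heatmap-based HPE network. A small set of
# pixels is selected greedily by gradient magnitude, and only those
# pixels are perturbed within an L_inf budget.
import torch
import torch.nn.functional as F


def sparse_pose_attack(model, image, pixel_budget=0.04, steps=40,
                       step_size=2 / 255, eps=8 / 255):
    """
    model: maps a (1, 3, H, W) image in [0, 1] to joint heatmaps (1, K, h, w).
    pixel_budget: fraction of spatial locations allowed to change (~4%).
    eps: per-pixel L_inf bound on the perturbation strength.
    """
    model.eval()
    image = image.clone().detach()
    with torch.no_grad():
        clean_heatmaps = model(image)          # reference predictions

    delta = torch.zeros_like(image, requires_grad=True)
    _, _, H, W = image.shape
    k = max(1, int(pixel_budget * H * W))      # number of attackable pixels
    mask = None

    for _ in range(steps):
        adv = (image + delta).clamp(0, 1)
        # Push the predicted heatmaps away from the clean predictions
        # (descending on -MSE maximizes the deviation).
        loss = -F.mse_loss(model(adv), clean_heatmaps)
        loss.backward()
        grad = delta.grad.detach()

        if mask is None:
            # Greedy pixel selection (simplified here to a single pass):
            # keep the k spatial locations with the largest gradient
            # magnitude, summed over the color channels.
            saliency = grad.abs().sum(dim=1, keepdim=True)   # (1, 1, H, W)
            idx = saliency.view(-1).topk(k).indices
            mask = torch.zeros(H * W, device=image.device)
            mask[idx] = 1.0
            mask = mask.view(1, 1, H, W)

        with torch.no_grad():
            # Iterative strength refinement on the selected pixels only,
            # projected back into the L_inf ball of radius eps.
            delta -= step_size * grad.sign() * mask
            delta.clamp_(-eps, eps)
        delta.grad.zero_()

    return (image + delta).clamp(0, 1).detach()
```

Fixing a sparse pixel mask is what keeps the perturbation local (here roughly 4% of spatial locations), which is the property the abstract ties to imperceptibility; the iterative sign-gradient updates refine the perturbation strength on just those pixels.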