Transfer-based attacks generate adversarial examples on a surrogate model and exploit the intriguing property of transferability to deceive other, unknown models, making them practical for real-world scenarios. Recent research has sought to optimize the loss surface by minimizing its maximum loss, which in practice cannot be computed exactly and is instead approximated through gradient ascent. However, the loss landscape becomes increasingly non-linear during later attack stages, making gradient ascent less effective. To address this challenge, we propose a novel attack called Curvature-Aware Penalization (CAP), which incorporates the gradient norm and a curvature-aware term as regularization terms to maintain the flatness of the loss surface. Since directly computing the Hessian matrix is computationally expensive, we use the finite difference method to reduce computational complexity. Specifically, we randomly sample an example from the neighborhood and interpolate gradients at three neighboring points along the example's gradient direction to approximate the Hessian. Additionally, to reduce the variance caused by random sampling, the combined gradients are averaged over multiple stochastic samples. Comprehensive experimental results demonstrate that CAP not only crafts adversarial examples with enhanced transferability across various network architectures but also exhibits stronger resistance to state-of-the-art adversarial defense methods. Code is available at https://github.com/PC614/CAP.
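A minimal NumPy sketch of the finite-difference curvature approximation described above, under stated assumptions: the toy loss, function names, and default parameters (`delta`, `num_samples`, `radius`) are illustrative placeholders, not the paper's implementation, which would use backpropagation through a surrogate network.

```python
import numpy as np

# Toy differentiable loss and its analytic gradient (stand-ins for the
# surrogate model's loss and backprop gradient).
def loss(x):
    return np.sum(np.sin(x) + 0.1 * x**2)

def grad(x):
    return np.cos(x) + 0.2 * x

def curvature_penalty(x, delta=1e-3, num_samples=4, radius=0.1, seed=0):
    """Sketch of a curvature-aware term via finite differences.

    For each randomly sampled neighbor x', take the gradient direction d
    at x', then approximate the Hessian-vector product H d with the
    central difference (g(x' + delta*d) - g(x' - delta*d)) / (2*delta),
    using gradients at three points (x', x'+delta*d, x'-delta*d) and
    never forming the Hessian explicitly. Averaging over several random
    samples reduces the variance introduced by the sampling.
    """
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(x)
    for _ in range(num_samples):
        # Random neighbor of the current adversarial example.
        x_n = x + rng.uniform(-radius, radius, size=x.shape)
        g = grad(x_n)
        d = g / (np.linalg.norm(g) + 1e-12)  # unit gradient direction
        # Central-difference Hessian-vector product along d.
        hvp = (grad(x_n + delta * d) - grad(x_n - delta * d)) / (2 * delta)
        acc += hvp
    return acc / num_samples
```

The central difference trades one Hessian computation for two extra gradient evaluations per sample, which is the usual cost argument for finite-difference curvature estimates.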