Pub Date: 2024-03-06 · DOI: 10.1109/JSAIT.2024.3397187
Mark Beliaev;Payam Delgosha;Hamed Hassani;Ramtin Pedarsani
Over the past two decades, the adoption of neural networks has surged in parallel with their performance. Concurrently, we have observed the inherent fragility of these prediction models: small changes to the inputs can induce classification errors across entire datasets. In this study, we examine perturbations constrained by the $\ell_{0}$
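As a minimal illustration of the $\ell_{0}$ constraint mentioned above (the abstract is truncated here), the $\ell_{0}$ "norm" of a perturbation counts how many input coordinates are changed, so an $\ell_{0}$ budget limits an adversary to modifying a fixed number of entries. The variable names and budget `k` below are hypothetical, not from the paper:

```python
import numpy as np

def l0_norm(delta):
    """Number of nonzero entries in the perturbation (the l0 'norm')."""
    return int(np.count_nonzero(delta))

# Hypothetical example: a sparse perturbation of an input x that
# alters exactly two coordinates, within an l0 budget of k = 3.
rng = np.random.default_rng(0)
x = rng.normal(size=10)
delta = np.zeros_like(x)
delta[[2, 7]] = 1.0          # change only coordinates 2 and 7
x_adv = x + delta

print(l0_norm(delta))        # 2
print(l0_norm(delta) <= 3)   # True: within the budget
```

Note that, unlike $\ell_{2}$ or $\ell_{\infty}$ constraints, the $\ell_{0}$ constraint does not bound the magnitude of each change, only how many entries differ.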