Hard-Label Cryptanalytic Extraction of Neural Network Models

Yi Chen, Xiaoyang Dong, Jian Guo, Yantian Shen, Anyu Wang, Xiaoyun Wang

arXiv - CS - Cryptography and Security (arXiv:2409.11646), published 2024-09-18. Citations: 0.
Abstract
The machine learning problem of extracting neural network parameters was proposed nearly three decades ago, and functionally equivalent extraction is a crucial goal of research on this problem. When the adversary has access to the raw outputs of a neural network, various attacks, including those presented at CRYPTO 2020 and EUROCRYPT 2024, have successfully achieved this goal. However, the goal remains out of reach when the network operates in a hard-label setting, where the raw outputs are inaccessible. In this paper, we propose the first attack that theoretically achieves functionally equivalent extraction in the hard-label setting, applicable to ReLU neural networks. We validate the attack's effectiveness through practical experiments on a wide range of ReLU neural networks, including networks trained on two real benchmark datasets (MNIST, CIFAR-10) widely used in computer vision. For a neural network with $10^5$ parameters, our attack requires only several hours on a single core.
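To make the distinction between the two threat models concrete, the following is a minimal sketch (not from the paper; all parameters and names are illustrative) of a one-hidden-layer ReLU network queried through a hard-label oracle. Attacks such as those at CRYPTO 2020 assume access to `raw_output`; in the hard-label setting studied here, the adversary observes only the arg-max class index.

```python
import numpy as np

rng = np.random.default_rng(0)

# Secret model parameters the adversary wants to extract
# (hypothetical toy sizes: 2 inputs, 4 hidden ReLU units, 3 classes).
W1 = rng.standard_normal((4, 2))
b1 = rng.standard_normal(4)
W2 = rng.standard_normal((3, 4))
b2 = rng.standard_normal(3)

def raw_output(x):
    """Full logits -- accessible only in the soft-label (raw-output) setting."""
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU activation
    return W2 @ h + b2

def hard_label_oracle(x):
    """What the hard-label adversary actually observes: only the class index."""
    return int(np.argmax(raw_output(x)))

x = np.array([0.5, -1.0])
label = hard_label_oracle(x)  # a single class index; the logits stay hidden
```

The extraction problem is to recover parameters functionally equivalent to `W1, b1, W2, b2` using only queries to `hard_label_oracle`, which is what makes the hard-label setting strictly harder than the raw-output one.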