Artemis: Efficient Commit-and-Prove SNARKs for zkML

Hidde Lycklama, Alexander Viand, Nikolay Avramov, Nicolas Küchler, Anwar Hithnawi

arXiv - CS - Cryptography and Security, published 2024-09-18
DOI: https://doi.org/arxiv-2409.12055
Abstract
The widespread adoption of machine learning (ML) in various critical
applications, from healthcare to autonomous systems, has raised significant
concerns about privacy, accountability, and trustworthiness. To address these
concerns, recent research has focused on developing zero-knowledge machine
learning (zkML) techniques that enable the verification of various aspects of
ML models without revealing sensitive information. Recent advances in zkML have
substantially improved efficiency; however, these efforts have primarily
optimized the process of proving ML computations correct, often overlooking the
substantial overhead associated with verifying the necessary commitments to the
model and data. To address this gap, this paper introduces two new
Commit-and-Prove SNARK (CP-SNARK) constructions (Apollo and Artemis) that
effectively address the emerging challenge of commitment verification in zkML
pipelines. Apollo operates on KZG commitments and requires white-box use of the
underlying proof system, whereas Artemis is compatible with any homomorphic
polynomial commitment and only makes black-box use of the proof system. As a
result, Artemis is compatible with state-of-the-art proof systems without
trusted setup. We present the first implementation of these CP-SNARKs, evaluate
their performance on a diverse set of ML models, and show substantial
improvements over existing methods, achieving significant reductions in prover
costs and maintaining efficiency even for large-scale models. For example, for
the VGG model, we reduce the overhead associated with commitment checks from
11.5x to 1.2x. Our results suggest that these contributions can move zkML
towards practical deployment, particularly in scenarios involving large and
complex ML models.
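The abstract notes that Artemis works with any *homomorphic* polynomial commitment, which is what allows it to treat the proof system as a black box: commitments to model and data can be combined algebraically without reopening them. As a hedged illustration of that homomorphic property only, the toy sketch below uses a Pedersen-style commitment over a small multiplicative group. This is not the KZG or polynomial commitment scheme from the paper, and the parameters (prime `Q`, generators `G`, `H`) are illustrative, not cryptographically secure.

```python
# Toy Pedersen-style commitment: Com(m, r) = g^m * h^r (mod q).
# Illustrates the homomorphic property that CP-SNARK constructions
# like Artemis exploit; parameters are illustrative, NOT secure.

Q = 1_000_003  # small example prime modulus (a real scheme uses a large group)
G = 5          # example generator
H = 7          # second example generator (discrete log w.r.t. G unknown in a real scheme)

def commit(m: int, r: int) -> int:
    """Commit to message m with blinding randomness r."""
    return (pow(G, m, Q) * pow(H, r, Q)) % Q

# Homomorphic property: the product of two commitments is a commitment
# to the sum of the messages (with summed randomness):
#   Com(m1, r1) * Com(m2, r2) == Com(m1 + m2, r1 + r2)
c1 = commit(3, 5)
c2 = commit(4, 7)
combined = (c1 * c2) % Q
```

In a CP-SNARK setting, this kind of algebraic malleability is what lets a prover link a previously published commitment to the witness inside a proof without white-box access to the proof system's internals.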