Artemis: Efficient Commit-and-Prove SNARKs for zkML

Hidde Lycklama, Alexander Viand, Nikolay Avramov, Nicolas Küchler, Anwar Hithnawi
{"title":"Artemis: Efficient Commit-and-Prove SNARKs for zkML","authors":"Hidde Lycklama, Alexander Viand, Nikolay Avramov, Nicolas Küchler, Anwar Hithnawi","doi":"arxiv-2409.12055","DOIUrl":null,"url":null,"abstract":"The widespread adoption of machine learning (ML) in various critical\napplications, from healthcare to autonomous systems, has raised significant\nconcerns about privacy, accountability, and trustworthiness. To address these\nconcerns, recent research has focused on developing zero-knowledge machine\nlearning (zkML) techniques that enable the verification of various aspects of\nML models without revealing sensitive information. Recent advances in zkML have\nsubstantially improved efficiency; however, these efforts have primarily\noptimized the process of proving ML computations correct, often overlooking the\nsubstantial overhead associated with verifying the necessary commitments to the\nmodel and data. To address this gap, this paper introduces two new\nCommit-and-Prove SNARK (CP-SNARK) constructions (Apollo and Artemis) that\neffectively address the emerging challenge of commitment verification in zkML\npipelines. Apollo operates on KZG commitments and requires white-box use of the\nunderlying proof system, whereas Artemis is compatible with any homomorphic\npolynomial commitment and only makes black-box use of the proof system. As a\nresult, Artemis is compatible with state-of-the-art proof systems without\ntrusted setup. We present the first implementation of these CP-SNARKs, evaluate\ntheir performance on a diverse set of ML models, and show substantial\nimprovements over existing methods, achieving significant reductions in prover\ncosts and maintaining efficiency even for large-scale models. For example, for\nthe VGG model, we reduce the overhead associated with commitment checks from\n11.5x to 1.2x. Our results suggest that these contributions can move zkML\ntowards practical deployment, particularly in scenarios involving large and\ncomplex ML models.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"18 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Cryptography and Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.12055","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The widespread adoption of machine learning (ML) in various critical applications, from healthcare to autonomous systems, has raised significant concerns about privacy, accountability, and trustworthiness. To address these concerns, recent research has focused on developing zero-knowledge machine learning (zkML) techniques that enable the verification of various aspects of ML models without revealing sensitive information. Recent advances in zkML have substantially improved efficiency; however, these efforts have primarily optimized the process of proving ML computations correct, often overlooking the substantial overhead associated with verifying the necessary commitments to the model and data. To address this gap, this paper introduces two new Commit-and-Prove SNARK (CP-SNARK) constructions (Apollo and Artemis) that effectively address the emerging challenge of commitment verification in zkML pipelines. Apollo operates on KZG commitments and requires white-box use of the underlying proof system, whereas Artemis is compatible with any homomorphic polynomial commitment and only makes black-box use of the proof system. As a result, Artemis is compatible with state-of-the-art proof systems without trusted setup. We present the first implementation of these CP-SNARKs, evaluate their performance on a diverse set of ML models, and show substantial improvements over existing methods, achieving significant reductions in prover costs and maintaining efficiency even for large-scale models. For example, for the VGG model, we reduce the overhead associated with commitment checks from 11.5x to 1.2x. Our results suggest that these contributions can move zkML towards practical deployment, particularly in scenarios involving large and complex ML models.
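
To make the problem the paper targets concrete: a commit-and-prove SNARK shows, in zero knowledge, that a committed witness satisfies a relation. As a minimal sketch (the notation here is illustrative, not taken from the paper), for a relation R encoding the ML computation, public input x (say, the model's output), witness w (say, the model weights), and commitment c, the prover demonstrates

\[ \exists\, w, r : \quad c = \mathrm{Com}(w; r) \;\wedge\; (x, w) \in R . \]

Proving the link c = Com(w; r) inside the SNARK circuit is the commitment-verification overhead the abstract describes. The homomorphic property can be illustrated with KZG commitments, where \( \mathrm{Com}(p) = g^{p(\tau)} \) for a secret evaluation point \( \tau \): for polynomials p and q,

\[ \mathrm{Com}(p) \cdot \mathrm{Com}(q) \;=\; g^{p(\tau)} \cdot g^{q(\tau)} \;=\; g^{(p+q)(\tau)} \;=\; \mathrm{Com}(p + q), \]

so linear relations between committed values can be checked directly on the commitments rather than re-proved in-circuit. (KZG, which requires a trusted setup, is the commitment Apollo operates on; Artemis only needs this additive structure, which is what lets it pair with state-of-the-art proof systems without trusted setup.)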