{"title":"无需旋转的高效同态加密卷积神经网络","authors":"Sajjad Akherati, Xinmiao Zhang","doi":"arxiv-2409.05205","DOIUrl":null,"url":null,"abstract":"Privacy-preserving neural network (NN) inference can be achieved by utilizing\nhomomorphic encryption (HE), which allows computations to be directly carried\nout over ciphertexts. Popular HE schemes are built over large polynomial rings.\nTo allow simultaneous multiplications in the convolutional (Conv) and\nfully-connected (FC) layers, multiple input data are mapped to coefficients in\nthe same polynomial, so are the weights of NNs. However, ciphertext rotations\nare necessary to compute the sums of products and/or incorporate the outputs of\ndifferent channels into the same polynomials. Ciphertext rotations have much\nhigher complexity than ciphertext multiplications and contribute to the\nmajority of the latency of HE-evaluated Conv and FC layers. This paper proposes\na novel reformulated server-client joint computation procedure and a new filter\ncoefficient packing scheme to eliminate ciphertext rotations without affecting\nthe security of the HE scheme. Our proposed scheme also leads to substantial\nreductions on the number of coefficient multiplications needed and the\ncommunication cost between the server and client. 
For various plain-20\nclassifiers over the CIFAR-10/100 datasets, our design reduces the running time\nof the Conv and FC layers by 15.5% and the communication cost between client\nand server by more than 50%, compared to the best prior design.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"11 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Efficient Homomorphically Encrypted Convolutional Neural Network Without Rotation\",\"authors\":\"Sajjad Akherati, Xinmiao Zhang\",\"doi\":\"arxiv-2409.05205\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Privacy-preserving neural network (NN) inference can be achieved by utilizing\\nhomomorphic encryption (HE), which allows computations to be directly carried\\nout over ciphertexts. Popular HE schemes are built over large polynomial rings.\\nTo allow simultaneous multiplications in the convolutional (Conv) and\\nfully-connected (FC) layers, multiple input data are mapped to coefficients in\\nthe same polynomial, so are the weights of NNs. However, ciphertext rotations\\nare necessary to compute the sums of products and/or incorporate the outputs of\\ndifferent channels into the same polynomials. Ciphertext rotations have much\\nhigher complexity than ciphertext multiplications and contribute to the\\nmajority of the latency of HE-evaluated Conv and FC layers. This paper proposes\\na novel reformulated server-client joint computation procedure and a new filter\\ncoefficient packing scheme to eliminate ciphertext rotations without affecting\\nthe security of the HE scheme. Our proposed scheme also leads to substantial\\nreductions on the number of coefficient multiplications needed and the\\ncommunication cost between the server and client. 
For various plain-20\\nclassifiers over the CIFAR-10/100 datasets, our design reduces the running time\\nof the Conv and FC layers by 15.5% and the communication cost between client\\nand server by more than 50%, compared to the best prior design.\",\"PeriodicalId\":501332,\"journal\":{\"name\":\"arXiv - CS - Cryptography and Security\",\"volume\":\"11 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Cryptography and Security\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.05205\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Cryptography and Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.05205","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
摘要
利用同构加密(HE)可以实现保护隐私的神经网络(NN)推理,它允许直接在密码文本上进行计算。为了允许卷积层(Conv)和全连接层(FC)同时进行乘法运算,多个输入数据被映射为同一个多项式的系数,神经网络的权重也是如此。但是,为了计算乘积之和和/或将不同通道的输出纳入相同的多项式,需要对密文进行旋转。密文旋转的复杂度远高于密文乘法,是造成 HE 评估 Conv 和 FC 层延迟的主要原因。本文提出了一种新颖的重构服务器-客户端联合计算程序和一种新的滤波器高效打包方案,在不影响 HE 方案安全性的情况下消除了密文旋转。我们提出的方案还大大减少了所需的系数乘法次数以及服务器和客户端之间的通信成本。对于 CIFAR-10/100 数据集上的各种普通 20 分类器,与之前的最佳设计相比,我们的设计将 Conv 层和 FC 层的运行时间减少了 15.5%,将客户端与服务器之间的通信成本减少了 50%以上。
Efficient Homomorphically Encrypted Convolutional Neural Network Without Rotation
Privacy-preserving neural network (NN) inference can be achieved by utilizing homomorphic encryption (HE), which allows computations to be carried out directly over ciphertexts. Popular HE schemes are built over large polynomial rings. To allow simultaneous multiplications in the convolutional (Conv) and fully-connected (FC) layers, multiple input data are mapped to coefficients of the same polynomial, as are the NN weights. However, ciphertext rotations are necessary to compute the sums of products and/or to incorporate the outputs of different channels into the same polynomials. Ciphertext rotations have much higher complexity than ciphertext multiplications and account for the majority of the latency of HE-evaluated Conv and FC layers. This paper proposes a novel reformulated server-client joint computation procedure and a new filter-coefficient packing scheme that eliminate ciphertext rotations without affecting the security of the HE scheme. The proposed scheme also leads to substantial reductions in the number of coefficient multiplications needed and in the communication cost between the server and client. For various plain-20 classifiers over the CIFAR-10/100 datasets, our design reduces the running time of the Conv and FC layers by 15.5% and the communication cost between client and server by more than 50%, compared to the best prior design.
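To see why rotations dominate the cost the abstract describes, consider the standard rotate-and-sum pattern used when a dot product is evaluated over packed slots: after the cheap element-wise multiplication, log2(n) rotations are needed to accumulate the products. The sketch below is a toy plaintext simulation only (real HE rotations are Galois automorphisms on ciphertexts; `np.roll` merely stands in for them to show the operation count), and the function names are illustrative, not from the paper:

```python
import numpy as np

def rotate(v, k):
    # Stand-in for a homomorphic rotation -- the expensive operation
    # whose elimination is the point of the paper's packing scheme.
    return np.roll(v, -k)

def packed_dot(x, w):
    # Element-wise multiply (cheap in HE), then rotate-and-sum:
    # log2(n) rotations fold the products into every slot.
    n = len(x)
    acc = x * w
    k = 1
    rotations = 0
    while k < n:
        acc = acc + rotate(acc, k)
        rotations += 1
        k *= 2
    # Slot 0 now holds the full inner product.
    return acc[0], rotations

x = np.arange(8, dtype=float)   # packed inputs
w = np.ones(8)                  # packed filter coefficients
s, r = packed_dot(x, w)
# s == 28.0 (sum of 0..7), reached after r == 3 rotations for n = 8
```

In a rotation-based HE evaluation, each of those log2(n) steps is a key-switched ciphertext operation far costlier than the single multiplication, which is why removing them, as the proposed joint computation procedure does, cuts the Conv/FC latency.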