PSAQ-ViT V2: Toward Accurate and General Data-Free Quantization for Vision Transformers.

IF 10.2 · CAS Region 1 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence) · IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2023-08-14 · DOI: 10.1109/TNNLS.2023.3301007
Zhikai Li, Mengjuan Chen, Junrui Xiao, Qingyi Gu
{"title":"PSAQ-ViT V2: Toward Accurate and General Data-Free Quantization for Vision Transformers.","authors":"Zhikai Li, Mengjuan Chen, Junrui Xiao, Qingyi Gu","doi":"10.1109/TNNLS.2023.3301007","DOIUrl":null,"url":null,"abstract":"<p><p>Data-free quantization can potentially address data privacy and security concerns in model compression and thus has been widely investigated. Recently, patch similarity aware data-free quantization for vision transformers (PSAQ-ViT) designs a relative value metric, patch similarity, to generate data from pretrained vision transformers (ViTs), achieving the first attempt at data-free quantization for ViTs. In this article, we propose PSAQ-ViT V2, a more accurate and general data-free quantization framework for ViTs, built on top of PSAQ-ViT. More specifically, following the patch similarity metric in PSAQ-ViT, we introduce an adaptive teacher-student strategy, which facilitates the constant cyclic evolution of the generated samples and the quantized model in a competitive and interactive fashion under the supervision of the full-precision (FP) model (teacher), thus significantly improving the accuracy of the quantized model. Moreover, without the auxiliary category guidance, we employ the task-and model-independent prior information, making the general-purpose scheme compatible with a broad range of vision tasks and models. Extensive experiments are conducted on various models on image classification, object detection, and semantic segmentation tasks, and PSAQ-ViT V2, with the naive quantization strategy and without access to real-world data, consistently achieves competitive results, showing potential as a powerful baseline on data-free quantization for ViTs. For instance, with Swin-S as the (backbone) model, 8-bit quantization reaches 82.13 top-1 accuracy on ImageNet, 50.9 box AP and 44.1 mask AP on COCO, and 47.2 mean Intersection over Union (mIoU) on ADE20K. We hope that accurate and general PSAQ-ViT V2 can serve as a potential and practice solution in real-world applications involving sensitive data. Code is released and merged at: https://github.com/zkkli/PSAQ-ViT.</p>","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"PP ","pages":""},"PeriodicalIF":10.2000,"publicationDate":"2023-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/TNNLS.2023.3301007","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Data-free quantization can potentially address data privacy and security concerns in model compression and has therefore been widely investigated. Recently, patch similarity aware data-free quantization for vision transformers (PSAQ-ViT) designed a relative value metric, patch similarity, to generate data from pretrained vision transformers (ViTs), representing the first attempt at data-free quantization for ViTs. In this article, we propose PSAQ-ViT V2, a more accurate and general data-free quantization framework for ViTs, built on top of PSAQ-ViT. More specifically, following the patch similarity metric in PSAQ-ViT, we introduce an adaptive teacher-student strategy, which facilitates the constant cyclic evolution of the generated samples and the quantized model in a competitive and interactive fashion under the supervision of the full-precision (FP) model (teacher), thus significantly improving the accuracy of the quantized model. Moreover, without auxiliary category guidance, we employ task- and model-independent prior information, making the general-purpose scheme compatible with a broad range of vision tasks and models. Extensive experiments are conducted on various models across image classification, object detection, and semantic segmentation tasks, and PSAQ-ViT V2, with a naive quantization strategy and without access to real-world data, consistently achieves competitive results, showing potential as a powerful baseline for data-free quantization of ViTs. For instance, with Swin-S as the (backbone) model, 8-bit quantization reaches 82.13% top-1 accuracy on ImageNet, 50.9 box AP and 44.1 mask AP on COCO, and 47.2 mean Intersection over Union (mIoU) on ADE20K. We hope that the accurate and general PSAQ-ViT V2 can serve as a potential and practical solution in real-world applications involving sensitive data. Code is released and merged at: https://github.com/zkkli/PSAQ-ViT.
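The abstract describes the adaptive teacher-student strategy only in prose: synthetic samples and the quantized model evolve cyclically under the supervision of the frozen FP teacher, with a patch-similarity objective driving sample generation. The following is a minimal, hypothetical PyTorch sketch of such an alternating cycle. It is not the paper's implementation: `ToyViT`, `patch_similarity_entropy`, `one_cycle`, and all hyperparameters are illustrative placeholders, and the entropy surrogate here only stands in for the patch-similarity metric of PSAQ-ViT; the actual code is available at the GitHub link above.

```python
# Hypothetical sketch of the alternating "evolve samples, then distill the
# quantized student" cycle described in the abstract. NOT the official
# PSAQ-ViT V2 code: ToyViT and the entropy surrogate are placeholders.
import torch
import torch.nn.functional as F


class ToyViT(torch.nn.Module):
    """Stand-in for a ViT: a patch embedding plus a classifier head."""

    def __init__(self, patch_dim: int = 48, dim: int = 32, num_classes: int = 10):
        super().__init__()
        self.embed = torch.nn.Linear(patch_dim, dim)
        self.head = torch.nn.Linear(dim, num_classes)

    def forward(self, x):  # x: (batch, num_patches, patch_dim)
        patches = self.embed(x)                      # per-patch features
        logits = self.head(patches.mean(dim=1))      # mean-pool, then classify
        return logits, patches


def patch_similarity_entropy(patch_feats: torch.Tensor) -> torch.Tensor:
    """Differentiable entropy over pairwise cosine similarities of patch features."""
    feats = F.normalize(patch_feats, dim=-1)
    sim = feats @ feats.transpose(1, 2)              # (B, N, N) similarity matrix
    dist = F.softmax(sim.flatten(1), dim=1)          # soft distribution over similarities
    return -(dist * (dist + 1e-8).log()).sum(dim=1).mean()


def one_cycle(teacher, student, images, img_opt, stu_opt):
    """One evolution cycle: update the synthetic samples, then the quantized student."""
    # Step 1: evolve generated samples against the frozen FP teacher so that
    # their patch-similarity entropy (a diversity proxy) increases.
    img_opt.zero_grad()
    _, patches_t = teacher(images)
    gen_loss = -patch_similarity_entropy(patches_t)
    gen_loss.backward()
    img_opt.step()

    # Step 2: distill the quantized student toward the teacher on those samples.
    stu_opt.zero_grad()
    with torch.no_grad():
        logits_t, _ = teacher(images.detach())
    logits_s, _ = student(images.detach())
    kd_loss = F.kl_div(F.log_softmax(logits_s, dim=1),
                       F.softmax(logits_t, dim=1), reduction="batchmean")
    kd_loss.backward()
    stu_opt.step()
    return gen_loss.item(), kd_loss.item()


if __name__ == "__main__":
    teacher, student = ToyViT(), ToyViT()            # student stands in for the quantized model
    for p in teacher.parameters():
        p.requires_grad_(False)                      # the FP teacher stays fixed

    images = torch.randn(8, 16, 48, requires_grad=True)   # synthetic "patchified" inputs
    img_opt = torch.optim.Adam([images], lr=5e-2)
    stu_opt = torch.optim.Adam(student.parameters(), lr=1e-4)

    for step in range(20):
        g, k = one_cycle(teacher, student, images, img_opt, stu_opt)
        if step % 5 == 0:
            print(f"step {step}: gen_loss={g:.4f}, kd_loss={k:.4f}")
```

The sketch mirrors the abstract's structure: the teacher is frozen, while the generated samples and the student are updated in turn on each cycle, which is what allows them to co-evolve "in a competitive and interactive fashion" without any real data.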

Source journal
IEEE Transactions on Neural Networks and Learning Systems
Categories: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture
CiteScore: 23.80
Self-citation rate: 9.60%
Annual articles: 2102
Review time: 3-8 weeks
Journal description: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.