PoisonHD: Poison Attack on Brain-Inspired Hyperdimensional Computing

Ruixuan Wang, Xun Jiao
{"title":"PoisonHD: Poison Attack on Brain-Inspired Hyperdimensional Computing","authors":"Ruixuan Wang, Xun Jiao","doi":"10.23919/DATE54114.2022.9774641","DOIUrl":null,"url":null,"abstract":"While machine learning (ML) methods especially deep neural networks (DNNs) promise enormous societal and economic benefits, their deployments present daunting challenges due to intensive computational demands and high storage requirements. Brain-inspired hyperdimensional computing (HDC) has recently been introduced as an alternative computational model that mimics the “human brain” at the functionality level. HDC has already demonstrated promising accuracy and efficiency in multiple application domains including healthcare and robotics. However, the robustness and security aspects of HDC has not been systematically investigated and sufficiently examined. Poison attack is a commonly-seen attack on various ML models including DNNs. It injects noises to labels of training data to introduce classification error of ML models. This paper presents PoisonHD, an HDC-specific poison attack framework that maximizes its effectiveness in degrading the classification accuracy by leveraging the internal structural information of HDC models. By applying PoisonHD on three datasets, we show that PoisonHD can cause significantly greater accuracy drop on HDC model than a random label-flipping approach. We further develop a defense mechanism by designing an HDC-based data sanitization that can significantly recover the accuracy loss caused by poison attack. To the best of our knowledge, this is the first paper that studies the poison attack on HDC models.","PeriodicalId":232583,"journal":{"name":"2022 Design, Automation & Test in Europe Conference & Exhibition (DATE)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Design, Automation & Test in Europe Conference & Exhibition (DATE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/DATE54114.2022.9774641","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

While machine learning (ML) methods, especially deep neural networks (DNNs), promise enormous societal and economic benefits, their deployment presents daunting challenges due to intensive computational demands and high storage requirements. Brain-inspired hyperdimensional computing (HDC) has recently been introduced as an alternative computational model that mimics the "human brain" at the functionality level. HDC has already demonstrated promising accuracy and efficiency in multiple application domains, including healthcare and robotics. However, the robustness and security aspects of HDC have not been systematically investigated or sufficiently examined. Poison attacks are a common class of attacks on various ML models, including DNNs: they inject noise into the labels of training data to introduce classification errors. This paper presents PoisonHD, an HDC-specific poison attack framework that maximizes its effectiveness in degrading classification accuracy by leveraging the internal structural information of HDC models. By applying PoisonHD to three datasets, we show that it causes a significantly greater accuracy drop on HDC models than a random label-flipping approach. We further develop a defense mechanism, an HDC-based data sanitization scheme, that can significantly recover the accuracy loss caused by poison attacks. To the best of our knowledge, this is the first paper to study poison attacks on HDC models.
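To make the attack surface concrete, below is a minimal, hypothetical sketch of the HDC pipeline the abstract refers to (random-projection encoding, bundling training hypervectors into per-class prototypes, cosine-similarity lookup), together with the random label-flipping baseline that PoisonHD is compared against. All names and parameters here (DIM, encode, train_am, flip_labels, the toy data) are illustrative assumptions, not the paper's implementation, and the attack shown is only the random baseline, not PoisonHD itself.

```python
# Illustrative sketch only: a minimal bipolar-hypervector HDC classifier
# plus the random label-flipping poison baseline. Hypothetical, not the
# paper's actual encoder, datasets, or attack.
import numpy as np

DIM = 10_000  # hypervector dimensionality; values in the thousands are typical for HDC
rng = np.random.default_rng(0)

def encode(x, proj):
    """Encode a real-valued feature vector as a bipolar hypervector:
    random projection followed by sign binarization."""
    return np.sign(proj @ x).astype(np.int8)

def train_am(X_hv, y, n_classes):
    """Build the associative memory by bundling (element-wise summing)
    the hypervectors of each class into one class prototype."""
    return np.stack([X_hv[y == c].sum(axis=0) for c in range(n_classes)])

def classify(x_hv, am):
    """Return the class whose prototype is most cosine-similar to the query."""
    sims = am @ x_hv / (np.linalg.norm(am, axis=1) * np.linalg.norm(x_hv) + 1e-9)
    return int(np.argmax(sims))

def flip_labels(y, n_classes, budget, rng):
    """Random label-flipping baseline: relabel a `budget` fraction of the
    training set with a different, uniformly chosen wrong class."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(budget * len(y)), replace=False)
    for i in idx:
        y[i] = rng.choice([c for c in range(n_classes) if c != y[i]])
    return y

# Toy demo: two Gaussian blobs, 10% of the training labels poisoned.
n, d, n_classes = 400, 16, 2
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, d)),
               rng.normal(1.0, 1.0, (n // 2, d))])
y = np.repeat([0, 1], n // 2)

proj = rng.standard_normal((DIM, d))
X_hv = np.array([encode(x, proj) for x in X])

am_clean = train_am(X_hv, y, n_classes)
am_poisoned = train_am(X_hv, flip_labels(y, n_classes, 0.10, rng), n_classes)

accuracy = lambda am: np.mean([classify(h, am) == t for h, t in zip(X_hv, y)])
print(f"clean accuracy:    {accuracy(am_clean):.3f}")
print(f"poisoned accuracy: {accuracy(am_poisoned):.3f}")
```

Whereas this baseline flips labels uniformly at random, the abstract indicates that PoisonHD instead exploits the internal structure of the HDC model, for example which training samples contribute most strongly to each class prototype, to choose flips that corrupt the associative memory far more per poisoned label.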