Deep Bayesian active learning using in-memory computing hardware

Nature Computational Science | Impact Factor 12.0 | JCR Q1 (Computer Science, Interdisciplinary Applications) | Published: 2024-12-23 | DOI: 10.1038/s43588-024-00744-y
Yudeng Lin, Bin Gao, Jianshi Tang, Qingtian Zhang, He Qian, Huaqiang Wu
{"title":"Deep Bayesian active learning using in-memory computing hardware","authors":"Yudeng Lin, Bin Gao, Jianshi Tang, Qingtian Zhang, He Qian, Huaqiang Wu","doi":"10.1038/s43588-024-00744-y","DOIUrl":null,"url":null,"abstract":"Labeling data is a time-consuming, labor-intensive and costly procedure for many artificial intelligence tasks. Deep Bayesian active learning (DBAL) boosts labeling efficiency exponentially, substantially reducing costs. However, DBAL demands high-bandwidth data transfer and probabilistic computing, posing great challenges for conventional deterministic hardware. Here we propose a memristor stochastic gradient Langevin dynamics in situ learning method that uses the stochastic of memristor modulation to learn efficiency, enabling DBAL within the computation-in-memory (CIM) framework. To prove the feasibility and effectiveness of the proposed method, we implemented in-memory DBAL on a memristor-based stochastic CIM system and successfully demonstrated a robot’s skill learning task. The inherent stochastic characteristics of memristors allow a four-layer memristor Bayesian deep neural network to efficiently identify and learn from uncertain samples. Compared with cutting-edge conventional complementary metal-oxide-semiconductor-based hardware implementation, the stochastic CIM system achieves a remarkable 44% boost in speed and could conserve 153 times more energy. This study introduces an in-memory deep Bayesian active learning framework that uses the stochastic properties of memristors for in situ probabilistic computations. This framework can greatly improve the efficiency and speed of artificial intelligence learning tasks, as demonstrated with a robot skill-learning task.","PeriodicalId":74246,"journal":{"name":"Nature computational science","volume":"5 1","pages":"27-36"},"PeriodicalIF":12.0000,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11774754/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Nature computational science","FirstCategoryId":"1085","ListUrlMain":"https://www.nature.com/articles/s43588-024-00744-y","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
引用次数: 0

Abstract

Labeling data is a time-consuming, labor-intensive and costly procedure for many artificial intelligence tasks. Deep Bayesian active learning (DBAL) boosts labeling efficiency exponentially, substantially reducing costs. However, DBAL demands high-bandwidth data transfer and probabilistic computing, posing great challenges for conventional deterministic hardware. Here we propose a memristor stochastic gradient Langevin dynamics in situ learning method that uses the stochasticity of memristor modulation to learn efficiently, enabling DBAL within the computation-in-memory (CIM) framework. To prove the feasibility and effectiveness of the proposed method, we implemented in-memory DBAL on a memristor-based stochastic CIM system and successfully demonstrated a robot’s skill learning task. The inherent stochastic characteristics of memristors allow a four-layer memristor Bayesian deep neural network to efficiently identify and learn from uncertain samples. Compared with a cutting-edge conventional complementary metal-oxide-semiconductor-based hardware implementation, the stochastic CIM system achieves a remarkable 44% boost in speed and could conserve 153 times more energy. This study introduces an in-memory deep Bayesian active learning framework that uses the stochastic properties of memristors for in situ probabilistic computations. This framework can greatly improve the efficiency and speed of artificial intelligence learning tasks, as demonstrated with a robot skill-learning task.
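
Below is a minimal, self-contained sketch of how the ingredients named in the abstract fit together: stochastic-gradient Langevin weight updates whose injected Gaussian noise plays the role of memristor write stochasticity, Monte Carlo forward passes over noisy weight read-outs to approximate a Bayesian predictive distribution, and a BALD-style acquisition function that picks the most uncertain unlabeled samples for labeling. It uses plain NumPy on toy data, and every name in it (NoisyLinearModel, bald_acquisition, write_noise and so on) is a hypothetical stand-in; it is not the authors' memristor CIM implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class NoisyLinearModel:
    """Toy classifier whose weight updates and read-outs carry Gaussian noise,
    standing in for the stochastic conductance of memristor cells (assumption)."""
    def __init__(self, n_in, n_out, write_noise=0.05):
        self.W = rng.normal(0.0, 0.1, size=(n_in, n_out))
        self.write_noise = write_noise

    def sgld_step(self, x, y, lr=0.1):
        # One stochastic-gradient Langevin step: gradient descent plus injected noise.
        p = softmax(x @ self.W)
        onehot = np.eye(self.W.shape[1])[y]
        grad = x.T @ (p - onehot) / len(x)
        noise = rng.normal(0.0, np.sqrt(2 * lr) * self.write_noise, size=self.W.shape)
        self.W -= lr * grad + noise

    def predict_samples(self, x, n_samples=20):
        # Monte Carlo predictions from noisy read-outs of the stored weights.
        preds = []
        for _ in range(n_samples):
            W_read = self.W + rng.normal(0.0, self.write_noise, size=self.W.shape)
            preds.append(softmax(x @ W_read))
        return np.stack(preds)  # shape: (n_samples, n_points, n_classes)

def bald_acquisition(pred_samples):
    """BALD-style mutual information: entropy of the mean prediction minus
    the mean entropy of the individual stochastic passes."""
    mean_p = pred_samples.mean(axis=0)
    total_entropy = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=-1)
    expected_entropy = -(pred_samples * np.log(pred_samples + 1e-12)).sum(axis=-1).mean(axis=0)
    return total_entropy - expected_entropy

# Active-learning loop on synthetic data: label only the most informative samples.
X_pool = rng.normal(size=(500, 8))
true_W = rng.normal(size=(8, 3))
y_pool = softmax(X_pool @ true_W).argmax(axis=-1)

model = NoisyLinearModel(n_in=8, n_out=3)
labeled = list(rng.choice(len(X_pool), size=10, replace=False))

for round_id in range(5):
    for _ in range(50):  # in situ SGLD training on the currently labeled set
        model.sgld_step(X_pool[labeled], y_pool[labeled])
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    scores = bald_acquisition(model.predict_samples(X_pool[unlabeled]))
    picked = [unlabeled[i] for i in np.argsort(scores)[-10:]]  # most uncertain samples
    labeled += picked
    print(f"round {round_id}: labeled set size = {len(labeled)}")
```

In this sketch the same noise scale is reused for programming (write) and inference (read) perturbations purely for brevity; in the paper these roles are played by the physical conductance fluctuations of the memristor array rather than by pseudo-random numbers generated in software.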

