Feeding Hungry Models Less: Deep Transfer Learning for Embedded Memory PPA Models (Special Session)

F. Last, Ulf Schlichtmann
{"title":"Feeding Hungry Models Less: Deep Transfer Learning for Embedded Memory PPA Models : Special Session","authors":"F. Last, Ulf Schlichtmann","doi":"10.1109/MLCAD52597.2021.9531299","DOIUrl":null,"url":null,"abstract":"Supervised machine learning requires large amounts of labeled data for training. In power, performance and area (PPA) estimation of embedded memories, every new memory compiler version is considered independently of previous versions. Since the data of different memory compilers originate from similar domains, transfer learning may reduce the amount of supervised data required by pre-training PPA estimation neural networks on related domains. We show that provisioning times of PPA models for new compiler versions can be reduced significantly by exploiting similarities across versions and technology nodes. Through transfer learning, we shorten the time to provision PPA models for new compiler versions by 50% to 90%, which speeds up time-critical periods of the design cycle. This is achieved by requiring less than 6,500 ground truth samples for the target compiler to achieve average estimation errors of 0.35% instead of 13,000 samples. Using only 1,300 samples is sufficient to achieve an almost worst-case (98th percentile) error of approximately 3% and allows us to shorten model provisioning times from over 40 days to less than one week.","PeriodicalId":210763,"journal":{"name":"2021 ACM/IEEE 3rd Workshop on Machine Learning for CAD (MLCAD)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 ACM/IEEE 3rd Workshop on Machine Learning for CAD (MLCAD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MLCAD52597.2021.9531299","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Supervised machine learning requires large amounts of labeled data for training. In power, performance and area (PPA) estimation of embedded memories, every new memory compiler version is considered independently of previous versions. Since the data of different memory compilers originate from similar domains, transfer learning may reduce the amount of supervised data required, by pre-training PPA estimation neural networks on related domains. We show that provisioning times of PPA models for new compiler versions can be reduced significantly by exploiting similarities across versions and technology nodes. Through transfer learning, we shorten the time to provision PPA models for new compiler versions by 50% to 90%, which speeds up time-critical periods of the design cycle. Fewer than 6,500 ground-truth samples for the target compiler, rather than 13,000, suffice to reach an average estimation error of 0.35%. Only 1,300 samples are sufficient to achieve a near-worst-case (98th percentile) error of approximately 3%, which shortens model provisioning times from over 40 days to less than one week.
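The workflow the abstract describes is a pre-train-then-fine-tune pattern, and a short sketch may make it concrete. The following is a minimal illustration in PyTorch, not the authors' implementation: the network shape, the feature and target counts (8 compiler parameters, 3 PPA outputs), the sample sizes, the freeze-the-backbone strategy, and the random stand-in tensors are all assumptions chosen for illustration.

```python
# Minimal sketch: pre-train a PPA regressor on an earlier compiler version,
# then fine-tune on a small labeled set from the new version.
# Architecture and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class PPARegressor(nn.Module):
    """MLP mapping memory-compiler parameters (e.g. word count, bit width,
    mux factor) to PPA targets (power, performance, area)."""
    def __init__(self, n_features: int = 8, n_targets: int = 3):
        super().__init__()
        self.backbone = nn.Sequential(        # shared feature extractor
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.head = nn.Linear(64, n_targets)  # compiler-specific output head

    def forward(self, x):
        return self.head(self.backbone(x))

def train(model, x, y, epochs=200, lr=1e-3):
    # Optimize only the parameters that are not frozen.
    opt = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

# 1) Pre-train on abundant labeled data from a previous compiler version
#    or technology node (random tensors stand in for real data).
x_src, y_src = torch.randn(13000, 8), torch.randn(13000, 3)
model = train(PPARegressor(), x_src, y_src)

# 2) Transfer: freeze the shared backbone, re-initialize the head, and
#    fine-tune on the small labeled set for the new compiler version.
for p in model.backbone.parameters():
    p.requires_grad = False
model.head = nn.Linear(64, 3)                 # fresh, trainable head
x_tgt, y_tgt = torch.randn(1300, 8), torch.randn(1300, 3)
model = train(model, x_tgt, y_tgt, lr=1e-4)
```

Whether to freeze the shared layers or fine-tune the whole network at a reduced learning rate is a design choice; the sample counts reported in the abstract (13,000 without transfer versus fewer than 6,500, or even 1,300) motivate step 2, where only the small target-compiler set needs to be labeled.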