A review of in-memory computing for machine learning: architectures, options

International Journal of Web Information Systems · IF 2.5 · Q2 (Computer Science, Information Systems) · Pub Date: 2023-12-22 · DOI: 10.1108/ijwis-08-2023-0131
Václav Snášel, Tran Khanh Dang, Josef Kueng, Lingping Kong
{"title":"机器学习内存计算回顾:架构、选项","authors":"Václav Snášel, Tran Khanh Dang, Josef Kueng, Lingping Kong","doi":"10.1108/ijwis-08-2023-0131","DOIUrl":null,"url":null,"abstract":"\nPurpose\nThis paper aims to review in-memory computing (IMC) for machine learning (ML) applications from history, architectures and options aspects. In this review, the authors investigate different architectural aspects and collect and provide our comparative evaluations.\n\n\nDesign/methodology/approach\nCollecting over 40 IMC papers related to hardware design and optimization techniques of recent years, then classify them into three optimization option categories: optimization through graphic processing unit (GPU), optimization through reduced precision and optimization through hardware accelerator. Then, the authors brief those techniques in aspects such as what kind of data set it applied, how it is designed and what is the contribution of this design.\n\n\nFindings\nML algorithms are potent tools accommodated on IMC architecture. Although general-purpose hardware (central processing units and GPUs) can supply explicit solutions, their energy efficiencies have limitations because of their excessive flexibility support. On the other hand, hardware accelerators (field programmable gate arrays and application-specific integrated circuits) win on the energy efficiency aspect, but individual accelerator often adapts exclusively to ax single ML approach (family). From a long hardware evolution perspective, hardware/software collaboration heterogeneity design from hybrid platforms is an option for the researcher.\n\n\nOriginality/value\nIMC’s optimization enables high-speed processing, increases performance and analyzes massive volumes of data in real-time. This work reviews IMC and its evolution. Then, the authors categorize three optimization paths for the IMC architecture to improve performance metrics.\n","PeriodicalId":44153,"journal":{"name":"International Journal of Web Information Systems","volume":"34 25","pages":""},"PeriodicalIF":2.5000,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A review of in-memory computing for machine learning: architectures, options\",\"authors\":\"Václav Snášel, Tran Khanh Dang, Josef Kueng, Lingping Kong\",\"doi\":\"10.1108/ijwis-08-2023-0131\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\nPurpose\\nThis paper aims to review in-memory computing (IMC) for machine learning (ML) applications from history, architectures and options aspects. In this review, the authors investigate different architectural aspects and collect and provide our comparative evaluations.\\n\\n\\nDesign/methodology/approach\\nCollecting over 40 IMC papers related to hardware design and optimization techniques of recent years, then classify them into three optimization option categories: optimization through graphic processing unit (GPU), optimization through reduced precision and optimization through hardware accelerator. Then, the authors brief those techniques in aspects such as what kind of data set it applied, how it is designed and what is the contribution of this design.\\n\\n\\nFindings\\nML algorithms are potent tools accommodated on IMC architecture. Although general-purpose hardware (central processing units and GPUs) can supply explicit solutions, their energy efficiencies have limitations because of their excessive flexibility support. 
On the other hand, hardware accelerators (field programmable gate arrays and application-specific integrated circuits) win on the energy efficiency aspect, but individual accelerator often adapts exclusively to ax single ML approach (family). From a long hardware evolution perspective, hardware/software collaboration heterogeneity design from hybrid platforms is an option for the researcher.\\n\\n\\nOriginality/value\\nIMC’s optimization enables high-speed processing, increases performance and analyzes massive volumes of data in real-time. This work reviews IMC and its evolution. Then, the authors categorize three optimization paths for the IMC architecture to improve performance metrics.\\n\",\"PeriodicalId\":44153,\"journal\":{\"name\":\"International Journal of Web Information Systems\",\"volume\":\"34 25\",\"pages\":\"\"},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2023-12-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Web Information Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1108/ijwis-08-2023-0131\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Web Information Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1108/ijwis-08-2023-0131","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Purpose
This paper aims to review in-memory computing (IMC) for machine learning (ML) applications from the history, architectures and options aspects. In this review, the authors investigate different architectural aspects and collect and provide their comparative evaluations.

Design/methodology/approach
The authors collect over 40 recent IMC papers related to hardware design and optimization techniques and classify them into three optimization option categories: optimization through graphics processing units (GPUs), optimization through reduced precision and optimization through hardware accelerators. The authors then summarize each technique in terms of the data set it is applied to, how it is designed and what the design contributes.

Findings
ML algorithms are potent tools accommodated on IMC architectures. Although general-purpose hardware (central processing units and GPUs) can supply explicit solutions, their energy efficiency is limited by the excessive flexibility they must support. Hardware accelerators (field-programmable gate arrays and application-specific integrated circuits), on the other hand, win on energy efficiency, but an individual accelerator is often adapted exclusively to a single ML approach (family). From a long-term hardware evolution perspective, hardware/software co-design on heterogeneous hybrid platforms is an option for researchers.

Originality/value
IMC optimization enables high-speed processing, increases performance and allows massive volumes of data to be analyzed in real time. This work reviews IMC and its evolution, and then categorizes three optimization paths for the IMC architecture to improve performance metrics.
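To make the "optimization through reduced precision" category above concrete: it generally means storing and computing on low-bit representations of model parameters instead of 32-bit floats. The following Python sketch is purely illustrative and is not taken from the paper or from any of the surveyed works; the function names and the symmetric int8 scaling scheme are assumptions chosen for the example.

```python
# Illustrative sketch only: minimal symmetric post-training int8 quantization
# of a weight matrix, to show the basic idea behind reduced-precision
# optimization. Scheme and names are assumptions, not the paper's method.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Quantize float32 weights to int8 plus a single scale factor."""
    # Map the largest magnitude to 127; the tiny floor avoids division by zero
    # for an all-zero matrix.
    scale = max(float(np.max(np.abs(weights))), 1e-12) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 matrix from the int8 representation."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 4)).astype(np.float32)   # toy weight matrix
    q, s = quantize_int8(w)
    w_hat = dequantize(q, s)
    print("max abs quantization error:", float(np.max(np.abs(w - w_hat))))
```

Storing the int8 matrix plus one scale factor instead of 4-byte floats cuts the memory footprint roughly fourfold; the reduced-precision works covered by the review typically pursue this kind of saving with more elaborate per-channel or mixed-precision schemes.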
Source journal
International Journal of Web Information Systems
CiteScore: 4.60
Self-citation rate: 0.00%
Articles published: 19
Journal description: The Global Information Infrastructure is a daily reality. In spite of the many applications in all domains of our societies (e-business, e-commerce, e-learning, e-science and e-government, for instance), and in spite of the tremendous advances by engineers and scientists, the seamless development of Web information systems and services remains a major challenge. The journal examines how the current shared vision for the future is one of a semantically rich, service-oriented information architecture for global information systems. This vision sits at the convergence of progress in technologies such as XML, Web services, RDF and OWL; in multimedia, multimodal and multilingual information retrieval; and in distributed, mobile and ubiquitous computing.
Topicality: While the International Journal of Web Information Systems covers a broad range of topics, the journal welcomes papers that provide a perspective on all aspects of Web information systems: Web semantics and Web dynamics, Web mining and searching, Web databases and Web data integration, Web-based commerce and e-business, Web collaboration and distributed computing, Internet computing and networks, performance of Web applications, and Web multimedia services and Web-based education.