Study and evaluation of automatic offloading for function blocks of applications

Yoji Yamato
{"title":"研究和评估应用程序功能块的自动卸载","authors":"Yoji Yamato","doi":"10.1080/00051144.2024.2301888","DOIUrl":null,"url":null,"abstract":"Systems using graphical processing units (GPUs) and field-programmable gate arrays (FPGAs) have increased due to their advantages over central processing units (CPUs). However, such systems require the understanding of hardware-specific technical specifications such as Hardware Description Language (HDL) and compute unified device architecture (CUDA), which is a high hurdle. Based on this background, we previously proposed environment-adaptive software that enables automatic conversion, configuration and high-performance operation of existing code according to the hardware to be placed. As an element of this concept, we also proposed a method of automatically offloading loop statements of application source code for CPUs to GPUs and FPGAs. In this paper, we propose a method for offloading a function block, which is a larger unit, instead of individual loop statements in an application to achieve higher speed by automatically offloading to GPUs and FPGAs. We implemented the proposed method and evaluated it using current applications offloading to GPUs and FPGAs.","PeriodicalId":503352,"journal":{"name":"Automatika","volume":"83 12","pages":"387 - 400"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Study and evaluation of automatic offloading for function blocks of applications\",\"authors\":\"Yoji Yamato\",\"doi\":\"10.1080/00051144.2024.2301888\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Systems using graphical processing units (GPUs) and field-programmable gate arrays (FPGAs) have increased due to their advantages over central processing units (CPUs). However, such systems require the understanding of hardware-specific technical specifications such as Hardware Description Language (HDL) and compute unified device architecture (CUDA), which is a high hurdle. Based on this background, we previously proposed environment-adaptive software that enables automatic conversion, configuration and high-performance operation of existing code according to the hardware to be placed. As an element of this concept, we also proposed a method of automatically offloading loop statements of application source code for CPUs to GPUs and FPGAs. In this paper, we propose a method for offloading a function block, which is a larger unit, instead of individual loop statements in an application to achieve higher speed by automatically offloading to GPUs and FPGAs. 
We implemented the proposed method and evaluated it using current applications offloading to GPUs and FPGAs.\",\"PeriodicalId\":503352,\"journal\":{\"name\":\"Automatika\",\"volume\":\"83 12\",\"pages\":\"387 - 400\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Automatika\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/00051144.2024.2301888\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Automatika","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/00051144.2024.2301888","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Systems using graphics processing units (GPUs) and field-programmable gate arrays (FPGAs) have increased because of their advantages over central processing units (CPUs). However, such systems require an understanding of hardware-specific technical specifications, such as hardware description languages (HDL) and the Compute Unified Device Architecture (CUDA), which is a high hurdle. Against this background, we previously proposed environment-adaptive software that automatically converts, configures and runs existing code with high performance according to the hardware on which it is deployed. As an element of this concept, we also proposed a method for automatically offloading loop statements in CPU-oriented application source code to GPUs and FPGAs. In this paper, we propose a method that offloads function blocks, which are larger units than individual loop statements, to GPUs and FPGAs automatically to achieve higher speed. We implemented the proposed method and evaluated it by offloading existing applications to GPUs and FPGAs.
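
The abstract distinguishes two offload granularities: individual loop statements and whole function blocks. The C sketch below is a minimal illustration of that distinction under stated assumptions; the OpenACC directive and the gpu_matrix_multiply() placeholder are hypothetical stand-ins chosen for illustration and are not taken from the paper's toolchain.

/* Illustrative sketch only: it contrasts the two offload granularities
 * named in the abstract.  The OpenACC pragma and gpu_matrix_multiply()
 * are hypothetical stand-ins, not the paper's actual toolchain. */
#include <stddef.h>

/* (1) Loop-statement offload: a single loop is annotated so that an
 *     OpenACC-capable compiler can move just that loop to the GPU. */
void scale_add(double *a, const double *b, double c, size_t n) {
    #pragma acc parallel loop copy(a[0:n]) copyin(b[0:n])
    for (size_t i = 0; i < n; i++) {
        a[i] += c * b[i];
    }
}

/* (2) Function-block offload: the whole routine is treated as one unit.
 *     gpu_matrix_multiply() is a placeholder for a GPU library routine
 *     or FPGA IP core that would replace the CPU implementation; the
 *     stub body below only keeps the sketch compilable. */
static void gpu_matrix_multiply(const double *a, const double *b,
                                double *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++) {
            double sum = 0.0;
            for (size_t k = 0; k < n; k++)
                sum += a[i * n + k] * b[k * n + j];
            out[i * n + j] = sum;
        }
}

void compute(const double *a, const double *b, double *out, size_t n) {
    /* original CPU call:  matrix_multiply(a, b, out, n);  */
    gpu_matrix_multiply(a, b, out, n);  /* entire block swapped out */
}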