Benchmarking the NXP i.MX8M+ neural processing unit: smart parking case study

Tecnologia en Marcha | IF 0.1 | Q4, Multidisciplinary Sciences | Published: 2022-11-28 | DOI: 10.18845/tm.v35i9.6487
Edgar Chaves-González, Luis G. León-Vega
Citations: 0

Abstract

Nowadays, deep learning has become one of the most popular solutions for computer vision, and it has also reached the Edge. This has led System-on-Chip (SoC) vendors, including NVIDIA, NXP, and Texas Instruments, to integrate inference accelerators into their embedded SoCs. This work explores the performance of the NXP i.MX8M Plus Neural Processing Unit (NPU) as one solution for inference tasks. To measure performance, we propose an experiment that uses a GStreamer pipeline for inferring license plates, composed of two stages: license plate detection and character inference. The benchmark samples execution time and CPU usage while running the inference serially and in parallel. The results show that the key benefit of using the NPU is freeing the CPU for other tasks: after offloading license plate detection to the NPU, overall CPU consumption dropped by 10x. The resulting inference rate is 1 Hz, limited by the character inference stage.
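As a rough illustration of the kind of two-stage pipeline the abstract describes, the sketch below chains two NNStreamer tensor_filter elements in a gst-launch pipeline, offloading the first model to the i.MX8M Plus NPU through TensorFlow Lite's external VX delegate. The model file names, caps, and camera device are hypothetical placeholders, not the paper's actual configuration:

```shell
# Hypothetical sketch: camera frames -> license plate detection
# (offloaded to the NPU via the VX external delegate) -> character
# inference on the CPU. Model paths and caps are placeholders.
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480 ! \
  videoconvert ! tensor_converter ! \
  tensor_filter framework=tensorflow-lite model=plate_detect.tflite \
      custom=Delegate:External,ExtDelegateLib:libvx_delegate.so ! \
  tensor_filter framework=tensorflow-lite model=char_infer.tflite ! \
  tensor_sink
```

This is a hardware-specific pipeline fragment: it requires an i.MX8M Plus board with libvx_delegate.so installed, so it is shown as a configuration sketch rather than a runnable example.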
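The benchmarking method itself (sampling execution time and CPU usage while running the two stages serially and in parallel) can be sketched in plain Python. The two stage functions below are cheap stand-ins for the detection and character-inference models, chosen only so the measurement scaffolding is visible; they are not the paper's code:

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

def detect_plate(frame):
    # Stand-in for the license plate detection stage (the part that
    # would be offloaded to the NPU in the paper's setup).
    return sum(frame) % 256

def infer_characters(roi):
    # Stand-in for the character inference stage (the reported bottleneck).
    return str(roi)

def run_serial(frames):
    # Serial schedule: both stages run back to back on each frame.
    return [infer_characters(detect_plate(f)) for f in frames]

def run_parallel(frames):
    # Parallel schedule: spread each stage across a small thread pool.
    with ThreadPoolExecutor(max_workers=2) as pool:
        rois = list(pool.map(detect_plate, frames))
        return list(pool.map(infer_characters, rois))

def benchmark(fn, frames):
    # Sample wall-clock time and process CPU time around the run,
    # mirroring the execution-time / CPU-usage metrics of the benchmark.
    cpu0 = os.times()
    t0 = time.perf_counter()
    out = fn(frames)
    wall = time.perf_counter() - t0
    cpu1 = os.times()
    cpu = (cpu1.user - cpu0.user) + (cpu1.system - cpu0.system)
    return out, wall, cpu

frames = [list(range(100))] * 8
serial_out, serial_wall, serial_cpu = benchmark(run_serial, frames)
parallel_out, parallel_wall, parallel_cpu = benchmark(run_parallel, frames)
assert serial_out == parallel_out  # both schedules must produce the same result
```

Comparing `serial_cpu` against `parallel_cpu` (and against the same run with a stage offloaded to an accelerator) is what exposes the CPU-freeing effect the abstract reports.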
Source journal: Tecnologia en Marcha (Multidisciplinary Sciences)
Self-citation rate: 0.00%
Articles per year: 93
Review time: 28 weeks