Co-design of communication and machine inference for cloud robotics

IF 3.7 · CAS Zone 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Autonomous Robots · Pub Date: 2023-03-20 · DOI: 10.1007/s10514-023-10093-w
Manabu Nakanoya, Sai Shankar Narasimhan, Sharachchandra Bhat, Alexandros Anemogiannis, Akul Datta, Sachin Katti, Sandeep Chinchali, Marco Pavone
{"title":"云机器人通信与机器推理协同设计","authors":"Manabu Nakanoya,&nbsp;Sai Shankar Narasimhan,&nbsp;Sharachchandra Bhat,&nbsp;Alexandros Anemogiannis,&nbsp;Akul Datta,&nbsp;Sachin Katti,&nbsp;Sandeep Chinchali,&nbsp;Marco Pavone","doi":"10.1007/s10514-023-10093-w","DOIUrl":null,"url":null,"abstract":"<div><p>Today, even the most compute-and-power constrained robots can measure complex, high data-rate video and LIDAR sensory streams. Often, such robots, ranging from low-power drones to space and subterranean rovers, need to transmit high-bitrate sensory data to a remote compute server if they are uncertain or cannot scalably run complex perception or mapping tasks locally. However, today’s representations for sensory data are mostly designed for <i>human, not robotic</i>, perception and thus often waste precious compute or wireless network resources to transmit unimportant parts of a scene that are unnecessary for a high-level robotic task. This paper presents an algorithm to learn <i>task-relevant</i> representations of sensory data that are co-designed with a pre-trained robotic perception model’s ultimate objective. Our algorithm aggressively compresses robotic sensory data by up to 11<span>\\(\\times \\)</span> more than competing methods. Further, it achieves high accuracy and robust generalization on diverse tasks including Mars terrain classification with low-power deep learning accelerators, neural motion planning, and environmental timeseries classification.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":null,"pages":null},"PeriodicalIF":3.7000,"publicationDate":"2023-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10093-w.pdf","citationCount":"7","resultStr":"{\"title\":\"Co-design of communication and machine inference for cloud robotics\",\"authors\":\"Manabu Nakanoya,&nbsp;Sai Shankar Narasimhan,&nbsp;Sharachchandra Bhat,&nbsp;Alexandros Anemogiannis,&nbsp;Akul Datta,&nbsp;Sachin Katti,&nbsp;Sandeep Chinchali,&nbsp;Marco Pavone\",\"doi\":\"10.1007/s10514-023-10093-w\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Today, even the most compute-and-power constrained robots can measure complex, high data-rate video and LIDAR sensory streams. Often, such robots, ranging from low-power drones to space and subterranean rovers, need to transmit high-bitrate sensory data to a remote compute server if they are uncertain or cannot scalably run complex perception or mapping tasks locally. However, today’s representations for sensory data are mostly designed for <i>human, not robotic</i>, perception and thus often waste precious compute or wireless network resources to transmit unimportant parts of a scene that are unnecessary for a high-level robotic task. This paper presents an algorithm to learn <i>task-relevant</i> representations of sensory data that are co-designed with a pre-trained robotic perception model’s ultimate objective. Our algorithm aggressively compresses robotic sensory data by up to 11<span>\\\\(\\\\times \\\\)</span> more than competing methods. 
Further, it achieves high accuracy and robust generalization on diverse tasks including Mars terrain classification with low-power deep learning accelerators, neural motion planning, and environmental timeseries classification.</p></div>\",\"PeriodicalId\":55409,\"journal\":{\"name\":\"Autonomous Robots\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2023-03-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s10514-023-10093-w.pdf\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Autonomous Robots\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10514-023-10093-w\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Autonomous Robots","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10514-023-10093-w","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 7

Abstract



Today, even the most compute-and-power constrained robots can measure complex, high data-rate video and LIDAR sensory streams. Often, such robots, ranging from low-power drones to space and subterranean rovers, need to transmit high-bitrate sensory data to a remote compute server if they are uncertain or cannot scalably run complex perception or mapping tasks locally. However, today’s representations for sensory data are mostly designed for human, not robotic, perception and thus often waste precious compute or wireless network resources to transmit unimportant parts of a scene that are unnecessary for a high-level robotic task. This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model’s ultimate objective. Our algorithm aggressively compresses robotic sensory data by up to 11× more than competing methods. Further, it achieves high accuracy and robust generalization on diverse tasks including Mars terrain classification with low-power deep learning accelerators, neural motion planning, and environmental timeseries classification.
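The abstract describes the core idea at a high level: a small encoder on the robot compresses raw sensory data into a compact code, and the training objective for that encoder is supplied by a frozen, pre-trained perception model on the server, so only task-relevant information needs to survive compression. The sketch below illustrates that co-design pattern in PyTorch; the module names, dimensions, stand-in classifier, and the rate penalty weight `beta` are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of task-aware compression co-designed with a frozen perception model.
# Assumed/illustrative: all dimensions, the stand-in linear task model, and beta.
import torch
import torch.nn as nn

class TaskAwareEncoder(nn.Module):
    """Compresses a high-dimensional sensory vector into a small latent code (robot side)."""
    def __init__(self, in_dim: int = 2048, code_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, code_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class Decoder(nn.Module):
    """Maps the received code back to the input space of the frozen task model (server side)."""
    def __init__(self, code_dim: int = 32, out_dim: int = 2048):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# Frozen, pre-trained perception model (a stand-in classifier head for illustration).
task_model = nn.Linear(2048, 10)
for p in task_model.parameters():
    p.requires_grad = False

encoder, decoder = TaskAwareEncoder(), Decoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
task_loss_fn = nn.CrossEntropyLoss()
beta = 1e-3  # assumed weight on the "rate" (compression cost) term

# Dummy batch standing in for sensory features and task labels.
x = torch.randn(16, 2048)
y = torch.randint(0, 10, (16,))

for _ in range(100):
    z = encoder(x)              # robot: compress before transmission
    x_hat = decoder(z)          # server: reconstruct the task model's input
    logits = task_model(x_hat)  # frozen perception model defines the objective
    # Task loss preserves only task-relevant information; an L1 penalty on the
    # code stands in for a bitrate/dimensionality cost.
    loss = task_loss_fn(logits, y) + beta * z.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The essential design point is that gradients from the frozen task model, rather than a pixel-reconstruction loss, shape the learned representation; the rate term would be replaced by whatever bitrate or latency cost a given deployment cares about.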

Source journal

Autonomous Robots (Engineering & Technology - Robotics)
CiteScore: 7.90
Self-citation rate: 5.70%
Articles per year: 46
Review time: 3 months

About the journal: Autonomous Robots reports on the theory and applications of robotic systems capable of some degree of self-sufficiency. It features papers that include performance data on actual robots in the real world. Coverage includes: control of autonomous robots · real-time vision · autonomous wheeled and tracked vehicles · legged vehicles · computational architectures for autonomous systems · distributed architectures for learning, control and adaptation · studies of autonomous robot systems · sensor fusion · theory of autonomous systems · terrain mapping and recognition · self-calibration and self-repair for robots · self-reproducing intelligent structures · genetic algorithms as models for robot development. The focus is on the ability to move and be self-sufficient, not on whether the system is an imitation of biology. Of course, biological models for robotic systems are of major interest to the journal since living systems are prototypes for autonomous behavior.