Reinforcement learning-based drone simulators: survey, practice, and challenge

IF 10.7 · CAS Zone 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence) · Artificial Intelligence Review · Pub Date: 2024-09-05 · DOI: 10.1007/s10462-024-10933-w
Jun Hoong Chan, Kai Liu, Yu Chen, A. S. M. Sharifuzzaman Sagar, Yong-Guk Kim
Citations: 0

Abstract

Recently, machine learning has been very useful in solving diverse tasks with drones, such as autonomous navigation, visual surveillance, communication, disaster management, and agriculture. Among machine learning approaches, two representative paradigms have been widely utilized in such applications: supervised learning and reinforcement learning. Researchers often prefer supervised learning, mostly based on convolutional neural networks, because of its robustness and ease of use, yet data labeling is laborious and time-consuming. On the other hand, when traditional reinforcement learning is combined with deep neural networks, it becomes a very powerful tool for high-dimensional inputs such as images and video. Along with the fast development of reinforcement learning, many researchers have applied it to drone applications, where it often outperforms supervised learning. However, it usually requires the agent to explore the environment by trial and error, which is costly and unrealistic in the real world. Recent advances in simulated environments allow an agent to learn by itself and overcome these drawbacks, although the gap between the real environment and the simulator ultimately has to be minimized. In this sense, a realistic and reliable simulator is essential for reinforcement learning training. This paper investigates various drone simulators that work with diverse reinforcement learning architectures. The characteristics of reinforcement learning-based drone simulators are analyzed and compared for researchers who would like to employ them in their projects. Finally, we shed light on some challenges and potential directions for future drone simulators.
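
Since the abstract centers on agents that learn by trial and error inside a simulator rather than on a real drone, the sketch below illustrates that loop with the widely used Gymnasium and Stable-Baselines3 tooling. The toy one-dimensional altitude-hold environment, its dynamics, and the PPO settings are illustrative assumptions only, not taken from the paper; the simulators it surveys expose far richer physics and sensing, but the reset/step/reward interface an RL agent trains against is analogous.

```python
# A minimal sketch (assumptions noted above): a toy stand-in for a drone simulator
# and a reinforcement learning agent trained against it entirely in simulation.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class ToyHoverEnv(gym.Env):
    """Hypothetical 1-D altitude-hold task used purely for illustration."""

    def __init__(self):
        super().__init__()
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(2,), dtype=np.float32)
        self.target = 1.0  # desired altitude

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.alt, self.vel, self.t = 0.0, 0.0, 0
        return np.array([self.alt, self.vel], dtype=np.float32), {}

    def step(self, action):
        thrust = float(np.clip(action[0], -1.0, 1.0))
        self.vel += 0.05 * thrust - 0.01              # crude thrust-minus-gravity dynamics
        self.alt += self.vel
        self.t += 1
        reward = -abs(self.alt - self.target)         # stay close to the target altitude
        terminated = self.alt < -1.0 or self.alt > 3.0  # "crash" or fly-away ends the episode
        truncated = self.t >= 200
        obs = np.array([self.alt, self.vel], dtype=np.float32)
        return obs, reward, terminated, truncated, {}

if __name__ == "__main__":
    env = ToyHoverEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=50_000)               # trial-and-error happens only in simulation

    obs, _ = env.reset()
    for _ in range(200):
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, _ = env.step(action)
        if terminated or truncated:
            break
    print(f"final altitude: {obs[0]:.2f} (target {env.target})")
```

Swapping the toy environment for a drone simulator that exposes a comparable step/reset interface leaves the training loop essentially unchanged, which is one reason the choice of simulator and its RL integration matter.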

Journal
Artificial Intelligence Review (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 22.00
Self-citation rate: 3.30%
Articles published: 194
Average review time: 5.3 months
Journal description: Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.