Training Datasets Generation for Machine Learning: Application to Vision Based Navigation

Jérémy Lebreton, Ingo Ahrns, Roland Brochard, Christoph Haskamp, Matthieu Le Goff, Nicolas Menga, Nicolas Ollagnier, Ralf Regele, Francesco Capolupo, Massimo Casasco
{"title":"为机器学习生成训练数据集:基于视觉的导航应用","authors":"Jérémy Lebreton, Ingo Ahrns, Roland Brochard, Christoph Haskamp, Matthieu Le Goff, Nicolas Menga, Nicolas Ollagnier, Ralf Regele, Francesco Capolupo, Massimo Casasco","doi":"arxiv-2409.11383","DOIUrl":null,"url":null,"abstract":"Vision Based Navigation consists in utilizing cameras as precision sensors\nfor GNC after extracting information from images. To enable the adoption of\nmachine learning for space applications, one of obstacles is the demonstration\nthat available training datasets are adequate to validate the algorithms. The\nobjective of the study is to generate datasets of images and metadata suitable\nfor training machine learning algorithms. Two use cases were selected and a\nrobust methodology was developed to validate the datasets including the ground\ntruth. The first use case is in-orbit rendezvous with a man-made object: a\nmockup of satellite ENVISAT. The second use case is a Lunar landing scenario.\nDatasets were produced from archival datasets (Chang'e 3), from the laboratory\nat DLR TRON facility and at Airbus Robotic laboratory, from SurRender software\nhigh fidelity image simulator using Model Capture and from Generative\nAdversarial Networks. The use case definition included the selection of\nalgorithms as benchmark: an AI-based pose estimation algorithm and a dense\noptical flow algorithm were selected. Eventually it is demonstrated that\ndatasets produced with SurRender and selected laboratory facilities are\nadequate to train machine learning algorithms.","PeriodicalId":501209,"journal":{"name":"arXiv - PHYS - Earth and Planetary Astrophysics","volume":"7 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Training Datasets Generation for Machine Learning: Application to Vision Based Navigation\",\"authors\":\"Jérémy Lebreton, Ingo Ahrns, Roland Brochard, Christoph Haskamp, Matthieu Le Goff, Nicolas Menga, Nicolas Ollagnier, Ralf Regele, Francesco Capolupo, Massimo Casasco\",\"doi\":\"arxiv-2409.11383\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Vision Based Navigation consists in utilizing cameras as precision sensors\\nfor GNC after extracting information from images. To enable the adoption of\\nmachine learning for space applications, one of obstacles is the demonstration\\nthat available training datasets are adequate to validate the algorithms. The\\nobjective of the study is to generate datasets of images and metadata suitable\\nfor training machine learning algorithms. Two use cases were selected and a\\nrobust methodology was developed to validate the datasets including the ground\\ntruth. The first use case is in-orbit rendezvous with a man-made object: a\\nmockup of satellite ENVISAT. The second use case is a Lunar landing scenario.\\nDatasets were produced from archival datasets (Chang'e 3), from the laboratory\\nat DLR TRON facility and at Airbus Robotic laboratory, from SurRender software\\nhigh fidelity image simulator using Model Capture and from Generative\\nAdversarial Networks. The use case definition included the selection of\\nalgorithms as benchmark: an AI-based pose estimation algorithm and a dense\\noptical flow algorithm were selected. 
Eventually it is demonstrated that\\ndatasets produced with SurRender and selected laboratory facilities are\\nadequate to train machine learning algorithms.\",\"PeriodicalId\":501209,\"journal\":{\"name\":\"arXiv - PHYS - Earth and Planetary Astrophysics\",\"volume\":\"7 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - PHYS - Earth and Planetary Astrophysics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11383\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - PHYS - Earth and Planetary Astrophysics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11383","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Vision Based Navigation consists in using cameras as precision sensors for GNC (Guidance, Navigation and Control) after extracting information from images. One of the obstacles to adopting machine learning for space applications is demonstrating that the available training datasets are adequate to validate the algorithms. The objective of this study is to generate datasets of images and metadata suitable for training machine learning algorithms. Two use cases were selected, and a robust methodology was developed to validate the datasets, including their ground truth. The first use case is in-orbit rendezvous with a man-made object: a mockup of the ENVISAT satellite. The second use case is a Lunar landing scenario. Datasets were produced from archival data (Chang'e 3), from laboratory campaigns at the DLR TRON facility and the Airbus Robotic laboratory, from the SurRender high-fidelity image simulator using Model Capture, and from Generative Adversarial Networks. The use case definition included the selection of benchmark algorithms: an AI-based pose estimation algorithm and a dense optical flow algorithm were chosen. Ultimately, it is demonstrated that the datasets produced with SurRender and the selected laboratory facilities are adequate for training machine learning algorithms.
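To illustrate the two benchmark data products the abstract refers to, the sketch below pairs an image with a ground-truth relative-pose label (the kind of metadata an AI-based pose estimator would be trained on) and computes a dense optical flow field between two consecutive frames. This is a minimal sketch, not the study's actual pipeline: the label schema, the helper names (`make_pose_label`, `dense_flow`), and the use of OpenCV's Farneback method as a stand-in dense optical flow algorithm are assumptions made for illustration only.

```python
# Illustrative sketch only (not the authors' pipeline): one ground-truth pose
# label per image, plus a dense optical flow field between consecutive frames.
import json

import cv2
import numpy as np


def make_pose_label(image_id: str, position_m, quaternion_wxyz) -> dict:
    """Bundle one image with its ground-truth relative pose (hypothetical schema)."""
    return {
        "image_id": image_id,
        "t_chaser_target_m": list(position_m),          # translation, metres
        "q_chaser_target_wxyz": list(quaternion_wxyz),  # attitude quaternion
    }


def dense_flow(prev_bgr: np.ndarray, next_bgr: np.ndarray) -> np.ndarray:
    """Dense optical flow between two frames using OpenCV's Farneback method."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    # Args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    # Returns an HxWx2 array of per-pixel (dx, dy) displacements.
    return cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0
    )


if __name__ == "__main__":
    # Two synthetic frames stand in for rendered or laboratory images.
    rng = np.random.default_rng(0)
    frame0 = rng.integers(0, 255, (128, 128, 3), dtype=np.uint8)
    frame1 = np.roll(frame0, shift=2, axis=1)  # simulate a small lateral motion

    flow = dense_flow(frame0, frame1)
    label = make_pose_label("frame_000001", (0.0, 0.1, 12.5), (1.0, 0.0, 0.0, 0.0))
    print(json.dumps(label, indent=2))
    print("mean |flow|:", float(np.linalg.norm(flow, axis=2).mean()))
```

In a real dataset the pose labels would come directly from the simulator or laboratory facility metadata rather than being built by hand, and the flow ground truth would be derived from the known scene geometry and camera motion.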