Dario Mantegazza, Jérôme Guzzi, L. Gambardella, A. Giusti
2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 369–369. Published 2019-03-11.
DOI: 10.1109/HRI.2019.8673022
Learning Vision-Based Quadrotor Control in User Proximity
We consider a quadrotor equipped with a forward-facing camera and a user moving freely in its proximity; we control the quadrotor so that it stays in front of the user, using only camera frames. To do so, we train a deep neural network to predict the drone's control commands from the camera image. Training data is acquired by running a simple hand-designed controller that relies on optical motion-tracking data.
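The training setup described above is a form of behavior cloning: an expert controller with privileged state (optical motion tracking) labels the data, and a learned policy must reproduce its commands from onboard observations alone. A minimal sketch of that pipeline, with a linear least-squares regressor standing in for the paper's deep network and synthetic feature vectors standing in for camera frames (all names and gains here are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def expert_controller(pose):
    # Hand-designed expert: a proportional controller that steers the drone
    # toward a point in front of the user. The gain is arbitrary for this sketch.
    return -0.8 * pose

# 1) Collect a dataset by running the expert. The expert sees the true relative
#    pose (as optical tracking would provide); the policy sees only "observations",
#    here the pose plus uninformative noise features standing in for image features.
poses = rng.uniform(-1, 1, size=(500, 3))                     # relative x, y, yaw
noise = rng.normal(0.0, 0.01, size=(500, 3))                  # distractor features
observations = np.hstack([poses, noise])
controls = expert_controller(poses)                           # expert labels

# 2) Supervised regression: fit a policy mapping observations -> controls.
W, *_ = np.linalg.lstsq(observations, controls, rcond=None)

# 3) At deployment, the learned policy needs only the observation, not the
#    privileged tracking state.
test_pose = np.array([0.5, -0.2, 0.1])
test_obs = np.hstack([test_pose, np.zeros(3)])
predicted = test_obs @ W
```

The regressor recovers the expert's behavior because the distractor features are uncorrelated with the labels; the paper's contribution is making the same idea work when the observation is a raw camera image and the model is a deep CNN.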