Andreas Seel, Florian Kreutzjans, B. Küster, M. Stonis, Ludger Overmeyer
{"title":"Dueling Double Deep Q-Network用于无人驾驶飞机系统在工厂环境中的室内探测","authors":"Andreas Seel, Florian Kreutzjans, B. Küster, M. Stonis, Ludger Overmeyer","doi":"10.1109/INFOTEH57020.2023.10094171","DOIUrl":null,"url":null,"abstract":"Although factory planning is widely recognized as a way to significantly enhance manufacturing productivity, the associated costs in terms of time and money can be prohibitive. In this paper, we present a solution to this challenge through the development of a Software-in-the-loop (SITL) framework that leverages an Unmanned Aircraft System (UAS) in an autonomous capacity. The framework incorporates simulated sensors, a UAS, and a virtual factory environment. Moreover, we propose a Deep Reinforcement Learning (DRL) agent that is capable of collision avoidance and exploration using the Dueling Double Deep Q-Network (3DQN) with prioritized experience replay.","PeriodicalId":287923,"journal":{"name":"2023 22nd International Symposium INFOTEH-JAHORINA (INFOTEH)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Dueling Double Deep Q-Network for indoor exploration in factory environments with an unmanned aircraft system\",\"authors\":\"Andreas Seel, Florian Kreutzjans, B. Küster, M. Stonis, Ludger Overmeyer\",\"doi\":\"10.1109/INFOTEH57020.2023.10094171\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Although factory planning is widely recognized as a way to significantly enhance manufacturing productivity, the associated costs in terms of time and money can be prohibitive. In this paper, we present a solution to this challenge through the development of a Software-in-the-loop (SITL) framework that leverages an Unmanned Aircraft System (UAS) in an autonomous capacity. The framework incorporates simulated sensors, a UAS, and a virtual factory environment. Moreover, we propose a Deep Reinforcement Learning (DRL) agent that is capable of collision avoidance and exploration using the Dueling Double Deep Q-Network (3DQN) with prioritized experience replay.\",\"PeriodicalId\":287923,\"journal\":{\"name\":\"2023 22nd International Symposium INFOTEH-JAHORINA (INFOTEH)\",\"volume\":\"41 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-03-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 22nd International Symposium INFOTEH-JAHORINA (INFOTEH)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/INFOTEH57020.2023.10094171\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 22nd International Symposium INFOTEH-JAHORINA (INFOTEH)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFOTEH57020.2023.10094171","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Dueling Double Deep Q-Network for indoor exploration in factory environments with an unmanned aircraft system
Abstract
Although factory planning is widely recognized as a way to significantly enhance manufacturing productivity, the associated costs in terms of time and money can be prohibitive. In this paper, we present a solution to this challenge through the development of a Software-in-the-loop (SITL) framework that leverages an Unmanned Aircraft System (UAS) in an autonomous capacity. The framework incorporates simulated sensors, a UAS, and a virtual factory environment. Moreover, we propose a Deep Reinforcement Learning (DRL) agent that is capable of collision avoidance and exploration using the Dueling Double Deep Q-Network (3DQN) with prioritized experience replay.
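To illustrate the two ideas named in the abstract, the sketch below shows a dueling Q-network head and a double-Q target computation in PyTorch. It is a minimal, illustrative example only: the network sizes, state/action dimensions, and hyperparameters are assumptions, prioritized experience replay is omitted, and none of this is taken from the authors' implementation.

```python
# Minimal sketch (not the paper's code): dueling Q-network head plus a
# double-DQN target. All dimensions and hyperparameters are illustrative.
import torch
import torch.nn as nn


class DuelingQNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v = self.value(h)
        a = self.advantage(h)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)


def double_q_target(online: DuelingQNet, target: DuelingQNet,
                    reward, next_state, done, gamma: float = 0.99):
    # Double DQN: the online network selects the greedy next action,
    # the target network evaluates that action's value.
    with torch.no_grad():
        best_action = online(next_state).argmax(dim=1, keepdim=True)
        next_q = target(next_state).gather(1, best_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q
```

In the paper's setting, the state would come from the simulated sensors in the SITL framework and the actions would be discrete UAS motion commands; a prioritized replay buffer would additionally weight transitions by their temporal-difference error when sampling minibatches.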