Real-time object detection using a domain-based transfer learning method for resource-constrained edge devices
Dongkyu Kim, Seokjun Lee, Nak-Myoung Sung, Chungjae Choe
2023 International Conference on Artificial Intelligence in Information and Communication (ICAIIC). Published 2023-02-20. DOI: 10.1109/ICAIIC57133.2023.10067064
This paper presents a domain-based transfer learning method for deep learning-based object detection models that enables real-time computation on resource-constrained edge devices. Object detection is an essential task for intelligent platforms (e.g., drones, robots, and autonomous vehicles). However, edge devices cannot afford to run large object detection models due to insufficient resources. Although a compressed deep learning model increases inference speed, its accuracy can deteriorate significantly. In this paper, we propose an accurate object detection method that achieves real-time computation on edge devices. Our method aims to prune detection outputs that are marginal for a given application domain (e.g., city, park, or factory). We identify the crucial object classes for a specific domain (e.g., pedestrians, cars, benches) and adopt transfer learning in which the model is trained solely on the selected classes. Such an approach improves detection accuracy even for a compressed deep learning model such as the tiny variants of the YOLO (you only look once) framework. Our experiments validate that the method enables YOLOv7-tiny to achieve detection accuracy comparable to a YOLOv7 model despite having 83% fewer parameters than the original model. In addition, we confirm that our method achieves 389% faster inference than YOLOv7 on resource-constrained edge devices (i.e., NVIDIA Jetson boards).
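The class-selection step described above can be illustrated with a minimal sketch. The class names, domain sets, and helper function below are illustrative assumptions, not taken from the paper: the idea is simply to filter a YOLO-format label set (`class cx cy w h` per line) down to the classes crucial for a target domain and remap their IDs to a dense range, so a compressed detector such as YOLOv7-tiny can then be fine-tuned on the reduced label set.

```python
# Sketch of domain-based class selection for transfer learning.
# Classes and domain assignments are hypothetical examples.

ALL_CLASSES = ["pedestrian", "car", "bench", "dog", "truck", "bicycle"]
DOMAIN_CLASSES = {
    "city": ["pedestrian", "car", "truck", "bicycle"],
    "park": ["pedestrian", "bench", "dog", "bicycle"],
}

def filter_labels(label_lines, domain):
    """Keep only annotations for one domain's classes, remapping
    class IDs to a dense 0..K-1 range for fine-tuning."""
    kept = DOMAIN_CLASSES[domain]
    remap = {ALL_CLASSES.index(name): new_id
             for new_id, name in enumerate(kept)}
    out = []
    for line in label_lines:
        cls_id, *box = line.split()
        if int(cls_id) in remap:
            out.append(" ".join([str(remap[int(cls_id)])] + box))
    return out

labels = [
    "0 0.5 0.5 0.2 0.4",  # pedestrian
    "2 0.3 0.6 0.1 0.1",  # bench
    "4 0.7 0.4 0.3 0.2",  # truck
]
print(filter_labels(labels, "city"))
# → ['0 0.5 0.5 0.2 0.4', '2 0.7 0.4 0.3 0.2']  (bench dropped, truck remapped)
```

A compact model fine-tuned on such a reduced label set only has to discriminate among the domain's crucial classes, which is consistent with the accuracy recovery the abstract reports for YOLOv7-tiny.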