Training Deep Neural Networks with Synthetic Data for Off-Road Vehicle Detection
Pub Date: 2020-10-13 | DOI: 10.23919/ICCAS50221.2020.9268430 | ICCAS 2020, pp. 427-431
Eunchong Kim, Kanghyun Park, Hunmin Yang, Se-Yoon Oh
In tandem with advances in deep learning, vehicle detection using convolutional neural networks has become mainstream in autonomous driving and ADAS. Many real-image datasets have been produced to support this, despite the painstaking work of data collection and ground-truth annotation. Virtually generated images have been introduced as an alternative; they make collection and annotation much easier, but bring a different kind of problem known as the 'domain gap'. Off-road vehicle detection is a case in point: it is difficult to produce an off-road image dataset either by collecting real images or by synthesizing images that sidestep the domain gap. In this paper, focusing on off-road army tank detection, we introduce a synthetic image generator that applies domain randomization to off-road scene context. We train a deep learning model on the synthetic dataset while reusing low-level features from a feature extractor pre-trained on a real common-object dataset. The proposed method improves model accuracy to 0.86 AP@0.5IOU, outperforming a naive domain randomization approach.
{"title":"Training Deep Neural Networks with Synthetic Data for Off-Road Vehicle Detection","authors":"Eunchong Kim, Kanghyun Park, Hunmin Yang, Se-Yoon Oh","doi":"10.23919/ICCAS50221.2020.9268430","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268430","url":null,"abstract":"In tandem with growing deep learning technology, vehicle detection using convolutional neural network is now become a mainstream in the field of autonomous driving and ADAS. Taking advantage of this, lots of real image datasets have been produced in spite of the painstaking work of data collection and ground truth annotation. As an alternative, virtually generated images are introduced. This makes data collection and annotation much easier, but a different kind of problem called ‘domain gap’ is announced. For instance, in off-road vehicle detection, there is a difficulty in producing off-road image dataset not only by collecting real images, but also by synthesizing images sidestepping the domain gap. In this paper, focusing on the off-road army tank detection, we introduce a synthetic image generator using domain randomization on off-road scene context. We train a deep learning model on synthetic dataset using low level features form feature extractor pre-trained on real common object dataset. With proposed method, we improve the model accuracy to 0.86 AP@0.5IOU, outperforming naïve domain randomization approach.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"28 1","pages":"427-431"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90048676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research on jamming strategy of surface-type infrared decoy against by infrared-guided simulation
Pub Date: 2020-10-13 | DOI: 10.23919/ICCAS50221.2020.9268227 | ICCAS 2020, pp. 845-849
W. Sun, M. Yao
When surface-type infrared decoys are employed, a reasonable and effective jamming strategy is the key to successfully jamming an infrared-guided missile. To this end, we derive a jamming strategy for surface-type infrared decoys against infrared-guided missiles through theoretical analysis and simulation. This paper introduces a simulation model that divides the attack process into pre-lock and post-lock stages. Using hit rate as the measure of success, the optimal jamming strategy for both stages is obtained, including the optimal decoy release time, the release interval, and the maneuver the carrier aircraft should perform.
{"title":"Research on jamming strategy of surface-type infrared decoy against by infrared-guided simulation","authors":"W. Sun, M. Yao","doi":"10.23919/ICCAS50221.2020.9268227","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268227","url":null,"abstract":"In the use of surface-type infrared decoys, a reasonable and effective jamming strategy is the key to successfully jam the infrared-guided missile. To solve this problem, a jamming strategy of the surface-type infrared decoys against the infrared-guided missile is obtained by doing theoretical analysis and simulation. This paper introducees a simulation model that the attack process is divided the attack process into pre-lock and post-lock. Use the hit rate to evaluate the success rate, the optimal jamming strategy under two stages is obtained, including the optimal release time of decoys, release interval, and the maneuvering action that should be taken by the carrier aircraft.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"1 1","pages":"845-849"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83088481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intelligent task robot system based on process recipe extraction from product 3D modeling file
Pub Date: 2020-10-13 | DOI: 10.23919/ICCAS50221.2020.9268427 | ICCAS 2020, pp. 856-859
Hyonyoung Han, Heechul Bae, Hyunchul Kang, Jiyon Son, H. Kim
This study introduces an intelligent task robot system based on process recipe extraction from standard 3D model files. In small-quantity batch production and mixed-flow manufacturing, much time is spent on process planning and device control, such as path planning for a robot system. If these processes could be automated, mixed-flow production of various products would run efficiently. This paper proposes a product registration subsystem based on an automated process-recipe extraction module, together with an intelligent assembly task robot subsystem based on visual servoing. The recipe module extracts the list of parts and each part's size and position from a standard 3D model file (STEP) and analyzes the product's part structure. The extracted product data is stored in the recipe knowledge base in a recipe format, together with a plan-view image of each part. The robot system consists of a real-time part recognition module, a part scheduling module, and a motion planner module. The part recognition module identifies parts by matching real-time RGB images against the plan-view images in the knowledge base. The part scheduling module plans the sequence of parts for the task using a decision tree method. The motion planner module controls the assembly task robot according to the process recipe, depending on the task type. The performance of the system was tested with five types of sample products.
{"title":"Intelligent task robot system based on process recipe extraction from product 3D modeling file","authors":"Hyonyoung Han, Heechul Bae, Hyunchul Kang, Jiyon Son, H. Kim","doi":"10.23919/ICCAS50221.2020.9268427","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268427","url":null,"abstract":"This study introduces intelligent task robot system based on process recipe extraction from standard 3D model files. In small quantity batch production and mixed flow manufacturing condition, lots of time is spent on process planning and device control such as path planning in a robot system. If these processes could be automated, mixed flow production of various products will be working efficiently. This paper suggests automated process recipe extraction module based product registration subsystem and visual servoing based intelligent assembly task robot subsystem. The recipe module extracts list of parts, each part size and position from standard 3D model file (STEP) and analyzes the structure of product between parts. The extracted product data is stored in the recipe knowledge base as a recipe format and also plan-view image of each part. Robot system consists of real-time part recognition module, part scheduling module and motion planner module. The part recognition module identifies parts by matching real-time RGB image and plan-view image in knowledge base. The part scheduling module plan the sequence of part for task using a decision tree method. The motion planner module controls assembly task robot according to process recipe depending on task type. Performance of the system was tested with five types of sample products.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"74 1","pages":"856-859"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83218781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Reinforcement Learning-based ROS-Controlled RC Car for Autonomous Path Exploration in the Unknown Environment
Pub Date: 2020-10-13 | DOI: 10.23919/ICCAS50221.2020.9268370 | ICCAS 2020, pp. 1231-1236
Sabir Hossain, Oualid Doukhi, Yeon-ho Jo, D. Lee
Deep reinforcement learning has become a front-runner for solving problems in robot navigation and obstacle avoidance. This paper presents a LiDAR-equipped RC car trained in the GAZEBO environment using deep reinforcement learning. Reshaped LiDAR data serves as the input to the training network's neural architecture, and we present a method to convert the LiDAR data into a 2D grid map for that input. We report test results from the trained network in different GAZEBO environments and describe the development of the embedded RC car's hardware and software: the hardware comprises a Jetson AGX Xavier, a Teensyduino, and a Hokuyo LiDAR; the software comprises ROS and Arduino C. Finally, we present real-world test results using the model generated from the training simulation.
{"title":"Deep Reinforcement Learning-based ROS-Controlled RC Car for Autonomous Path Exploration in the Unknown Environment","authors":"Sabir Hossain, Oualid Doukhi, Yeon-ho Jo, D. Lee","doi":"10.23919/ICCAS50221.2020.9268370","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268370","url":null,"abstract":"Nowadays, Deep reinforcement learning has become the front runner to solve problems in the field of robot navigation and avoidance. This paper presents a LiDAR-equipped RC car trained in the GAZEBO environment using the deep reinforcement learning method. This paper uses reshaped LiDAR data as the data input of the neural architecture of the training network. This paper also presents a unique way to convert the LiDAR data into a 2D grid map for the input of training neural architecture. It also presents the test result from the training network in different GAZEBO environment. It also shows the development of hardware and software systems of embedded RC car. The hardware system includes-Jetson AGX Xavier, teensyduino and Hokuyo LiDAR; the software system includes-ROS and Arduino C. Finally, this paper presents the test result in the real world using the model generated from training simulation.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"8 1","pages":"1231-1236"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88762419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UAV Engine Control Monitoring System based on CAN Network
Pub Date: 2020-10-13 | DOI: 10.23919/ICCAS50221.2020.9268244 | ICCAS 2020, pp. 820-823
Hyun Lee
This paper proposes a UAV (Unmanned Aerial Vehicle) engine control monitoring system that uses a dynamic ID application and a scheduling method for CAN network sensors, which collect the temperature, pressure, vibration, and fuel level of the UAV engine over the network. The aim is an effective monitoring method for the UAV engine control system, implemented on a CAN (Controller Area Network) bus. Because the monitoring system requires many kinds of information, numerous sensor nodes are distributed across several locations. The dynamic ID mechanism of the CAN protocol ensures effective utilization of the network bandwidth, with all nodes sending data to the bus according to the priority of their node identifiers.
Clusters in multi-leader directed consensus networks
Pub Date: 2020-10-13 | DOI: 10.23919/ICCAS50221.2020.9268254 | ICCAS 2020, pp. 379-384
Jeong-Min Ma, Hyung-Gohn Lee, H. Ahn, K. Moore
In a directed graph, a leader is a node with no incoming edges. If a directed consensus network has multiple leaders, the system will not reach consensus; instead, the nodes organize into clusters, groups of nodes that converge to the same value. These clusters do not depend on initial conditions or edge weights. In this paper we study clusters in multi-leader directed consensus networks and, specifically, present an algorithm that classifies all clusters in the graph.
{"title":"Clusters in multi-leader directed consensus networks","authors":"Jeong-Min Ma, Hyung-Gohn Lee, H. Ahn, K. Moore","doi":"10.23919/ICCAS50221.2020.9268254","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268254","url":null,"abstract":"In a directed graph a leader is a node that has no in-degree edges. If there are multiple leaders in a directed consensus network, the system will not reach consensus. In such systems the nodes will organize into clusters or groups of node that converge to the same value. These clusters are not dependent on initial conditions or edge weights. In this paper we study clusters in multi-leader directed consensus networks. Specifically, we present an algorithm to classify all clusters in the graph.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"36 1","pages":"379-384"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84711366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile service robot multi-floor navigation using visual detection and recognition of elevator features (ICCAS 2020)
Pub Date: 2020-10-13 | DOI: 10.23919/ICCAS50221.2020.9268202 | ICCAS 2020, pp. 982-985
Eun-ho Kim, Sanghyeon Bae, T. Kuc
Multi-floor navigation is a challenging problem for indoor mobile service robots, especially when moving between floors and entering and leaving an elevator. In this paper we propose methods for detecting and recognizing elevator features and for navigating the robot into and out of the elevator, including a deep-learning-based image recognition system that identifies the current floor from the elevator display, which the robot uses to determine whether the target floor has been reached. Our approach is two-fold. First, we extract elevator button coordinates using traditional feature extraction techniques such as adaptive thresholding, blob detection, and template matching. Second, we apply DL-based recognition with YOLO9000 to the elevator's floor display panel. Our analysis of these methods shows that the classical feature extractor outperforms the DL-based recognition system even under difficult conditions such as light reflections and motion blur, and proves to be the more robust system for detection and recognition.
{"title":"Mobile service robot multi-floor navigation using visual detection and recognition of elevator features(ICCAS 2020)","authors":"Eun-ho Kim, Sanghyeon Bae, T. Kuc","doi":"10.23919/ICCAS50221.2020.9268202","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268202","url":null,"abstract":"Mobile service robot multi-floor navigation is a challenging issue for in indoor robot navigation, especially when moving between floors, entering and leaving elevator. So, in this paper we propose detection and recognition method of elevator features and robot navigation for entering and leaving the elevator. Thus, in this paper we propose a method which uses deep learning. Based image recognition system to identify particular floor from an elevator display. Using this method robot determines whether particular floor has reached. We proposed two-fold methods to accomplish our goal. On the first method we performed the extraction of elevator button coordinates through traditional feature extractor such as adaptive thresholding, blob detection, template matching. The next part of our approach is by using DL- based recognition, done by YOLO 9000 on the floor count display panel of the elevator. From our analysis of these above mentioned methods we discovered that the feature extractor out-performs the DL-based recognition system even in the tricky conditions. Such as lighter reflection, motion blur etc. and proves to be more robust system for detection and recognition.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"71 1","pages":"982-985"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74495992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image Registration Method from LDCT Image Using FFD Algorithm
Pub Date: 2020-10-13 | DOI: 10.23919/ICCAS50221.2020.9268267 | ICCAS 2020, pp. 411-414
Chika Tanaka, Tohru Kamiya, T. Aoki
The number of lung cancer deaths has been increasing in recent years. In Japan, CT (Computed Tomography) equipment is used for visual screening, but reading the huge number of CT images is a burden on doctors, so CAD (Computer Aided Diagnosis) systems have been introduced in the medical field. In CT screening, LDCT (Low Dose Computed Tomography) is desirable to limit radiation exposure, but the image quality degradation caused by the lower dose is a further problem, so a CAD system that enables accurate diagnosis even at low doses is needed. In this paper, we therefore propose a registration method for generating temporal subtraction images that can be applied to low-quality chest LDCT images. Our approach has two major components. First, global matching based on the center of gravity is performed on the preprocessed images, and a region of interest (ROI) is set. Second, local matching by free-form deformation (FFD) based on B-splines is performed on the ROI as the final registration. Applied to LDCT images of 6 cases, the proposed method reduces calculation time by 57.29%, the half-value width by 26.1%, and the histogram sum of the temporal subtraction images by 29.6% compared with the conventional method.
{"title":"Image Registration Method from LDCT Image Using FFD Algorithm","authors":"Chika Tanaka, Tohru Kamiya, T. Aoki","doi":"10.23919/ICCAS50221.2020.9268267","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268267","url":null,"abstract":"In recent years, the number of lung cancer deaths has been increasing. In Japan, CT (Computed Tomography) equipment is used for its visual screening. However, there is a problem that seeing huge number of images taken by CT is a burden on the doctor. To overcome this problem, the CAD (Computer Aided Diagnosis) system is introduced on medical fields. In CT screening, LDCT (Low Dose Computed Tomography) screening is desirable considering radiation exposure. However, the image quality which is caused the lower the dose is another problem on the screening. A CAD system that enables accurate diagnosis even at low doses is needed. Therefore, in this paper, we propose a registration method for generating temporal subtraction images that can be applied to low-quality chest LDCT images. Our approach consists of two major components. Firstly, global matching based on the center of gravity is performed on the preprocessed images, and the region of interest (ROI) is set. Secondly, local matching by free-form deformation (FFD) based on B-Spline is performed on the ROI as final registration. In this paper, we apply our proposed method to LDCT images of 6 cases, and reduce 57.29% in the calculation time, 26.1% in the half value width, and 29.6% in the sum of histogram of temporal subtraction images comparing with the conventional method.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"7 1","pages":"411-414"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84800576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Body Trajectory Generation Using Quadratic Programming in Bipedal Robots
Pub Date: 2020-10-13 | DOI: 10.23919/ICCAS50221.2020.9268204 | ICCAS 2020, pp. 251-257
Min InJoon, Yoo DongHa, Ahn MinSung, Han Jeakweon
The preview control walking method commonly used in bipedal walking takes jerk and ZMP error as cost functions to generate the body trajectory. Since the two inputs are interdependent, the optimization that forms the body trajectory trades them off through weight factors, and the resulting trajectory often shows rapid velocity changes depending on those weights, which in turn demands a torque-controlled actuator. To overcome this problem, we apply a method used on quadrupeds to a bipedal robot. Because it targets only minimizing the acceleration of the body trajectory, the body does not require rapid speed changes, and the method eliminates the computation that preview control spends on the preview horizon. However, directly applying a quadruped walking method, which assumes a support polygon much larger than a biped's, can degrade stability. We therefore enforce ZMP constraints suited to the relatively small support polygon of a bipedal robot. In this paper we propose a body trajectory generation method that guarantees real-time stability while minimizing acceleration.
{"title":"Body Trajectory Generation Using Quadratic Programming in Bipedal Robots","authors":"Min. InJoon, Yoo. DongHa, Ahn. MinSung, Han. Jeakweon","doi":"10.23919/ICCAS50221.2020.9268204","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268204","url":null,"abstract":"The preview control walking method, which is commonly used in bipedal walking, requires jerk and ZMP errors as cost functions to generate body trajectory. Since the two inputs are dependent, optimization to form body trajectory is performed simultaneously with weight factors. Therefore, it is often seen that the resulting body trajectory rapidly changes on velocity according to the weight factors. This eventually requires a torque actuator in order to perform such action. In order to overcome this problem, we apply a method used on a quadruped to a bipedal robot. Since, it only targets to minimize the acceleration of the body trajectory, the body does not require rapid speed change. Also, this method can eliminate the computation time needed for preview control referred to preview time. When applying a quadruped robots walking method that has a relatively large support polygon than that of a bipedal robot, stability deterioration may occur. Therefore, we approached to secure ZMP constraints with relatively small support polygon area as within bipedal robots. In this paper we propose a body trajectory generation method that guarantees real-time stability while minimizing acceleration.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"25 1","pages":"251-257"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80172221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Verification method to improve the efficiency of traffic survey
Pub Date: 2020-10-13 | DOI: 10.23919/ICCAS50221.2020.9268311 | ICCAS 2020, pp. 339-343
Mi-Seon Kang, Pyong-Kun Kim, Kil-Taek Lim
A road traffic volume survey determines the number and types of vehicles passing a specific point during a certain period. Traditionally, a person watches camera footage and counts and classifies the vehicles by eye, which costs considerable manpower and money. Automated algorithms have recently been widely attempted, but their accuracy falls short of the manual method. To address these problems, we propose a method to automate road traffic volume surveys together with a new method to verify the results. The proposed method extracts vehicle counts and types from images using deep learning, analyzes the results, and automatically presents the user with the candidates most likely to be errors, so that highly reliable traffic survey information can be generated efficiently. We tested the method on a dataset collected by an actual road traffic survey company. The experiments show that vehicle classification and routes can be verified simply and quickly with the proposed method, which not only reduces the survey effort and cost but also increases reliability through more accurate results.
{"title":"Verification method to improve the efficiency of traffic survey","authors":"Mi-Seon Kang, Pyong-Kun Kim, Kil-Taek Lim","doi":"10.23919/ICCAS50221.2020.9268311","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268311","url":null,"abstract":"Road traffic volume survey is a survey to determine the number and type of vehicles passing at a specific point for a certain period of time. Previously, a method of classifying the number of vehicles and vehicle types has been used while a person sees an image photographed using a camera with the naked eye, but this has a disadvantage in that a lot of manpower and cost are incurred. Recently, a method of applying an automated algorithm has been widely attempted, but has a disadvantage in that the accuracy is inferior to the existing method performed by manpower. To address these problems, we propose a method to automate road traffic volume surveys and a new method to verify the results. The proposed method extracts the number of vehicles and vehicle types from an image using deep learning, analyzes the results, and automatically informs the user of candidates with a high probability of error, so that highly reliable traffic volume survey information can be efficiently generated. The performance of the proposed method is tested using a data set collected by an actual road traffic survey company. The experiment proved that it is possible to verify the vehicle classification and route simply and quickly using the proposed method. The proposed method can not only reduce the investigation process and cost, but also increase the reliability due to more accurate results.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"1 1","pages":"339-343"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83422067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}