
Latest publications from the 2020 20th International Conference on Control, Automation and Systems (ICCAS)

Training Deep Neural Networks with Synthetic Data for Off-Road Vehicle Detection
Pub Date : 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268430
Eunchong Kim, Kanghyun Park, Hunmin Yang, Se-Yoon Oh
In tandem with the growth of deep learning technology, vehicle detection using convolutional neural networks has become mainstream in the field of autonomous driving and ADAS. Taking advantage of this, many real-image datasets have been produced despite the painstaking work of data collection and ground-truth annotation. As an alternative, virtually generated images have been introduced. This makes data collection and annotation much easier, but raises a different kind of problem known as the 'domain gap'. In off-road vehicle detection, for instance, it is difficult to produce an off-road image dataset, whether by collecting real images or by synthesizing images that sidestep the domain gap. In this paper, focusing on off-road army tank detection, we introduce a synthetic image generator that applies domain randomization to the off-road scene context. We train a deep learning model on the synthetic dataset using low-level features from a feature extractor pre-trained on a real common-object dataset. With the proposed method, we improve model accuracy to 0.86 AP@0.5IOU, outperforming a naïve domain randomization approach.
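The training recipe this abstract describes, reusing low-level features from a detector pre-trained on real common-object data while fitting the rest on synthetic, domain-randomized images, can be illustrated with a minimal PyTorch-style sketch. The backbone split point and the (commented) data loader are assumptions for illustration, not details from the paper.

```python
# Minimal sketch: freeze low-level features of a detector pre-trained on real
# data and train the remaining layers on a synthetic (domain-randomized) set.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Freeze the early (low-level) backbone stages; the rest is trained on synthetic data.
for name, param in model.named_parameters():
    if name.startswith(("backbone.body.conv1", "backbone.body.layer1")):
        param.requires_grad = False

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9
)

# Training loop over synthetic images (loader is a hypothetical placeholder that
# yields (images, targets) in the torchvision detection format):
# for images, targets in loader:
#     losses = model(images, targets)      # dict of detection losses in train mode
#     loss = sum(losses.values())
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```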
{"title":"Training Deep Neural Networks with Synthetic Data for Off-Road Vehicle Detection","authors":"Eunchong Kim, Kanghyun Park, Hunmin Yang, Se-Yoon Oh","doi":"10.23919/ICCAS50221.2020.9268430","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268430","url":null,"abstract":"In tandem with growing deep learning technology, vehicle detection using convolutional neural network is now become a mainstream in the field of autonomous driving and ADAS. Taking advantage of this, lots of real image datasets have been produced in spite of the painstaking work of data collection and ground truth annotation. As an alternative, virtually generated images are introduced. This makes data collection and annotation much easier, but a different kind of problem called ‘domain gap’ is announced. For instance, in off-road vehicle detection, there is a difficulty in producing off-road image dataset not only by collecting real images, but also by synthesizing images sidestepping the domain gap. In this paper, focusing on the off-road army tank detection, we introduce a synthetic image generator using domain randomization on off-road scene context. We train a deep learning model on synthetic dataset using low level features form feature extractor pre-trained on real common object dataset. With proposed method, we improve the model accuracy to 0.86 AP@0.5IOU, outperforming naïve domain randomization approach.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"28 1","pages":"427-431"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90048676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Research on jamming strategy of surface-type infrared decoy against by infrared-guided simulation
Pub Date : 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268227
W. Sun, M. Yao
In the use of surface-type infrared decoys, a reasonable and effective jamming strategy is the key to successfully jamming an infrared-guided missile. To solve this problem, a jamming strategy for surface-type infrared decoys against infrared-guided missiles is obtained through theoretical analysis and simulation. This paper introduces a simulation model that divides the attack process into pre-lock and post-lock phases. Using the hit rate to evaluate the success rate, the optimal jamming strategy for the two stages is obtained, including the optimal decoy release time, the release interval, and the maneuvering action that should be taken by the carrier aircraft.
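The evaluation idea, scoring candidate decoy release timings by the resulting missile hit rate over repeated simulated engagements, can be sketched as a simple Monte Carlo grid search. The `simulate_engagement` model and the candidate values below are made-up placeholders, not the paper's simulation.

```python
# Minimal sketch: grid-search decoy release time and interval, scoring each
# candidate by hit rate over repeated runs of a placeholder engagement model.
import random

def simulate_engagement(release_time, release_interval):
    """Hypothetical stand-in for the infrared-guided engagement simulation.
    Returns True if the missile hits the carrier aircraft."""
    p_hit = 0.5 - 0.03 * release_interval + 0.02 * abs(release_time - 3.0)
    return random.random() < max(0.05, min(0.95, p_hit))

def hit_rate(release_time, release_interval, runs=1000):
    hits = sum(simulate_engagement(release_time, release_interval) for _ in range(runs))
    return hits / runs

# Pick the candidate strategy that minimizes the simulated hit rate.
best = min(
    ((t, dt) for t in [1.0, 2.0, 3.0, 4.0, 5.0] for dt in [0.5, 1.0, 1.5, 2.0]),
    key=lambda c: hit_rate(*c),
)
print("lowest hit rate with release time %.1f s, interval %.1f s" % best)
```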
{"title":"Research on jamming strategy of surface-type infrared decoy against by infrared-guided simulation","authors":"W. Sun, M. Yao","doi":"10.23919/ICCAS50221.2020.9268227","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268227","url":null,"abstract":"In the use of surface-type infrared decoys, a reasonable and effective jamming strategy is the key to successfully jam the infrared-guided missile. To solve this problem, a jamming strategy of the surface-type infrared decoys against the infrared-guided missile is obtained by doing theoretical analysis and simulation. This paper introducees a simulation model that the attack process is divided the attack process into pre-lock and post-lock. Use the hit rate to evaluate the success rate, the optimal jamming strategy under two stages is obtained, including the optimal release time of decoys, release interval, and the maneuvering action that should be taken by the carrier aircraft.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"1 1","pages":"845-849"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83088481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Intelligent task robot system based on process recipe extraction from product 3D modeling file
Pub Date : 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268427
Hyonyoung Han, Heechul Bae, Hyunchul Kang, Jiyon Son, H. Kim
This study introduces an intelligent task robot system based on process recipe extraction from standard 3D model files. Under small-quantity batch production and mixed-flow manufacturing conditions, a lot of time is spent on process planning and device control, such as path planning in a robot system. If these processes were automated, mixed-flow production of various products could run efficiently. This paper proposes a product registration subsystem based on an automated process-recipe extraction module and an intelligent assembly task robot subsystem based on visual servoing. The recipe module extracts the list of parts, along with each part's size and position, from a standard 3D model file (STEP) and analyzes the structure of the product between parts. The extracted product data is stored in the recipe knowledge base in a recipe format, together with a plan-view image of each part. The robot system consists of a real-time part recognition module, a part scheduling module, and a motion planner module. The part recognition module identifies parts by matching real-time RGB images against the plan-view images in the knowledge base. The part scheduling module plans the sequence of parts for a task using a decision-tree method. The motion planner module controls the assembly task robot according to the process recipe for the given task type. The performance of the system was tested with five types of sample products.
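The recipe format this abstract describes, a parts list with sizes, positions, and plan-view image references plus the structure between parts, might be represented with simple data classes as in the sketch below. The `Part`/`Recipe` types and field names are assumptions for illustration, not the paper's schema.

```python
# Minimal sketch of a recipe knowledge-base entry: parts extracted from a STEP
# file plus the assembly structure between them. Names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Part:
    name: str
    size: Tuple[float, float, float]       # bounding-box size (x, y, z) in mm
    position: Tuple[float, float, float]   # position in the product frame
    plan_view_image: str                   # path to the rendered plan-view image

@dataclass
class Recipe:
    product_id: str
    parts: List[Part] = field(default_factory=list)
    structure: List[Tuple[str, str]] = field(default_factory=list)  # (parent, child) pairs

recipe = Recipe(
    product_id="sample-01",
    parts=[Part("base_plate", (120.0, 80.0, 10.0), (0.0, 0.0, 0.0), "base_plate.png")],
    structure=[("base_plate", "bracket_left")],
)
```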
{"title":"Intelligent task robot system based on process recipe extraction from product 3D modeling file","authors":"Hyonyoung Han, Heechul Bae, Hyunchul Kang, Jiyon Son, H. Kim","doi":"10.23919/ICCAS50221.2020.9268427","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268427","url":null,"abstract":"This study introduces intelligent task robot system based on process recipe extraction from standard 3D model files. In small quantity batch production and mixed flow manufacturing condition, lots of time is spent on process planning and device control such as path planning in a robot system. If these processes could be automated, mixed flow production of various products will be working efficiently. This paper suggests automated process recipe extraction module based product registration subsystem and visual servoing based intelligent assembly task robot subsystem. The recipe module extracts list of parts, each part size and position from standard 3D model file (STEP) and analyzes the structure of product between parts. The extracted product data is stored in the recipe knowledge base as a recipe format and also plan-view image of each part. Robot system consists of real-time part recognition module, part scheduling module and motion planner module. The part recognition module identifies parts by matching real-time RGB image and plan-view image in knowledge base. The part scheduling module plan the sequence of part for task using a decision tree method. The motion planner module controls assembly task robot according to process recipe depending on task type. Performance of the system was tested with five types of sample products.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"74 1","pages":"856-859"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83218781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Deep Reinforcement Learning-based ROS-Controlled RC Car for Autonomous Path Exploration in the Unknown Environment
Pub Date : 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268370
Sabir Hossain, Oualid Doukhi, Yeon-ho Jo, D. Lee
Deep reinforcement learning has become the front-runner for solving problems in robot navigation and obstacle avoidance. This paper presents a LiDAR-equipped RC car trained in the GAZEBO environment using a deep reinforcement learning method. Reshaped LiDAR data is used as the input to the neural architecture of the training network, and the paper presents a unique way to convert the LiDAR data into a 2D grid map for that input. It also presents test results from the trained network in different GAZEBO environments and describes the development of the hardware and software systems of the embedded RC car: the hardware system includes a Jetson AGX Xavier, a Teensyduino, and a Hokuyo LiDAR, while the software system includes ROS and Arduino C. Finally, the paper presents real-world test results using the model generated from the training simulation.
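The LiDAR-reshaping step mentioned above, turning a 1D scan of ranges into a 2D grid image for the network input, can be sketched with NumPy as below. The grid size, resolution, and maximum range are assumed values, not the paper's settings.

```python
# Minimal sketch: project a 1D LiDAR scan (ranges at evenly spaced bearings)
# into a 2D occupancy grid centred on the robot, suitable as a network input.
import numpy as np

def scan_to_grid(ranges, angle_min, angle_increment,
                 grid_size=64, resolution=0.1, max_range=3.0):
    grid = np.zeros((grid_size, grid_size), dtype=np.float32)
    centre = grid_size // 2
    for i, r in enumerate(ranges):
        if not np.isfinite(r) or r > max_range:
            continue                               # drop invalid / out-of-range beams
        angle = angle_min + i * angle_increment
        x, y = r * np.cos(angle), r * np.sin(angle)
        col = centre + int(round(x / resolution))
        row = centre - int(round(y / resolution))
        if 0 <= row < grid_size and 0 <= col < grid_size:
            grid[row, col] = 1.0                   # mark the cell hit by this beam
    return grid

# Example: a fake 360-beam scan with all ranges at 1.5 m
grid = scan_to_grid(np.full(360, 1.5), angle_min=-np.pi, angle_increment=2 * np.pi / 360)
```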
{"title":"Deep Reinforcement Learning-based ROS-Controlled RC Car for Autonomous Path Exploration in the Unknown Environment","authors":"Sabir Hossain, Oualid Doukhi, Yeon-ho Jo, D. Lee","doi":"10.23919/ICCAS50221.2020.9268370","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268370","url":null,"abstract":"Nowadays, Deep reinforcement learning has become the front runner to solve problems in the field of robot navigation and avoidance. This paper presents a LiDAR-equipped RC car trained in the GAZEBO environment using the deep reinforcement learning method. This paper uses reshaped LiDAR data as the data input of the neural architecture of the training network. This paper also presents a unique way to convert the LiDAR data into a 2D grid map for the input of training neural architecture. It also presents the test result from the training network in different GAZEBO environment. It also shows the development of hardware and software systems of embedded RC car. The hardware system includes-Jetson AGX Xavier, teensyduino and Hokuyo LiDAR; the software system includes-ROS and Arduino C. Finally, this paper presents the test result in the real world using the model generated from training simulation.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"8 1","pages":"1231-1236"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88762419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
UAV Engine Control Monitoring System based on CAN Network
Pub Date : 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268244
Hyun Lee
This paper proposes a UAV (Unmanned Aerial Vehicle) engine control monitoring system that uses dynamic ID assignment and a scheduling method for CAN network sensors, which collect the temperature, pressure, vibration, and fuel level of the UAV engine over the network. The aim is to develop an effective monitoring method for the UAV engine control system, implemented on a CAN (Controller Area Network) network. Because the UAV engine control monitoring system requires various kinds of information, many sensor nodes are distributed across several different places. The dynamic ID application mechanism of the CAN protocol ensures effective utilization of the network bandwidth, in which all nodes send data to the bus according to the priority of their node identifiers.
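The bandwidth-sharing rule mentioned above, where transmission order follows identifier priority (lower CAN identifiers win arbitration), can be illustrated with a small software simulation. The node IDs and payloads are invented for the example; this is not the paper's scheduler.

```python
# Minimal sketch: simulate CAN-style arbitration, where the pending frame with
# the lowest arbitration identifier (highest priority) is transmitted first.
import heapq

# (arbitration_id, payload) pairs; IDs chosen for illustration only.
pending = [
    (0x120, "engine temperature"),
    (0x090, "oil pressure"),
    (0x200, "fuel level"),
    (0x100, "vibration"),
]

bus_queue = list(pending)
heapq.heapify(bus_queue)                 # min-heap keyed on arbitration_id

while bus_queue:
    arb_id, payload = heapq.heappop(bus_queue)
    print(f"TX id=0x{arb_id:03X}: {payload}")
# Transmission order: oil pressure, vibration, engine temperature, fuel level
```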
{"title":"UAV Engine Control Monitoring System based on CAN Network","authors":"Hyun Lee","doi":"10.23919/ICCAS50221.2020.9268244","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268244","url":null,"abstract":"This paper proposes UAV (Unmanned Aerial Vehicle) engine control monitoring system using a dynamic ID application and a scheduling method of CAN network sensors which collect the temperatures, pressure, vibration, Fuel level of UAV engine through the network. This paper aims at developing an effective monitoring method for UAV engine control system, which is implemented based upon CAN (Controller Area Network) network. As the UAV engine control monitoring system requires various kinds of information, a lot of sensor nodes are distributed to several different places. The dynamic application mechanism of CAN protocol ensures the effective utilization of the bandwidth of the network, in which all nodes are sending the data to the bus according to the priority of node identifiers.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"6 1","pages":"820-823"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87363322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Clusters in multi-leader directed consensus networks
Pub Date : 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268254
Jeong-Min Ma, Hyung-Gohn Lee, H. Ahn, K. Moore
In a directed graph, a leader is a node with no incoming edges. If there are multiple leaders in a directed consensus network, the system will not reach consensus. In such systems the nodes organize into clusters, i.e., groups of nodes that converge to the same value. These clusters do not depend on the initial conditions or the edge weights. In this paper we study clusters in multi-leader directed consensus networks. Specifically, we present an algorithm to classify all clusters in the graph.
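One natural ingredient of such a classification is to identify the leaders and, for each node, determine which leaders can reach it, since a follower's limit value is driven only by the leaders that influence it. The sketch below does this grouping with a plain BFS; it is an illustrative interpretation, not necessarily the paper's algorithm.

```python
# Minimal sketch: find leaders (nodes with no incoming edges) in a directed
# graph and group the remaining nodes by the set of leaders that can reach them.
from collections import defaultdict, deque

def leader_groups(adj):
    """adj: dict mapping node -> list of out-neighbours."""
    nodes = set(adj) | {v for outs in adj.values() for v in outs}
    has_in = {v for outs in adj.values() for v in outs}
    leaders = sorted(nodes - has_in)

    reach = defaultdict(set)             # node -> set of leaders that reach it
    for leader in leaders:
        queue, seen = deque([leader]), {leader}
        while queue:
            u = queue.popleft()
            reach[u].add(leader)
            for v in adj.get(u, []):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)

    groups = defaultdict(list)           # frozenset of leaders -> member nodes
    for node in nodes:
        groups[frozenset(reach[node])].append(node)
    return leaders, dict(groups)

# Two leaders (1 and 4); node 3 is reached by both of them.
adj = {1: [2], 2: [3], 4: [5], 5: [3]}
print(leader_groups(adj))
```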
{"title":"Clusters in multi-leader directed consensus networks","authors":"Jeong-Min Ma, Hyung-Gohn Lee, H. Ahn, K. Moore","doi":"10.23919/ICCAS50221.2020.9268254","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268254","url":null,"abstract":"In a directed graph a leader is a node that has no in-degree edges. If there are multiple leaders in a directed consensus network, the system will not reach consensus. In such systems the nodes will organize into clusters or groups of node that converge to the same value. These clusters are not dependent on initial conditions or edge weights. In this paper we study clusters in multi-leader directed consensus networks. Specifically, we present an algorithm to classify all clusters in the graph.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"36 1","pages":"379-384"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84711366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Mobile service robot multi-floor navigation using visual detection and recognition of elevator features(ICCAS 2020)
Pub Date : 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268202
Eun-ho Kim, Sanghyeon Bae, T. Kuc
Multi-floor navigation is a challenging issue for indoor mobile service robots, especially when moving between floors and entering and leaving an elevator. In this paper we propose a method for detecting and recognizing elevator features and for navigating the robot into and out of the elevator. We propose a deep-learning-based image recognition system to identify the current floor from the elevator display; using it, the robot determines whether a particular floor has been reached. We take a two-fold approach to accomplish this goal. In the first part, we extract elevator button coordinates with traditional feature extractors such as adaptive thresholding, blob detection, and template matching. The second part uses DL-based recognition, performed by YOLO 9000 on the floor-count display panel of the elevator. From our analysis of these methods, we found that the feature extractor outperforms the DL-based recognition system even under tricky conditions such as light reflection and motion blur, and proves to be the more robust system for detection and recognition.
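The traditional feature-extraction branch described above, adaptive thresholding followed by template matching on the display panel, can be sketched with OpenCV as below. The image and template file names and the confidence threshold are placeholders, not values from the paper.

```python
# Minimal sketch: adaptive thresholding + template matching to locate a floor
# digit on the elevator display panel. Image and template paths are placeholders.
import cv2

display = cv2.imread("elevator_display.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("digit_3_template.png", cv2.IMREAD_GRAYSCALE)

# Binarize under uneven lighting/reflection (block size 31, offset 5).
binary = cv2.adaptiveThreshold(display, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 5)
binary_tmpl = cv2.adaptiveThreshold(template, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                    cv2.THRESH_BINARY, 31, 5)

# Slide the template over the display and keep the best match location.
result = cv2.matchTemplate(binary, binary_tmpl, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.7:                        # placeholder confidence threshold
    print("floor digit found at", max_loc, "score", round(max_val, 2))
else:
    print("no confident match on this frame")
```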
{"title":"Mobile service robot multi-floor navigation using visual detection and recognition of elevator features(ICCAS 2020)","authors":"Eun-ho Kim, Sanghyeon Bae, T. Kuc","doi":"10.23919/ICCAS50221.2020.9268202","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268202","url":null,"abstract":"Mobile service robot multi-floor navigation is a challenging issue for in indoor robot navigation, especially when moving between floors, entering and leaving elevator. So, in this paper we propose detection and recognition method of elevator features and robot navigation for entering and leaving the elevator. Thus, in this paper we propose a method which uses deep learning. Based image recognition system to identify particular floor from an elevator display. Using this method robot determines whether particular floor has reached. We proposed two-fold methods to accomplish our goal. On the first method we performed the extraction of elevator button coordinates through traditional feature extractor such as adaptive thresholding, blob detection, template matching. The next part of our approach is by using DL- based recognition, done by YOLO 9000 on the floor count display panel of the elevator. From our analysis of these above mentioned methods we discovered that the feature extractor out-performs the DL-based recognition system even in the tricky conditions. Such as lighter reflection, motion blur etc. and proves to be more robust system for detection and recognition.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"71 1","pages":"982-985"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74495992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Image Registration Method from LDCT Image Using FFD Algorithm
Pub Date : 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268267
Chika Tanaka, Tohru Kamiya, T. Aoki
In recent years, the number of lung cancer deaths has been increasing. In Japan, CT (Computed Tomography) equipment is used for visual screening. However, reviewing the huge number of CT images is a burden on the doctor. To address this problem, CAD (Computer Aided Diagnosis) systems have been introduced in the medical field. In CT screening, LDCT (Low Dose Computed Tomography) is desirable considering radiation exposure, but the image quality degradation caused by the lower dose is another problem for screening. A CAD system that enables accurate diagnosis even at low doses is therefore needed. In this paper, we propose a registration method for generating temporal subtraction images that can be applied to low-quality chest LDCT images. Our approach consists of two major components. First, global matching based on the center of gravity is performed on the preprocessed images, and the region of interest (ROI) is set. Second, local matching by free-form deformation (FFD) based on B-splines is performed on the ROI as the final registration. We apply the proposed method to LDCT images of 6 cases and, compared with the conventional method, reduce the calculation time by 57.29%, the half-value width by 26.1%, and the histogram sum of the temporal subtraction images by 29.6%.
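The first stage described above, global matching that aligns the two scans by their centers of gravity before local B-spline FFD refinement, can be sketched with SciPy as below. It illustrates that step only, on assumed array inputs; the preprocessing, ROI selection, and FFD refinement are omitted.

```python
# Minimal sketch of the global-matching stage: translate the current image so
# that its center of gravity coincides with that of the previous image.
import numpy as np
from scipy import ndimage

def global_match(previous, current):
    """previous, current: 2D (or 3D) intensity arrays of the same shape."""
    com_prev = np.array(ndimage.center_of_mass(previous))
    com_curr = np.array(ndimage.center_of_mass(current))
    shift = com_prev - com_curr                     # translation that aligns centroids
    aligned = ndimage.shift(current, shift, order=1, mode="nearest")
    return aligned, shift

# Temporal subtraction after global matching (local FFD refinement would follow).
prev = np.random.rand(64, 64)
curr = np.roll(prev, (3, -2), axis=(0, 1))
aligned, shift = global_match(prev, curr)
subtraction = prev - aligned
```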
{"title":"Image Registration Method from LDCT Image Using FFD Algorithm","authors":"Chika Tanaka, Tohru Kamiya, T. Aoki","doi":"10.23919/ICCAS50221.2020.9268267","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268267","url":null,"abstract":"In recent years, the number of lung cancer deaths has been increasing. In Japan, CT (Computed Tomography) equipment is used for its visual screening. However, there is a problem that seeing huge number of images taken by CT is a burden on the doctor. To overcome this problem, the CAD (Computer Aided Diagnosis) system is introduced on medical fields. In CT screening, LDCT (Low Dose Computed Tomography) screening is desirable considering radiation exposure. However, the image quality which is caused the lower the dose is another problem on the screening. A CAD system that enables accurate diagnosis even at low doses is needed. Therefore, in this paper, we propose a registration method for generating temporal subtraction images that can be applied to low-quality chest LDCT images. Our approach consists of two major components. Firstly, global matching based on the center of gravity is performed on the preprocessed images, and the region of interest (ROI) is set. Secondly, local matching by free-form deformation (FFD) based on B-Spline is performed on the ROI as final registration. In this paper, we apply our proposed method to LDCT images of 6 cases, and reduce 57.29% in the calculation time, 26.1% in the half value width, and 29.6% in the sum of histogram of temporal subtraction images comparing with the conventional method.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"7 1","pages":"411-414"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84800576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Body Trajectory Generation Using Quadratic Programming in Bipedal Robots
Pub Date : 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268204
Min. InJoon, Yoo. DongHa, Ahn. MinSung, Han. Jeakweon
The preview control walking method commonly used in bipedal walking requires jerk and ZMP error terms as cost functions to generate the body trajectory. Since the two inputs are coupled, the optimization that forms the body trajectory is performed with weighting factors on both terms. As a result, the velocity of the generated body trajectory often changes rapidly depending on the weighting factors, which eventually requires a torque actuator to execute such motion. To overcome this problem, we apply a method used on quadrupeds to a bipedal robot. Since it only targets minimizing the acceleration of the body trajectory, the body does not require rapid speed changes, and the method eliminates the computation time needed for preview control over the preview horizon. However, when a quadruped walking method, whose support polygon is relatively large compared with that of a bipedal robot, is applied directly, stability may deteriorate. We therefore enforce the ZMP constraints within the relatively small support polygon of the bipedal robot. In this paper we propose a body trajectory generation method that guarantees real-time stability while minimizing acceleration.
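An acceleration-minimizing body-trajectory problem with ZMP constraints can be written, under the standard cart-table model, as the quadratic program below. This is the usual textbook form that the abstract's description matches, not a verbatim statement of the paper's cost and constraints; here x_k is the horizontal CoM position, z_c the constant CoM height, g gravity, and S_k the support polygon at step k.

```latex
\begin{aligned}
\min_{\ddot{x}_{0},\dots,\ddot{x}_{N-1}}\ & \sum_{k=0}^{N-1} \lVert \ddot{x}_{k} \rVert^{2} \\
\text{s.t.}\quad
 & x_{k+1} = x_{k} + \dot{x}_{k}\,\Delta t + \tfrac{1}{2}\,\ddot{x}_{k}\,\Delta t^{2},
 \qquad \dot{x}_{k+1} = \dot{x}_{k} + \ddot{x}_{k}\,\Delta t, \\
 & p_{k} = x_{k} - \frac{z_{c}}{g}\,\ddot{x}_{k} \in \mathcal{S}_{k}
 \quad \text{(ZMP stays inside the support polygon at step } k\text{)}.
\end{aligned}
```

Because the objective is quadratic in the accelerations and the constraints are linear, the problem is a QP that can be solved at each control cycle without a preview-control recursion.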
{"title":"Body Trajectory Generation Using Quadratic Programming in Bipedal Robots","authors":"Min. InJoon, Yoo. DongHa, Ahn. MinSung, Han. Jeakweon","doi":"10.23919/ICCAS50221.2020.9268204","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268204","url":null,"abstract":"The preview control walking method, which is commonly used in bipedal walking, requires jerk and ZMP errors as cost functions to generate body trajectory. Since the two inputs are dependent, optimization to form body trajectory is performed simultaneously with weight factors. Therefore, it is often seen that the resulting body trajectory rapidly changes on velocity according to the weight factors. This eventually requires a torque actuator in order to perform such action. In order to overcome this problem, we apply a method used on a quadruped to a bipedal robot. Since, it only targets to minimize the acceleration of the body trajectory, the body does not require rapid speed change. Also, this method can eliminate the computation time needed for preview control referred to preview time. When applying a quadruped robots walking method that has a relatively large support polygon than that of a bipedal robot, stability deterioration may occur. Therefore, we approached to secure ZMP constraints with relatively small support polygon area as within bipedal robots. In this paper we propose a body trajectory generation method that guarantees real-time stability while minimizing acceleration.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"25 1","pages":"251-257"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80172221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Verification method to improve the efficiency of traffic survey
Pub Date : 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268311
Mi-Seon Kang, Pyong-Kun Kim, Kil-Taek Lim
A road traffic volume survey determines the number and type of vehicles passing a specific point over a certain period of time. Previously, vehicle counts and types were classified by a person viewing camera images with the naked eye, which has the disadvantage of requiring considerable manpower and cost. Recently, automated algorithms have been widely attempted, but their accuracy is inferior to the existing manual method. To address these problems, we propose a method to automate road traffic volume surveys and a new method to verify the results. The proposed method extracts the number and type of vehicles from images using deep learning, analyzes the results, and automatically informs the user of candidates with a high probability of error, so that highly reliable traffic volume survey information can be generated efficiently. The performance of the proposed method is tested on a dataset collected by an actual road traffic survey company. The experiments show that vehicle classification and routes can be verified simply and quickly with the proposed method, which not only reduces the survey effort and cost but also increases reliability through more accurate results.
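The verification idea above, running the detector and then automatically surfacing candidates with a high probability of error for human review, can be sketched as a simple confidence filter over detection results. The result fields and thresholds are assumptions for illustration, not the paper's criteria.

```python
# Minimal sketch: flag detections whose top confidence is low, or whose top two
# class scores are close, as candidates for manual verification.
from typing import Dict, List

def flag_for_review(detections: List[Dict], min_conf=0.8, min_margin=0.2):
    """Each detection: {'vehicle_id': int, 'class_scores': {label: probability}}."""
    review = []
    for det in detections:
        ranked = sorted(det["class_scores"].items(), key=lambda kv: kv[1], reverse=True)
        (top_label, top_p), (_, second_p) = ranked[0], ranked[1]
        if top_p < min_conf or (top_p - second_p) < min_margin:
            review.append({**det, "predicted": top_label})
    return review

detections = [
    {"vehicle_id": 1, "class_scores": {"car": 0.95, "van": 0.03, "truck": 0.02}},
    {"vehicle_id": 2, "class_scores": {"van": 0.48, "truck": 0.41, "car": 0.11}},
]
print(flag_for_review(detections))   # only vehicle 2 is sent for human review
```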
{"title":"Verification method to improve the efficiency of traffic survey","authors":"Mi-Seon Kang, Pyong-Kun Kim, Kil-Taek Lim","doi":"10.23919/ICCAS50221.2020.9268311","DOIUrl":"https://doi.org/10.23919/ICCAS50221.2020.9268311","url":null,"abstract":"Road traffic volume survey is a survey to determine the number and type of vehicles passing at a specific point for a certain period of time. Previously, a method of classifying the number of vehicles and vehicle types has been used while a person sees an image photographed using a camera with the naked eye, but this has a disadvantage in that a lot of manpower and cost are incurred. Recently, a method of applying an automated algorithm has been widely attempted, but has a disadvantage in that the accuracy is inferior to the existing method performed by manpower. To address these problems, we propose a method to automate road traffic volume surveys and a new method to verify the results. The proposed method extracts the number of vehicles and vehicle types from an image using deep learning, analyzes the results, and automatically informs the user of candidates with a high probability of error, so that highly reliable traffic volume survey information can be efficiently generated. The performance of the proposed method is tested using a data set collected by an actual road traffic survey company. The experiment proved that it is possible to verify the vehicle classification and route simply and quickly using the proposed method. The proposed method can not only reduce the investigation process and cost, but also increase the reliability due to more accurate results.","PeriodicalId":6732,"journal":{"name":"2020 20th International Conference on Control, Automation and Systems (ICCAS)","volume":"1 1","pages":"339-343"},"PeriodicalIF":0.0,"publicationDate":"2020-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83422067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1