Title: Design of compressive imaging masks for human activity perception based on binary convolutional neural network
Authors: Rui Ma, Guocheng Liu, Qi Hao, Cong Wang
Published in: 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)
Pub Date: 2017-11-01, DOI: 10.1109/MFI.2017.8170438
Abstract: Many applications demand the proper design and implementation of 0-1 binary compressive sensing (CS) measurement matrices. This paper presents a construction method for such binary CS measurement matrices by training a convolutional neural network (CNN) with 0-1 weights. The desired CS performance of the resulting binary measurement matrices can be achieved by designing a proper CNN training procedure. For human activity recognition applications, such a sensing system is implemented with a small number of optical sensors and optical masks, and achieves high recognition capability with far less data than traditional cameras. In the experiments, the compressive sensor readings are classified with a basic K-Nearest Neighbor (KNN) algorithm to demonstrate the high sampling efficiency of the hardware without significantly compromising recognition performance.
Title: Extended object tracking using IMM approach for a real-world vehicle sensor fusion system
Authors: Ting Yuan, K. Krishnan, B. Duraisamy, M. Maile, T. Schwarz
Pub Date: 2017-11-01, DOI: 10.1109/MFI.2017.8170394
Abstract: Autonomous driving poses unique challenges for vehicle environment perception due to the complicated driving environment in which the autonomous vehicle interacts with surrounding objects. Precise tracking of the relevant dynamic traffic participants (e.g., vehicles, bicyclists, pedestrians) is a key component of comprehensive environment perception and reliable scene understanding. Vehicle trackers must treat objects as extended (rigid) targets, as opposed to the point targets of traditional tracking (say, in aerospace applications). Extended object tracking is extremely challenging in the real world because of the demands it places on the accuracy of kinematic and shape estimates, association robustness, model match across varied target motion behaviors, and statistical soundness (e.g., estimation consistency and covariance reliability). We present an extended object tracker that tackles these major challenges by combining an interacting multiple model (IMM) estimator with unbiased mixing for kinematic information at a specified tracking reference point, a truncated Gaussian scheme for shape (width/length/orientation) estimation, and a hierarchical association method based on both kinematic and shape information. Special effort is devoted to an intriguing conflict between theory and practice, the so-called likelihood credibility issue: the likelihood is expected to credibly reflect the statistical probability of the data, but in real-world systems it is distorted or drifting, due mainly to the artificial physics introduced by multiple-stage data processing. In this study, from a systems point of view, we design an IMM-based extended object tracker with proper likelihood compensation for the statistically distorted real world. The presented tracker is shown to deliver effective estimation performance in real road traffic.
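The IMM machinery at the heart of this tracker revolves around the mixing (model-conditioned re-initialization) step. A minimal NumPy sketch of standard IMM mixing follows; the transition probabilities and model-conditioned estimates are illustrative, and the paper's unbiased mixing variant and truncated-Gaussian shape update are not reproduced here:

```python
import numpy as np

# Standard IMM mixing step: re-initialize each model's filter from a
# probability-weighted blend of all models' previous estimates.
p_trans = np.array([[0.95, 0.05],   # model transition probabilities p_ij
                    [0.05, 0.95]])
mu = np.array([0.7, 0.3])           # current model probabilities mu_i

# Predicted model probabilities c_j and mixing weights mu_{i|j} = p_ij mu_i / c_j
c = p_trans.T @ mu
mix = (p_trans * mu[:, None]) / c[None, :]

# Model-conditioned estimates (e.g., CV and CT motion models): states and covariances.
x = [np.array([0.0, 1.0]), np.array([0.2, 1.5])]
P = [np.eye(2), 2.0 * np.eye(2)]

x0, P0 = [], []
for j in range(2):
    xj = sum(mix[i, j] * x[i] for i in range(2))
    # The covariance mix adds the spread-of-the-means term to each model covariance.
    Pj = sum(mix[i, j] * (P[i] + np.outer(x[i] - xj, x[i] - xj)) for i in range(2))
    x0.append(xj)
    P0.append(Pj)
```

Each mixed pair (x0[j], P0[j]) then seeds model j's filter for the next cycle; after the measurement update, the model probabilities mu are refreshed from the per-model likelihoods, which is exactly where the likelihood credibility issue the abstract discusses enters.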
Title: Robust unobtrusive fall detection using infrared array sensors
Authors: Xiuyi Fan, Huiguo Zhang, Cyril Leung, Zhiqi Shen
Pub Date: 2017-11-01, DOI: 10.1109/MFI.2017.8170428
Abstract: As the world's aging population grows, falls are becoming a major public health problem and one of the most serious risks to the elderly. Many technology-based fall detection systems have been developed in recent years, with hardware ranging from wearable devices to ambient sensors and video cameras. Several machine learning based fall detection classifiers have been developed to process sensor data, with varying degrees of success. In this paper, we present a fall detection system that uses infrared array sensors with several deep learning methods, including long short-term memory (LSTM) and gated recurrent unit (GRU) models. Evaluated on fall data collected in two different sets of configurations, our approach gives a significant improvement over existing work using the same infrared array sensor.
Title: A non-parametric inference technique for shape boundaries in noisy point clouds
Authors: Selim Ozgen, F. Faion, Antonio Zea, U. Hanebeck
Pub Date: 2017-11-01, DOI: 10.1109/MFI.2017.8170392
Abstract: This study explores the non-parametric estimation of a shape boundary from noisy 2D points when the sensor characteristics are known. As the underlying shape is unknown, the proposed algorithm estimates points on the shape boundary using the statistics of subsets of the point cloud. The novel approach can find corner points in a local geometry using only the sample means and covariance matrices of those subsets. While the approach can be applied to any class of boundary functions that exhibits symmetry, the analysis and experiments in this paper are performed on a connected line segment.
Title: A Bayesian approach to terrain map inference based on vibration features
Authors: Hyeonwoo Yu, Beomhee Lee
Pub Date: 2017-11-01, DOI: 10.1109/MFI.2017.8170440
Abstract: In this paper, we present a terrain inference method based on vibration features. Autonomous navigation in unstructured environments is a challenging problem; in particular, a detailed interpretation of the terrain is necessary to plan an efficient navigation trajectory. Since vibration features arise from interactions between the robot and the terrain, terrain inference can be conducted from vibration. To infer the terrain of the robot path and the unobserved field simultaneously, we use a Bayesian random field for structured prediction. The robot path and the unobserved field are represented by a Conditional Random Field (CRF), and from the terrain information observed along the robot path, the terrain of regions the robot does not reach is estimated as well. The proposed algorithm is tested with a 4WD mobile robot on a real-terrain testbed.
Title: Robust road line color recognition based on 2-dimensional S-color space
Authors: Jin Yan, Seung-Hae Baek, Soon-Yong Park
Pub Date: 2017-11-01, DOI: 10.1109/MFI.2017.8170452
Abstract: In this paper, we propose an illumination-invariant lane color recognition method. Most conventional lane color recognition methods suffer under varying illumination. In the past, the HSV color space has commonly been used to distinguish white and yellow road lines, because specific colors occupy well-defined ranges in HSV space. However, accurate road line recognition in the HSV space is known to be difficult, because road illumination is dynamic rather than static. In this paper, we propose a robust road line color recognition method based on a 2-dimensional S-color space. The white and yellow color features form clusters in the 2-D S-color space, and the centroid of the feature samples in S-space is tracked continuously for real-time lane tracking.
Title: A study on the 3D printing simulator for construction and application of robust control using SMCSPO
Authors: Hyun-Hee Kim, C. Park, Min-Cheol Lee
Pub Date: 2017-11-01, DOI: 10.1109/MFI.2017.8170366
Abstract: In architecture, 3D printing technology can shorten the construction period by additively building the desired shape and structure directly on site. However, conventional 3D printer structures are of limited practical use because of restricted versatility, mobility, and accessibility. In this study, a 3-axis gantry-robot-type 3D printing simulator for construction is proposed, and a nozzle is designed to inject viscous material. Since viscous material exhibits strongly nonlinear behavior due to compression and elasticity, the robust controller Sliding Mode Control with Sliding Perturbation Observer (SMCSPO) is applied to the nozzle control and compared with a PID controller. The simulation results confirm that SMCSPO is more suitable than PID control for nozzle control of viscous material injection.
Title: A probabilistic logic for multi-source heterogeneous information fusion
Authors: T. Henderson, R. Simmons, D. Sacharny, A. Mitiche, Xiuyi Fan
Pub Date: 2017-11-01, DOI: 10.1109/MFI.2017.8170375
Abstract: We investigate methods to define a probabilistic logic and their application to multi-source fusion problems in geospatial decision support systems. We begin by augmenting the propositional calculus with probabilities. Given a set of sentences S, each with a known probability, the problem is to determine the probability of a query sentence that is a disjunction of literals appearing in S. First, we examine Nilsson's [19] solution based on the semantic models of the sentences, and develop two approaches to solving the problem as posed: (1) a linear solver, and (2) geometrically finding the intersection of a line with the probability convex hull. Nilsson's approach provides lower and upper bounds on the solution. We then propose a new approach that finds probabilities for the atoms appearing in the sentences and uses these probabilities to compute the probability of the query sentence. Finally, we describe how this probability representation can form the basis of a probabilistic logic system supporting a multi-source knowledge base for decision support.
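The atom-based approach can be sketched concretely: assign each atom a probability, then sum the weights of the truth assignments (models) that satisfy the query. Treating the atoms as independent is an assumption of this sketch, not necessarily of the paper, and the atom probabilities are illustrative:

```python
from itertools import product

atom_p = {"a": 0.7, "b": 0.4}   # illustrative atom probabilities

def query_prob(query):
    """Probability of a query sentence, given as a predicate over an
    assignment dict, by enumerating all 2^n truth assignments."""
    total = 0.0
    for values in product([True, False], repeat=len(atom_p)):
        world = dict(zip(atom_p, values))
        if query(world):
            # Weight of this world under the independence assumption.
            w = 1.0
            for atom, truth in world.items():
                w *= atom_p[atom] if truth else 1.0 - atom_p[atom]
            total += w
    return total

# P(a or b) = 1 - (1 - 0.7)(1 - 0.4) = 0.82
p = query_prob(lambda w: w["a"] or w["b"])
```

Nilsson's semantic approach, by contrast, does not fix a unique distribution over the worlds; it optimizes over all distributions consistent with the sentence probabilities, which is why it yields lower and upper bounds rather than a point value.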
Title: A survey of performance measures to evaluate ego-lane estimation and a novel sensor-independent measure along with its applications
Authors: T. Nguyen, J. Spehr, Jian Xiong, M. Baum, S. Zug, R. Kruse
Pub Date: 2017-11-01, DOI: 10.1109/MFI.2017.8170435
Abstract: Lane estimation plays a central role in driver assistance systems, and many approaches have therefore been proposed to measure its performance; however, no commonly agreed metric exists. In this work, we first present a detailed survey of current measures. Most apply pixel-level benchmarks to camera images and require a time-consuming and error-prone labeling process. Moreover, these metrics cannot assess other sources such as detected guardrails, curbs, or other vehicles. We therefore introduce an efficient, sensor-independent metric that provides an objective and intuitive self-assessment of the entire road estimation process at multiple levels: individual detectors, lane estimation itself, and the target applications (e.g., a lane keeping system). Our metric requires little labeling effort and can be used both online and offline. By evaluating points at selected distances, it can be applied to any road model representation. For comparison in the 2D vehicle coordinate system, ground truth can be generated in two ways: from the human-driven path, or from the expensive alternative of DGPS with detailed maps. This paper applies both methods and shows that the human-driven path also qualifies for this task and remains applicable in scenarios without a GPS signal, e.g., tunnels. Although the lateral offset between reference and detection is used in the majority of works, this paper shows that another criterion, the angle deviation, is more appropriate. Finally, we compare our metric with other state-of-the-art metrics using real data recordings from different scenarios.
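The two criteria being compared can be computed in a few lines once both the reference and the detected lane are expressed in the 2D vehicle frame. In this sketch both are modeled as simple polynomials y = f(x) with illustrative coefficients; the evaluation distances mimic the idea of sampling points at chosen look-ahead ranges:

```python
import numpy as np

ref = np.poly1d([0.002, 0.01, 0.0])    # reference, e.g. the human-driven path
det = np.poly1d([0.002, 0.03, 0.15])   # detected ego-lane

distances = np.array([5.0, 10.0, 20.0, 40.0])  # look-ahead evaluation points [m]

# Criterion 1: lateral offset between detection and reference at each distance.
lateral_offset = det(distances) - ref(distances)

# Criterion 2: angle (heading) deviation, from the curves' derivatives, in radians.
angle_dev = np.arctan(det.deriv()(distances)) - np.arctan(ref.deriv()(distances))
```

The angle deviation directly captures how the detected lane's heading diverges from the driven path, which matters for a lane keeping controller even where the lateral offset at a single point happens to be small.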
Title: Calibration of VLP-16 Lidar and multi-view cameras using a ball for 360 degree 3D color map acquisition
Authors: Geun-Mo Lee, Ju-Hwan Lee, Soon-Yong Park
Pub Date: 2017-11-01, DOI: 10.1109/MFI.2017.8170408
Abstract: Calibration between a Lidar sensor and RGB cameras can be applied in various fields such as object recognition and tracking, 2D-3D mapping, and simultaneous localization and mapping (SLAM). Different methods for calibrating a Lidar sensor and RGB cameras have been proposed using special 3D markers or calibration patterns. However, most of these methods suffer from long processing times and experimental constraints, such as requiring the entire calibration pattern to appear within the scan range of the Lidar. In this paper, we propose a simple and fast calibration method between a Lidar sensor and multiple RGB cameras using a sphere object.