Localization based on multiple visual-metric maps
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170431
Adi Sujiwo, E. Takeuchi, Luis Yoichi Morales Saiki, Naoki Akai, Y. Ninomiya, M. Edahiro
This paper presents a fusion of monocular camera-based metric localization, IMU, and odometry in the dynamic environments of public roads. We build multiple vision-based maps and use them simultaneously in the localization phase. In the mapping phase, visual maps are built using ORB-SLAM together with accurate metric positioning from LiDAR-based NDT scan matching. This external positioning is used to correct the scale drift inherent in monocular vision-based SLAM methods. In the localization phase, the embedded positions are then used to estimate the vehicle pose in metric global coordinates using the monocular camera alone. Furthermore, to increase system robustness, we propose the use of multiple maps together with sensor fusion of odometry and IMU via a particle filter. Experiments were performed on public roads over as much as 170 km at different times of day to evaluate and compare the localization results of the vision-only, GNSS, and sensor fusion methods. The results show that the sensor fusion method offers lower average errors than GNSS and better coverage than the vision-only method.
{"title":"Localization based on multiple visual-metric maps","authors":"Adi Sujiwo, E. Takeuchi, Luis Yoichi Morales Saiki, Naoki Akai, Y. Ninomiya, M. Edahiro","doi":"10.1109/MFI.2017.8170431","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170431","url":null,"abstract":"This paper presents a fusion of monocular camera-based metric localization, IMU and odometry in dynamic environments of public roads. We build multiple vision-based maps and use them at the same time in localization phase. For the mapping phase, visual maps are built by employing ORB-SLAM and accurate metric positioning from LiDAR-based NDT scan matching. This external positioning is utilized to correct for scale drift inherent in all vision-based SLAM methods. Next in the localization phase, these embedded positions are used to estimate the vehicle pose in metric global coordinates using solely monocular camera. Furthermore, to increase system robustness we also proposed utilization of multiple maps and sensor fusion with odometry and IMU using particle filter method. Experimental testing were performed through public road environment as far as 170 km at different times of day to evaluate and compare localization results of vision-only, GNSS and sensor fusion methods. The results show that sensor fusion method offers lower average errors than GNSS and better coverage than vision-only one.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115498409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D reconstruction of line features using multi-view acoustic images in underwater environment
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170447
Ngoc Trung Mai, Hanwool Woo, Yonghoon Ji, Y. Tamura, A. Yamashita, H. Asama
To understand the underwater environment, it is essential to use sensing methodologies able to perceive three-dimensional (3D) information about the explored site. Sonar sensors are commonly employed in underwater exploration. This paper presents a novel methodology for retrieving 3D information about underwater objects. The proposed solution employs an acoustic camera, which represents the next generation of sonar sensors, to extract and track line features of underwater objects, which serve as visual features for the image processing algorithm. In this work, we concentrate on artificial underwater environments, such as dams and bridges. In these structured environments, line segments are preferred over point features, as they can represent structural information more effectively. We also developed a method for automatic extraction and correspondence matching of line features. Our approach enables 3D measurement of underwater objects from arbitrary viewpoints based on an extended Kalman filter (EKF). The probabilistic method allows the 3D reconstruction of underwater objects to be computed even in the presence of uncertainty in the control input of the camera's movements. Experiments were performed in real environments, and the results show the effectiveness and accuracy of the proposed solution.
{"title":"3D reconstruction of line features using multi-view acoustic images in underwater environment","authors":"Ngoc Trung Mai, Hanwool Woo, Yonghoon Ji, Y. Tamura, A. Yamashita, H. Asama","doi":"10.1109/MFI.2017.8170447","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170447","url":null,"abstract":"In order to understand the underwater environment, it is essential to use sensing methodologies able to perceive the three dimensional (3D) information of the explored site. Sonar sensors are commonly employed in underwater exploration. This paper presents a novel methodology able to retrieve 3D information of underwater objects. The proposed solution employs an acoustic camera, which represents the next generation of sonar sensors, to extract and track the line of the underwater objects which are used as visual features for the image processing algorithm. In this work, we concentrate on artificial underwater environments, such as dams and bridges. In these structured environments, the line segments are preferred over the points feature, as they can represent structure information more effectively. We also developed a method for automatic extraction and correspondences matching of line features. Our approach enables 3D measurement of underwater objects using arbitrary viewpoints based on an extended Kalman filter (EKF). The probabilistic method allows computing the 3D reconstruction of underwater objects even in presence of uncertainty in the control input of the camera's movements. Experiments have been performed in real environments. Results showed the effectiveness and accuracy of the proposed solution.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125055790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of robot manipulation technology in ROS environment
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170364
Dong-Eon Kim, Dongju Park, Jeong-Hwan Moon, Ki-Seo Kim, Jin‐Hyun Park, Jangmyung Lee
A new manipulation strategy has been proposed to stably grasp various objects using a dual-arm robotic system in the ROS environment. The grasping pose of the dual arm is determined by the shape of the object, which is identified by a pan/tilt camera. For stable grasping of the object, an operability index of the dual-arm robot (OPIND) has been defined using the current values applied to the motors for the given grasping pose. In analyzing the motion of a manipulator, the manipulability index of both arms has been derived from the Jacobian, which relates the joint velocity vector to the workspace velocity vector; the resulting ellipsoid represents how easily the arm can move in each direction. Through the experiments, the dual-arm robotic system has been compared with and without OPIND applied.
{"title":"Development of robot manipulation technology in ROS environment","authors":"Dong-Eon Kim, Dongju Park, Jeong-Hwan Moon, Ki-Seo Kim, Jin‐Hyun Park, Jangmyung Lee","doi":"10.1109/MFI.2017.8170364","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170364","url":null,"abstract":"A new manipulation strategy has been proposed to grasp various objects stably using a dual-arm robotic system in the ROS environment. The grasping pose of the dual-arm has been determined depending upon the shape of the objects which is identified by the pan/tilt camera. For the stable grasping of the object, an operability index of the dual-arm robot (OPIND) has been defined by using the current values applied to the motors for the given grasping pose. When analyzing the motion of a manipulator, the manipulability index of both arms has been derived from the Jacobian to represent the relationship between the joint velocity vector and the workspace velocity vector, which has an elliptical range representing easiness to work with. Through the experiments, the OPIND applied state and the non — applied state of the dual-arm robotic system have been compared to each to other.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128338714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wearable gesture control of agile micro quadrotors
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170439
Yunho Choi, Inhwan Hwang, Songhwai Oh
Quadrotor unmanned aerial vehicles (UAVs) have seen a surge of use in various applications due to their structural simplicity and high maneuverability. However, conventional joystick-based control methods make it difficult for novices to become proficient at maneuvering quadrotors in a short time. In this paper, we suggest the use of a wearable device, such as a smart watch, as a new remote controller for a quadrotor. The user's commands are recognized as gestures from the 9-DoF inertial measurement unit (IMU) of the wearable device by a recurrent neural network (RNN) with long short-term memory (LSTM) cells. Our implementation also makes it possible to align the heading of the quadrotor with the heading of the user. It supports nine different gestures, and the trained RNN performs real-time gesture recognition for controlling a micro quadrotor. The proposed system exploits the sensors available in a wearable device and a quadrotor as much as possible to make the gesture-based control intuitive. We have experimentally validated the performance of the proposed system using a Samsung Gear S smart watch and a Crazyflie Nano Quadcopter.
{"title":"Wearable gesture control of agile micro quadrotors","authors":"Yunho Choi, Inhwan Hwang, Songhwai Oh","doi":"10.1109/MFI.2017.8170439","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170439","url":null,"abstract":"Quadrotor unmanned aerial vehicles (UAVs) have seen a surge of use in various applications due to its structural simplicity and high maneuverability. However, conventional control methods using joysticks prohibit novices from getting used to maneuvering quadrotors in short time. In this paper, we suggest the use of a wearable device, such as a smart watch, as a new remote-controller for a quadrotor. The user's command is recognized as gestures using the 9-DoF inertial measurement unit (IMU) of a wearable device through a recurrent neural network (RNN) with long short-term memory (LSTM) cells. Our implementation also makes it possible to align the heading of a quadrotor with the heading of the user. Our implementation allows nine different gestures and the trained RNN is used for real-time gesture recognition for controlling a micro quadrotor. The proposed system exploits available sensors in a wearable device and a quadrotor as much as possible to make the gesture-based control intuitive. We have experimentally validated the performance of the proposed system by using a Samsung Gear S smart watch and a Crazyflie Nano Quadcopter.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128849876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detection and classification of stochastic features using a multi-Bayesian approach
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170421
J. J. Steckenrider, T. Furukawa
This paper introduces a multi-Bayesian framework for the detection and classification of features in environments rife with error-inducing noise. The approach applies Bayesian correction and classification in three distinct stages. The corrective scheme described here extracts useful but highly stochastic features from a data source, whether vision-based or otherwise, to aid higher-level classification. Unlike conventional methods, the uncertainties of these features are characterized so that test data can be correctively cast into the feature space as probability distribution functions, which are then integrated over the class decision boundaries created by a quadratic Bayesian classifier. The proposed approach is formulated specifically for road crack detection and characterization, one of its potential applications. For test images assessed with this technique, ground truth was estimated accurately and consistently when effective Bayesian correction was applied, showing a 25% improvement in recall over standard classification. Application to road cracks demonstrated successful detection and classification in a practical domain. The proposed approach is particularly effective at characterizing highly probabilistic features in noisy environments when several correlated observations are available, either from multiple sensors or from data obtained sequentially by a single sensor.
{"title":"Detection and classification of stochastic features using a multi-Bayesian approach","authors":"J. J. Steckenrider, T. Furukawa","doi":"10.1109/MFI.2017.8170421","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170421","url":null,"abstract":"This paper introduces a multi-Bayesian framework for detection and classification of features in environments abundant with error-inducing noise. This approach takes advantage of Bayesian correction and classification in three distinct stages. The corrective scheme described here extracts useful but highly stochastic features from a data source, whether vision-based or otherwise, to aid in higher-level classification. Unlike conventional methods, these features' uncertainties are characterized so that test data can be correctively cast into the feature space with probability distribution functions that can be integrated over class decision boundaries created by a quadratic Bayesian classifier. The proposed approach is specifically formulated for road crack detection and characterization, which is one of the potential applications. For test images assessed with this technique, ground truth was estimated accurately and consistently with effective Bayesian correction, showing a 25% improvement in recall rate over standard classification. Application to road cracks demonstrated successful detection and classification in a practical domain. The proposed approach is extremely effective in characterizing highly probabilistic features in noisy environments when several correlated observations are available either from multiple sensors or from data sequentially obtained by a single sensor.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130625631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UJI RobInLab's approach to the Amazon Robotics Challenge 2017
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170448
A. P. Pobil, Majd Kassawat, A. J. Duran, M. Arias, N. Nechyporenko, Arijit Mallick, E. Cervera, Dipendra Subedi, Ilia Vasilev, D. Cardin, Emanuele Sansebastiano, Ester Martínez-Martín, A. Morales, Gustavo A. Casañ, A. Arenal, B. Goriatcheff, C. Rubert, G. Recatalá
This paper describes the approach taken by the team from the Robotic Intelligence Laboratory at Jaume I University to the Amazon Robotics Challenge 2017. The goal of the challenge is to automate pick-and-place operations in unstructured environments, specifically the shelves of an Amazon warehouse. RobInLab's approach is based on a Baxter Research Robot and a customized storage system. The system's modular architecture, based on ROS, allows communication between two computers, two Arduinos, and the Baxter. It integrates nine hardware components along with ten different algorithms to accomplish the pick and stow tasks. We describe the main components and pipelines of the system, along with some experimental results.
{"title":"UJI RobInLab's approach to the Amazon Robotics Challenge 2017","authors":"A. P. Pobil, Majd Kassawat, A. J. Duran, M. Arias, N. Nechyporenko, Arijit Mallick, E. Cervera, Dipendra Subedi, Ilia Vasilev, D. Cardin, Emanuele Sansebastiano, Ester Martínez-Martín, A. Morales, Gustavo A. Casañ, A. Arenal, B. Goriatcheff, C. Rubert, G. Recatalá","doi":"10.1109/MFI.2017.8170448","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170448","url":null,"abstract":"This paper describes the approach taken by the team from the Robotic Intelligence Laboratory at Jaume I University to the Amazon Robotics Challenge 2017. The goal of the challenge is to automate pick and place operations in unstructured environments, specifically the shelves in an Amazon warehouse. RobInLab's approach is based on a Baxter Research robot and a customized storage system. The system's modular architecture, based on ROS, allows communication between two computers, two Arduinos and the Baxter. It integrates 9 hardware components along with 10 different algorithms to accomplish the pick and stow tasks. We describe the main components and pipelines of the system, along with some experimental results.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"329 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133084234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design of multiple classifier systems based on testing sample pairs
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170429
Gaochao Feng, Deqiang Han, Yi Yang, Jiankun Ding
A new multiple classifier system (MCS) is proposed based on CTSP (classification based on Testing Sample Pairs), an applicable and efficient classification method. However, the original output of CTSP consists only of crisp class labels. To make fuller use of the information provided by the classifier, in this paper the output of CTSP is modeled using membership functions. The fuzzy-cautious ordered weighted averaging approach with evidential reasoning (FCOWA-ER) is then used to combine the membership functions produced by the different member classifiers. Experimental results show that the proposed MCS can effectively improve classification performance.
{"title":"Design of multiple classifier systems based on testing sample pairs","authors":"Gaochao Feng, Deqiang Han, Yi Yang, Jiankun Ding","doi":"10.1109/MFI.2017.8170429","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170429","url":null,"abstract":"A new multiple classifier system (MCS) is proposed based on CTSP (classification based on Testing Sample Pairs), which is a kind of applicable and efficient classification method. However, the original output form of the CTSP is only crisp class labels. To make use of the information provided by the classifier, in this paper, the output of CTSP is modeled using the membership function. Then, the fuzzy-cautious ordered weighted averaging approach with evidential reasoning (FCOWA-ER) is used to combine the membership functions originated from different member classifiers. It is shown by experimental results that the proposed MCS effectively can improve the classification performance.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133502058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On state estimation and fusion with elliptical constraints
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170411
Qiang Liu, N. Rao
We consider tracking of a target whose motion dynamics are subject to elliptical nonlinear constraints. The state estimates are generated by sensors and sent over long-haul links to a remote fusion center. We show that the constraints can be enforced by projecting the estimates onto the known ellipse and can hence be incorporated into the estimation and fusion process. In particular, two projection methods are discussed, based on (i) the direct connection to the center and (ii) the shortest distance to the ellipse. A tracking example illustrates the performance of the projection-based methods with various fusers in a lossy long-haul tracking environment.
{"title":"On state estimation and fusion with elliptical constraints","authors":"Qiang Liu, N. Rao","doi":"10.1109/MFI.2017.8170411","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170411","url":null,"abstract":"We consider tracking of a target with elliptical nonlinear constraints on its motion dynamics. The state estimates are generated by sensors and sent over long-haul links to a remote fusion center for fusion. We show that the constraints can be projected onto the known ellipse and hence incorporated into the estimation and fusion process. In particular, two methods based on (i) direct connection to the center, and (ii) shortest distance to the ellipse are discussed. A tracking example is used to illustrate the tracking performance using projection-based methods with various fusers in a lossy long-haul tracking environment.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134310673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D handheld scanning based on multiview 3D registration using Kinect Sensing device
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170450
Shirazi Muhammad Ayaz, Danish Khan, M. Y. Kim
This paper describes the implementation of a 3D handheld scanning approach based on the Kinect. Real-time scanning devices such as the Kinect let users acquire 3D scans at a very fast rate. These devices have been used in several applications, but their scans lack accuracy and reliability, which makes them difficult to employ directly. This research proposes a 3D handheld scanning approach in which the Kinect renders 3D point cloud data from different views, which are then registered using visual navigation and ICP. We also compare several ICP variants with the proposed method. The approach can be used for 3D modeling applications, especially in the medical domain. Experiments and results demonstrate the feasibility of the proposed approach for generating reliable 3D reconstructions from the Kinect's point clouds.
{"title":"3D handheld scanning based on multiview 3D registration using Kinect Sensing device","authors":"Shirazi Muhammad Ayaz, Danish Khan, M. Y. Kim","doi":"10.1109/MFI.2017.8170450","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170450","url":null,"abstract":"This paper describes the implementation of a 3D handheld scanning approach based on Kinect. User may get the 3D scans at a very fast rate using real time scanning devices like Kinect. These devices have been utilized in several applications, but the scanning lacks in the accuracy and reliability of the 3D data, which makes their employment a difficult task. This research proposed the 3D handheld scanning approach based on Kinect device which renders the 3D point cloud data for different views and registers them using visual navigation and ICP. This research also compares several ICP variants with the proposed method. The proposed approach can be used for the 3D modeling applications especially in medical domain. Experiments and results demonstrate the feasibility of the proposed approach to generate reliable 3D reconstructions from the Kinect's point clouds.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134350749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A nearest neighbour ensemble Kalman Filter for multi-object tracking
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170433
Fabian Sigges, M. Baum
In this paper, we present an approach to Multi-Object Tracking (MOT) based on the Ensemble Kalman Filter (EnKF). The EnKF is a standard algorithm for data assimilation in high-dimensional state spaces that is mainly used in the geosciences, but it has so far attracted little attention for object tracking problems. In our approach, the Optimal Subpattern Assignment (OSPA) distance is used to cope with unlabeled noisy measurements, and robust covariance estimation with FastMCD handles possible outliers due to false detections. The algorithm is evaluated against a global nearest neighbour Kalman Filter (NNKF) and a recently proposed JPDA-Ensemble Kalman Filter (JPDA-EnKF) in a simulated scenario with multiple objects and false detections.
{"title":"A nearest neighbour ensemble Kalman Filter for multi-object tracking","authors":"Fabian Sigges, M. Baum","doi":"10.1109/MFI.2017.8170433","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170433","url":null,"abstract":"In this paper, we present an approach to Multi-Object Tracking (MOT) that is based on the Ensemble Kalman Filter (EnKF). The EnKF is a standard algorithm for data assimilation in high-dimensional state spaces that is mainly used in geosciences, but has so far only attracted little attention for object tracking problems. In our approach, the Optimal Subpattern Assignment (OSPA) distance is used for coping with unlabeled noisy measurements and a robust covariance estimation is done using FastMCD to deal with possible outliers due to false detections. The algorithm is evaluated and compared against a global nearest neighbour Kalman Filter (NNKF) and a recently proposed JPDA-Ensemble Kalman Filter (JPDA-EnKF) in a simulated scenario with multiple objects and false detections.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132922472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}