Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170422
Md. Zia Uddin, W. Khaksar, J. Tørresen
Gait recognition plays a vital role in many practical applications of computer and robot vision in smart environments, such as health care for the elderly using smart-home technology. Hence, it has attracted considerable attention from machine vision researchers over the last decades. In this paper, we propose a novel method for depth video-based gait recognition using robust features and deep learning. Local Directional Pattern (LDP) features are first extracted from depth silhouettes. Then, the LDP features are augmented with optical flow motion features to generate robust spatiotemporal features. The features are then applied to a Convolutional Neural Network (CNN) for training and recognition. The proposed method outperforms conventional gait recognition approaches. The system can contribute to various practical applications, such as observing elderly people's gait patterns in smart homes or hospitals.
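LDP features are conventionally computed by convolving each pixel neighborhood with the eight Kirsch compass masks and encoding the strongest responses as a bit pattern. A minimal sketch of that idea (the paper's exact variant, including the choice of k = 3 top responses, is an assumption):

```python
import numpy as np

# Eight Kirsch compass masks (E, NE, N, NW, W, SW, S, SE).
KIRSCH = [np.array(m) for m in (
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],
)]

def ldp_code(patch, k=3):
    """LDP code of one 3x3 patch: set a bit for each of the k
    strongest (absolute) Kirsch mask responses."""
    responses = np.abs([np.sum(mask * patch) for mask in KIRSCH])
    top_k = np.argsort(responses)[-k:]          # indices of the k largest
    return sum(1 << int(i) for i in top_k)

def ldp_image(img, k=3):
    """Per-pixel LDP codes for a 2-D grayscale array (borders skipped)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y - 1, x - 1] = ldp_code(img[y - 1:y + 2, x - 1:x + 2], k)
    return out
```

In a pipeline like the one described, the histogram of these codes over a depth silhouette would form the spatial part of the feature vector.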
Title: A robust gait recognition system using spatiotemporal features and deep learning. Published in: 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI).
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170432
Murat Ambarkutuk, T. Furukawa
This paper presents a grid-based indoor radiolocation technique based on a Spatially Coherent Path Loss model (SCPL). Received Signal Strength (RSS) fingerprints are collected at different positions in the environment, from which the radio wave propagation of the environment is empirically approximated with the SCPL model. Unlike conventional path loss models, SCPL approximates radio wave propagation by first dividing the localization environment into grid cells and then estimating the model parameters for each cell. Thus, the proposed technique is able to account for attenuation resulting from non-uniform environmental irregularities. The efficacy of the proposed technique was investigated in an experiment comparing SCPL with an indoor radiolocation technique based on a conventional path loss model. The comparison indicates that SCPL improves performance by up to 44%.
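The per-cell estimation can be illustrated with the standard log-distance path loss model: within each grid cell, a reference power p0 and a path loss exponent n are fit by least squares to that cell's fingerprints. A sketch under these assumptions (the paper's actual model form and fitting procedure may differ):

```python
import numpy as np

def fit_log_distance(d, rss):
    """Least-squares fit of RSS = p0 - 10*n*log10(d); returns (p0, n)."""
    A = np.column_stack([np.ones(len(d)), -10.0 * np.log10(d)])
    (p0, n), *_ = np.linalg.lstsq(A, np.asarray(rss, float), rcond=None)
    return float(p0), float(n)

def fit_scpl(cell_ids, d, rss):
    """One (p0, n) model per grid cell instead of a single global model.
    cell_ids[i] is the grid cell of the i-th fingerprint."""
    models = {}
    for c in sorted(set(cell_ids)):
        sel = [i for i, ci in enumerate(cell_ids) if ci == c]
        models[c] = fit_log_distance([d[i] for i in sel],
                                     [rss[i] for i in sel])
    return models
```

A conventional path loss model corresponds to the degenerate case where all fingerprints share one cell id.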
Title: A grid-based indoor radiolocation technique based on spatially coherent path loss model.
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170396
Shuai Zhang, Shuaijun Wang, Chengyang Li, Guocheng Liu, Qi Hao
The autonomous navigation of unmanned aerial vehicles (UAVs) requires multiple sensing modalities to improve cruise efficiency. This paper presents a system for autonomous navigation and path planning of UAVs in GPS-denied environments based on the fusion of geo-registered 3D point clouds with proprioceptive sensors (IMU, odometry, and barometer) and 2D Google Maps. The contributions of this paper are as follows: 1) combination of a 2D map with geo-registered 3D point clouds; 2) registration of local point clouds to the global geo-registered 3D point cloud; 3) integration of visual odometry, IMU, GPS, and barometer. Experimental and simulation results demonstrate the efficacy and robustness of the proposed system.
Title: An integrated UAV navigation system based on geo-registered 3D point cloud.
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170349
Dhiraj Gulati, Uzair Sharif, Feihu Zhang, Daniel Clarke, A. Knoll
Data (measurement-to-track) association is an integral and expensive part of any solution that performs multi-target multi-sensor Cooperative Localization (CL) for better state estimation. Various performance evaluations have compared state-of-the-art solutions, but they have often been limited to the same family of algorithms. However, there exist solutions that avoid the task of data association altogether when performing CL in a multi-target multi-sensor environment. Factor Graphs using a Symmetric Measurement Equation (SME) factor are one such solution. In this paper, we compare and contrast state estimation using the state-of-the-art Random Finite Set (RFS) approach and a Factor Graph solution with SMEs. For the RFS approach, we use the multi-sensor multi-object Generalized Labeled Multi-Bernoulli (GLMB) filter. These two solutions use conceptually different approaches: the GLMB filter solves the data association implicitly, whereas the Factor Graph-based solution avoids the task altogether. Simulations present interesting results: for simple scenarios, the implemented GLMB filter performs efficiently, but its performance degrades faster than that of Factor Graphs using SMEs as the sensor error increases.
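The association-free idea behind an SME factor can be illustrated with a classic construction: map the unordered set of target states to symmetric functions of them, such as sums of powers, which are invariant under any permutation of the targets. A toy sketch with scalar states (the SME factor used in the paper is more general):

```python
import numpy as np

def sme_sum_of_powers(positions, max_power=None):
    """Map an unordered set of scalar target positions to the
    permutation-invariant pseudo-measurements z_k = sum_i x_i^k,
    k = 1..n. Any ordering of the targets yields the same vector,
    so no measurement-to-track association is required."""
    x = np.asarray(positions, float)
    n = max_power or len(x)
    return np.array([np.sum(x ** k) for k in range(1, n + 1)])
```

Because the pseudo-measurement is identical for every target ordering, an estimator built on it never has to decide which measurement belongs to which track.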
Title: Data association — solution or avoidance: Evaluation of a filter based on RFS framework and factor graphs with SME.
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170406
G. Kurz, F. Pfaff, U. Hanebeck
The group of rotations in three dimensions SO(3) plays a crucial role in applications ranging from robotics and aeronautics to computer graphics. Rotations have three degrees of freedom, but representing rotations is a nontrivial matter and different methods, such as Euler angles, quaternions, rotation matrices, and Rodrigues vectors are commonly used. Unfortunately, none of these representations allows easy discretization of orientations on evenly spaced grids. We present a novel discretization method that is based on a quaternion representation in conjunction with a recursive subdivision scheme of the four-dimensional hypercube, also known as the tesseract.
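The construction can be sketched as follows: grid-cell centers on the cube facets of the tesseract's surface are radially projected onto the unit 3-sphere of quaternions. This illustrative version uses a uniform grid per facet rather than the paper's recursive subdivision, and keeps only four facets because q and -q represent the same rotation (both simplifications are assumptions):

```python
import itertools
import numpy as np

def tesseract_grid(subdiv):
    """Quaternion grid from the tesseract surface: each cube facet of
    [-1, 1]^4 is split into subdiv^3 cells, and cell centers are
    radially projected onto the unit 3-sphere S^3. Since q and -q
    encode the same rotation, opposite facets are redundant, so only
    the 4 facets with a coordinate fixed at +1 are kept here."""
    # Cell centers in (-1, 1) along each free axis of a facet.
    centers = (np.arange(subdiv) + 0.5) / subdiv * 2.0 - 1.0
    quats = []
    for axis in range(4):                   # which coordinate is fixed at +1
        for c in itertools.product(centers, repeat=3):
            q = np.insert(np.array(c), axis, 1.0)
            quats.append(q / np.linalg.norm(q))  # project onto S^3
    return np.array(quats)
```

Recursive subdivision, as in the paper, would refine each facet cell into 2^3 children instead of using a fixed grid, giving a hierarchy of progressively finer orientation grids.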
Title: Discretization of SO(3) using recursive tesseract subdivision.
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170397
P. Chu, Seoungjae Cho, Y. Park, Kyungeun Cho
With the aim of providing a fast and effective segmentation method based on the flood-fill algorithm, in this study we propose a new approach to segmenting a 3D point cloud, acquired by a 3D multi-channel laser range sensor, into different objects. First, we divide the point cloud into two groups: ground and nonground points. Next, we segment clusters in each scanline dataset from the group of nonground points. Each scanline cluster is then joined with other scanline clusters using the flood-fill algorithm. In this manner, each group of scanline clusters represents an object in the 3D environment. Finally, we obtain each object separately. Experiments show that our method segments objects accurately and in real time.
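A toy version of the two stages — per-scanline clustering by distance gaps, then flood-fill merging of clusters on neighboring scanlines — might look as follows (1-D scanline coordinates and an interval-overlap adjacency test are simplifying assumptions; the paper works on full 3D points):

```python
from collections import deque

def scanline_clusters(scanline, gap=0.5):
    """Split one sorted scanline (1-D coordinates) wherever the gap
    between consecutive points exceeds the threshold."""
    clusters, cur = [], [scanline[0]]
    for p in scanline[1:]:
        if p - cur[-1] > gap:
            clusters.append(cur)
            cur = []
        cur.append(p)
    clusters.append(cur)
    return clusters

def flood_fill_objects(lines, gap=0.5):
    """Group scanline clusters into objects: clusters on adjacent
    scanlines whose intervals overlap (within `gap`) are flood-filled
    into one object label."""
    clusters = [(i, c) for i, line in enumerate(lines)
                for c in scanline_clusters(sorted(line), gap)]
    def touches(a, b):
        (ia, ca), (ib, cb) = a, b
        return (abs(ia - ib) == 1
                and ca[0] - gap <= cb[-1] and cb[0] - gap <= ca[-1])
    labels, objects = {}, 0
    for s in range(len(clusters)):
        if s in labels:
            continue
        queue, labels[s] = deque([s]), objects
        while queue:                       # BFS flood fill over adjacency
            u = queue.popleft()
            for v in range(len(clusters)):
                if v not in labels and touches(clusters[u], clusters[v]):
                    labels[v] = objects
                    queue.append(v)
        objects += 1
    return labels, clusters, objects
```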
Title: Fast point cloud segmentation based on flood-fill algorithm.
The eye structure of insects, called a compound eye, has interesting advantages: a large field of view, low aberrations, compact size, short image processing time, and an infinite depth of field. If we can design a compound-eye camera that mimics the compound eye structure of insects, compound images with these advantages can be obtained. In this paper, we consider the design of a compound camera prototype and a low-complexity semantic segmentation scheme for compound images. The prototype has a hemispherical shape and consists of several synchronized single-lens reflex camera modules. Images captured from the camera modules are mapped to compound images using multi-view geometry to emulate a compound eye. In this way, we can simulate various configurations of compound eye structures, which is useful for developing high-level applications. We then propose a low-complexity semantic segmentation scheme for compound images based on a convolutional neural network. The experimental results show that compound images are more suitable for semantic segmentation than typical RGB images.
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170444
Geonho Cha, Hwiyeon Yoo, Donghoon Lee, Songhwai Oh
Title: Light-weight semantic segmentation for compound images.
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170370
Dokkyun Yi, Su-yeon Kim, S. Yeom, Mun-Kyo Lee, Sang-Won Jung
Millimeter wave (MMW) radiation readily penetrates fabrics, so it can be used to detect objects concealed under clothing. A passive MMW imaging system can operate as a stand-off sensor that scans people at a distance from the system. However, the images often suffer from low quality due to the diffraction limit and low signal levels. In this paper, we discuss four image interpolation methods for recognizing objects hidden under a person's clothing: bi-cubic interpolation with Gaussian smoothing, Lagrangian interpolation, cubic spline interpolation, and Weighted Essentially Non-Oscillatory (WENO) interpolation. In the experiment, a person hiding a metallic gun under their clothing is captured by the passive MMW system. The experimental results show that the interpolation methods can enhance the gun object for segmentation and recognition.
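As an illustration of one of the four methods, piecewise Lagrangian interpolation fits a degree-3 polynomial through the four input pixels nearest each output sample. A 1-D sketch (the paper's 2-D formulation and stencil handling are assumptions):

```python
import numpy as np

def lagrange_upsample(row, factor=4):
    """Upsample one image row with piecewise 4-point Lagrange
    interpolation: each output sample evaluates the degree-3 Lagrange
    polynomial through the 4 nearest input pixels."""
    n = len(row)
    xs = np.arange(0, n - 1, 1.0 / factor)
    out = []
    for x in xs:
        i = min(max(int(x), 1), n - 3)       # left-center of the 4-point stencil
        pts = np.arange(i - 1, i + 3)
        val = 0.0
        for j in pts:                        # Lagrange basis weights
            w = np.prod([(x - m) / (j - m) for m in pts if m != j])
            val += w * row[j]
        out.append(val)
    return np.array(out)
```

Since the interpolant is a cubic through four points, it reproduces any cubic signal exactly; for 2-D images the same stencil would be applied separably along rows and columns.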
Title: Experimental Study on Image Interpolation for Concealed Object Detection.
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170404
N. Rao, C. Ramirez
We consider the problem of inferring the operational status of a reactor facility using measurements from a radiation sensor network deployed around the facility's ventilation off-gas stack. The intensity of stack emissions decays with distance, and the sensor counts are inherently random, with parameters determined by the intensity at the sensor's location. We use the measurements to estimate the intensity at the stack, and apply a one-sided Sequential Probability Ratio Test (SPRT) to infer the on/off status of the reactor. We demonstrate the superior performance of this method over conventional majority fusers and individual sensors using (i) test measurements from a network of 21 NaI detectors, and (ii) effluence measurements collected at the stack of a reactor facility. We also analytically establish the superior detection performance of the network over individual sensors with fixed and adaptive thresholds by exploiting the Poisson distribution of the counts.
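The decision step can be sketched with Wald's SPRT for Poisson counts: accumulate the log-likelihood ratio of the "on" emission rate versus the "off" rate and stop once a Wald threshold is crossed. A minimal sketch (the rates, error levels, and the exact one-sided variant used in the paper are assumptions; this version keeps both stopping boundaries):

```python
import math

def sprt_poisson(counts, lam_off, lam_on, alpha=0.01, beta=0.01):
    """SPRT on Poisson sensor counts: accumulate the log-likelihood
    ratio of 'reactor on' (rate lam_on) vs 'off' (rate lam_off) and
    declare a status once a Wald threshold is crossed.
    Returns (decision, samples_used)."""
    upper = math.log((1.0 - beta) / alpha)    # accept 'on'
    lower = math.log(beta / (1.0 - alpha))    # accept 'off'
    llr = 0.0
    for t, c in enumerate(counts, start=1):
        # Poisson log-likelihood ratio for one count (factorials cancel).
        llr += c * math.log(lam_on / lam_off) - (lam_on - lam_off)
        if llr >= upper:
            return "on", t
        if llr <= lower:
            return "off", t
    return "undecided", len(counts)
```

In the networked setting described, the counts fed to the test would come from the fused stack-intensity estimate rather than a single detector.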
We quantify the performance improvement of network detection over individual sensors using the packing number of the intensity space.
Title: Facility activity inference using networks of radiation detectors based on SPRT.
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170391
J. Buyer, M. Vollert, Mihai Kocsis, Nico Susmann, R. Zöllner
This paper presents an approach for tracking a variable number of objects using a multi-layer particle filter combined with extended Expectation Maximization (EM) clustering. The approach works on the basis of binary foreground images produced by a preceding background subtraction step. The multi-layer particle filter improves on the conventional particle filter: it introduces an adaptive layer distribution spanned over the tracking area, which determines the areal extent of the particles. Thus, the multi-modal posterior distribution representing the objects is approximated with locally different resolutions. In addition, the layer distribution is used to find newly appearing objects. To generate an object list from the particle density, EM clustering is used. The basic algorithm is extended with an estimate of the required number of clusters, obtained by iteratively splitting and comparing the overall cluster areas. The new tracking approach improves tracking quality and robustness compared to the conventional particle filter approach. Experimental results are shown for the example of a traffic scene at a roundabout.
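The cluster-count estimation by splitting can be caricatured with a hard-EM (k-means style) loop that splits the widest cluster until every cluster is tight enough. This sketch substitutes a simple RMS-radius criterion for the paper's cluster-area comparison (an assumption):

```python
import numpy as np
from numpy.linalg import norm

def split_clustering(points, max_spread=1.0, max_k=10):
    """Estimate the number of objects in a particle cloud by iterative
    splitting: run a few hard-EM (k-means style) iterations, then split
    the cluster with the largest RMS radius until all clusters are
    tighter than max_spread (or max_k is reached)."""
    pts = np.asarray(points, float)
    centers = pts.mean(axis=0, keepdims=True)
    while True:
        for _ in range(10):                 # hard-EM assign / re-estimate
            labels = norm(pts[:, None] - centers[None], axis=2).argmin(axis=1)
            centers = np.array([pts[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(len(centers))])
        spreads = np.array([np.sqrt(((pts[labels == j] - centers[j]) ** 2)
                                    .sum(axis=1).mean())
                            if np.any(labels == j) else 0.0
                            for j in range(len(centers))])
        if spreads.max() <= max_spread or len(centers) >= max_k:
            return len(centers), labels
        j = int(spreads.argmax())           # split the widest cluster
        offset = np.full(pts.shape[1], 0.5 * max_spread)
        centers = np.vstack([np.delete(centers, j, axis=0),
                             centers[j] - offset, centers[j] + offset])
```

In the tracking pipeline described, `points` would be the particle positions, and the returned labels would yield one object hypothesis per cluster.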
Title: Image-based multi-target tracking using a multi-layer particle filter and extended EM clustering.