Weighted Hough voting for multi-view car detection
Pub Date: 2017-07-10 | DOI: 10.23919/ICIF.2017.8009658
T. Xiang, Zuomei Lai, Wensheng Qiao, Tao Li
Hough voting based methods for object detection work by allowing local image patches to vote for the center of the object according to trained visual words. They are effective for objects with small local variations, but cannot solve the multi-view detection problem on their own. The traditional remedy is to train visual words separately for each subcategory covering a similar view; however, limited training data prevents this from being effective. In this paper, we propose an extension to Hough voting that shares visual words among multiple subcategories and accumulates votes with discriminative combination weights for the different subcategories. The shared visual words are learned from dense image patches. With such visual words, we can collect descriptors of samples from all subcategories and a negative set to train the discriminative combination weights. The final score of a hypothesis is the maximum over all discretized views. By fusing the geometric structure, image appearance, and view information of the object, the multi-view object detection problem is handled effectively. We focus mainly on multi-view car detection, although the method is not limited to cars. The proposed method is evaluated on two well-known datasets, the MIT StreetScene Cars dataset and the PASCAL VOC2007 car dataset, and the experimental results show that it achieves state-of-the-art or competitive performance.
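To make the voting scheme concrete, here is a minimal sketch of weighted Hough voting with shared visual words: each patch is matched to its nearest shared word, the word casts a vote for the object center into a separate accumulator for every discretized view using per-view combination weights, and the hypothesis score is the maximum over views. The names (`codebook`, `offsets`, `view_weights`) are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

def hough_score(patches, positions, codebook, offsets, view_weights, img_shape):
    """Toy weighted Hough voting: shared visual words vote for object centers,
    with per-view combination weights; hypothesis score = max over views.

    patches      : (N, D) array of patch descriptors
    positions    : (N, 2) array of patch (x, y) locations
    codebook     : (K, D) array of shared visual-word centers
    offsets      : (K, 2) array of word-to-object-center offsets
    view_weights : (V, K) array of combination weights, one row per view
    """
    V = view_weights.shape[0]
    H, W = img_shape
    acc = np.zeros((V, H, W))                                   # one accumulator per view

    for desc, pos in zip(patches, positions):
        k = np.argmin(np.linalg.norm(codebook - desc, axis=1))  # nearest shared word
        cx, cy = np.round(pos + offsets[k]).astype(int)         # voted object center
        if 0 <= cy < H and 0 <= cx < W:
            acc[:, cy, cx] += view_weights[:, k]                # weighted vote in every view

    best_view = np.unravel_index(np.argmax(acc), acc.shape)[0]
    return acc.max(), best_view                                 # final score = max over views
```

In the paper the combination weights are trained discriminatively from descriptors of all subcategories plus a negative set; here they are simply assumed to be given.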
{"title":"Weighted Hough voting for multi-view car detection","authors":"T. Xiang, Zuomei Lai, Wensheng Qiao, Tao Li","doi":"10.23919/ICIF.2017.8009658","DOIUrl":"https://doi.org/10.23919/ICIF.2017.8009658","url":null,"abstract":"Hough voting based methods for object detection work by means of allowing local image patches to vote for the center of the object according to the trained visual words. They are effective for object with small local varieties, but incapable of solving multi-view detection problem. The traditional way is training visual words for each subcategory that has similar view. However, limited training data prevents this from being effective. In this paper, we propose an extension to the Hough voting which allows for sharing visual words among multiple subcategories and accumulating votes with discriminative combination weights for different subcategories. The shared visual words are learned using dense image patches. Having such visual words, we can collect descriptors of samples in all subcategories and negative set to train the discriminative combination weights. The final score of a hypothesis is the maximum one in all discretized views. By fusing the geometry structure, image appearance and view information of the object, multi-view object detection problem is solved effectively. In this paper, we mainly focus on multi-view car detection, but not limited to. The proposed method is evaluated on 2 well-known datasets: MIT StreetScene Cars dataset and PASCAL VOC2007 car dataset. The experimental results demonstrate that our method achieves state-of-the-art or competitive performance.","PeriodicalId":148407,"journal":{"name":"2017 20th International Conference on Information Fusion (Fusion)","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131972252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feature uncertainty estimation in sensor fusion applied to autonomous vehicle location
Pub Date: 2017-07-10 | DOI: 10.23919/ICIF.2017.8009655
C. M. Martinez, Feihu Zhang, Daniel Clarke, Gereon Hinz, Dongpu Cao
Within the complex driving environment, progress in autonomous vehicles is supported by advances in sensing and data fusion. Safe and robust autonomous driving can only be guaranteed if vehicles and infrastructure are fully aware of the driving scenario. This paper proposes a methodology for predicting feature uncertainty in sensor fusion by generating neural network surrogate models directly from data. The technique is applied to vehicle localization, using odometry measurements, vehicle speed, and orientation to estimate the location uncertainty at any point along the trajectory. Neural networks are shown to be a suitable modeling technique, offering good generalization capability and robust results.
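As a rough illustration of the surrogate-model idea, the sketch below fits a small feed-forward network that maps odometry distance, speed, and heading to an estimated location-error magnitude. The synthetic data, the feature set, and the use of scikit-learn's `MLPRegressor` are assumptions for illustration only, not the authors' architecture or training setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data: each row is (odometry distance, speed, heading),
# and the target is the observed location error magnitude at that trajectory point.
rng = np.random.default_rng(0)
X = rng.uniform([0.0, 0.0, -np.pi], [100.0, 30.0, np.pi], size=(500, 3))
y = 0.05 * X[:, 0] + 0.02 * X[:, 1] + rng.normal(0.0, 0.1, size=500)  # synthetic errors

# Small feed-forward network acting as a surrogate model of feature uncertainty.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
surrogate.fit(X, y)

# Predicted location uncertainty for a new point along the trajectory, which a
# downstream fusion filter could use as a measurement noise level.
print(surrogate.predict([[50.0, 10.0, 0.3]]))
```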
{"title":"Feature uncertainty estimation in sensor fusion applied to autonomous vehicle location","authors":"C. M. Martinez, Feihu Zhang, Daniel Clarke, Gereon Hinz, Dongpu Cao","doi":"10.23919/ICIF.2017.8009655","DOIUrl":"https://doi.org/10.23919/ICIF.2017.8009655","url":null,"abstract":"Within the complex driving environment, progress in autonomous vehicles is supported by advances in sensing and data fusion. Safe and robust autonomous driving can only be guaranteed provided that vehicles and infrastructure are fully aware of the driving scenario. This paper proposes a methodology for feature uncertainty prediction for sensor fusion by generating neural network surrogate models directly from data. This technique is particularly applied to vehicle location through odometry measurements, vehicle speed and orientation, to estimate the location uncertainty at any point along the trajectory. Neural networks are shown to be a suitable modeling technique, presenting good generalization capability and robust results.","PeriodicalId":148407,"journal":{"name":"2017 20th International Conference on Information Fusion (Fusion)","volume":"377 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133451339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Log-Euclidean metric for robust multi-modal deformable registration
Pub Date: 2017-07-10 | DOI: 10.23919/ICIF.2017.8009688
Qiegen Liu, H. Leung
Registration of images from different modalities in the presence of intra-image fluctuation and noise contamination is a challenging task. The accuracy and robustness of deformable registration largely depend on the definition of an appropriate objective function measuring the similarity between the images. Among existing approaches, the multi-dimensional modality independent neighbourhood descriptor (MIND) is promising, yet its ability is limited by non-uniform bias fields, image noise, and similar effects. Motivated by the fact that the Log-Euclidean metric has attractive invariance properties, such as invariance to inversion and to similarity transformations, this paper introduces an objective function that embeds a Log-Euclidean similarity metric between patches to form a multi-dimensional descriptor. A Gaussian-like penalty function built on the Log-Euclidean metric between the images to be registered is incorporated to better reflect how well feature discriminability and structure ordering are preserved. Experimental results show the advantages of the proposed method over state-of-the-art techniques both quantitatively and qualitatively.
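For concreteness, the sketch below computes a Log-Euclidean distance between symmetric positive-definite patch descriptors, which is the metric the proposed descriptor is built on. The covariance-of-gradients descriptor used here is a hypothetical stand-in; the paper embeds the metric into a MIND-style multi-dimensional descriptor and a registration objective, which is not reproduced here.

```python
import numpy as np

def spd_log(A):
    """Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(A, B):
    """Log-Euclidean distance ||log(A) - log(B)||_F between SPD descriptors."""
    return np.linalg.norm(spd_log(A) - spd_log(B), ord="fro")

def patch_covariance(patch, eps=1e-6):
    """Hypothetical SPD patch descriptor: covariance of simple per-pixel features
    (intensity and its two image gradients), regularized to remain SPD."""
    gy, gx = np.gradient(patch.astype(float))
    feats = np.stack([patch.ravel(), gx.ravel(), gy.ravel()], axis=0)
    return np.cov(feats) + eps * np.eye(3)

# Distance between descriptors of two random 8x8 patches.
rng = np.random.default_rng(1)
p1, p2 = rng.random((8, 8)), rng.random((8, 8))
print(log_euclidean_distance(patch_covariance(p1), patch_covariance(p2)))
```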
{"title":"Log-Euclidean metric for robust multi-modal deformable registration","authors":"Qiegen Liu, H. Leung","doi":"10.23919/ICIF.2017.8009688","DOIUrl":"https://doi.org/10.23919/ICIF.2017.8009688","url":null,"abstract":"Registration of images from different modalities in the presence of intra-image fluctuation and noise contamination is a challenging task. The accuracy and robustness of the deformable registration largely depend on the definition of appropriate objective function, measuring the similarity between the images. Among them the multi-dimensional modality independent neighbourhood descriptor (MIND) is a promising method, yet its ability is limited by non-uniform bias fields and image noise, etc. Motivated by the fact that Log-Euclidean metric has promising invariance properties such as inversion invariant and similarity invariant, this paper introduces an objective function that embeds Log-Euclidean similarity metric between patches to form a multi-dimensional descriptor. The Gaussian-like penalty function consisting of the log-Euclidean metric between images to be registered is incorporated to better reflect the degree of preserving feature discriminability and structure ordering. Experimental results show the advantages of the proposed method over state-of-the-art techniques both quantitatively and qualitatively.","PeriodicalId":148407,"journal":{"name":"2017 20th International Conference on Information Fusion (Fusion)","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116581363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Target tracking using multiple auxiliary particle filtering
Pub Date: 2017-07-10 | DOI: 10.23919/ICIF.2017.8009620
Luis Úbeda-Medina, Á. F. García-Fernández, J. Grajal
Particle filters are a widely used tool for Bayesian filtering under nonlinear dynamic and measurement models or non-Gaussian distributions. However, their performance plummets in high-dimensional state spaces. In this paper, we propose a method that uses multiple particle filtering to circumvent this difficulty. Multiple particle filters partition the state space and run an individual particle filter for every component, and each filter shares information with the others to account for the influence of the complete state on the observations collected by the sensors. The method considered in this paper uses auxiliary filtering within the multiple particle filter (MPF) framework, outperforming previous algorithms in the literature. Its performance is tested in a multiple target tracking scenario with a fixed and known number of targets, using a sensor network with a nonlinear measurement model.
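The following is a deliberately simplified, bootstrap-flavored sketch of the multiple-particle-filter partitioning idea: a two-dimensional state (two target positions on a line) is split into two components, each with its own particle filter, and each filter weights its particles using the other filter's current point estimate so that the complete state enters the shared likelihood. It omits the auxiliary-filtering step that is the paper's actual contribution, and all models and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: state = positions of two targets on a line, partitioned into two
# components; a single sensor at the origin sees a superposition of both targets.
def sensor_model(x1, x2, sensor=0.0):
    return 1.0 / (1.0 + (x1 - sensor) ** 2) + 1.0 / (1.0 + (x2 - sensor) ** 2)

N, T, q, r = 500, 30, 0.1, 0.05
true = np.array([-2.0, 3.0])
parts = [rng.normal(true[0], 1.0, N), rng.normal(true[1], 1.0, N)]  # one PF per component

for t in range(T):
    true += rng.normal(0.0, q, 2)                    # targets drift (random walk)
    z = sensor_model(*true) + rng.normal(0.0, r)     # shared nonlinear measurement

    # Each component filter predicts its own particles, then weights them using the
    # other filter's current point estimate so the full state enters the likelihood.
    est = [p.mean() for p in parts]
    for i in range(2):
        parts[i] = parts[i] + rng.normal(0.0, q, N)  # predict
        other = est[1 - i]
        pred = (sensor_model(parts[i], other) if i == 0
                else sensor_model(other, parts[i]))
        w = np.exp(-0.5 * ((z - pred) / r) ** 2) + 1e-300
        w /= w.sum()
        parts[i] = rng.choice(parts[i], size=N, p=w) # resample

print("estimates:", [p.mean() for p in parts], "truth:", true)
```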
{"title":"Target tracking using multiple auxiliary particle filtering","authors":"Luis Úbeda-Medina, Á. F. García-Fernández, J. Grajal","doi":"10.23919/ICIF.2017.8009620","DOIUrl":"https://doi.org/10.23919/ICIF.2017.8009620","url":null,"abstract":"Particle filters are a widely used tool to perform Bayesian filtering under nonlinear dynamic and measurement models or non-Gaussian distributions. However, the performance of particle filters plummets when dealing with high-dimensional state spaces. In this paper, we propose a method that makes use of multiple particle filtering to circumvent this difficulty. Multiple particle filters partition the state space and run an individual particle filter for every component. Each particle filter shares information with the rest of the filters to account for the influence of the complete state in the observations collected by sensors. The method considered in this paper uses auxiliary filtering within the MPF framework, outperforming previous algorithms in the literature. The performance of the considered algorithm is tested in a multiple target tracking scenario, with fixed and known number of targets, using a sensor network with a nonlinear measurement model.","PeriodicalId":148407,"journal":{"name":"2017 20th International Conference on Information Fusion (Fusion)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122919571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quality-aware human-driven information fusion model
Pub Date: 2017-07-10 | DOI: 10.23919/ICIF.2017.8009851
L. C. Botega, Valdir A. Pereira, Allan Oliveira, J. F. Saran, L. Villas, R. B. Araujo
Situational Awareness (SAW) is a widespread concept in areas that require critical decision-making and refers to the level of awareness that an individual or team has of a situation. Poor SAW can lead humans to failures in the decision-making process, resulting in loss of life and property damage. Data fusion processes offer opportunities to enrich the knowledge about situations by integrating heterogeneous and synergistic data from different sources and transforming them into more meaningful input for decision-making. However, a problem arises when the information itself suffers from quality issues, especially when humans are the main sources of data (HUMINT). Motivated by the informational demands of the emergency management domain and by the limitations and challenges of the state of the art, this work proposes and describes a new information fusion model, called Quantify (Quality-aware Human-Driven Information Fusion Model), whose main contribution is the systematic use of quality information management throughout the fusion process to parameterize and guide the work of humans and systems. To validate the model, an emergency situation assessment system prototype, called ESAS (Emergency Situation Assessment System), was developed. Experts from the Sao Paulo State Police (PMESP) then tested the prototype, and the system was evaluated using SART (Situation Awareness Rating Technique), which showed higher SAW ratings with the Quantify model than with the state-of-the-art model, especially on questions related to resource supply and situational understanding.
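For reference, SART combines its rated dimensions into a single score as SA = Understanding − (Demand − Supply); the snippet below merely evaluates that standard combination on made-up ratings and does not use data from the study.

```python
def sart_score(demand, supply, understanding):
    """Combined SART rating: SA = Understanding - (Demand - Supply).
    Each argument is the summed (or averaged) rating of its SART dimensions."""
    return understanding - (demand - supply)

# Hypothetical ratings from one evaluation session (illustrative numbers only).
print(sart_score(demand=12, supply=18, understanding=16))   # -> 22
```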
{"title":"Quality-aware human-driven information fusion model","authors":"L. C. Botega, Valdir A. Pereira, Allan Oliveira, J. F. Saran, L. Villas, R. B. Araujo","doi":"10.23919/ICIF.2017.8009851","DOIUrl":"https://doi.org/10.23919/ICIF.2017.8009851","url":null,"abstract":"Situational Awareness (SAW) is a widespread concept in areas that require critical decision-making and refers to the level of consciousness that an individual or team has about a situation. A poor SAW can induce humans to failures in the decision-making process, leading to losses of lives and property damage. Data fusion processes present opportunities to enrich the knowledge about situations by integrating heterogeneous and synergistic data from different sources and transforming them into more meaningful subsidies for decision-making. However, a problem arises when information is subject to problems concerning its quality, especially when humans are the main sources of data (HUMINT). Motivated by the informational demand from the emergency management domain and by the limitations and challenges of the state of the art, this work proposes and describes a new information fusion model, called Quantify (Quality-aware Human-Driven Information Fusion Model), whose main contribution is the exhaustive use of the quality information management throughout the fusion process to parameterize and to guide the work of humans and systems. To validate the model, an emergency situation assessment system prototype was developed, called ESAS (Emergency Situation Assessment Systems). Then, experts from the Sao Paulo State Police (PMESP) tested the prototypes and the system was evaluated using SART (Situation Awareness Rating Technique), which showed higher rates of SAW using the Quantify model, compared to the model from the state-of-the-art, especially in questions relating to the components of resource supply and situational understanding.","PeriodicalId":148407,"journal":{"name":"2017 20th International Conference on Information Fusion (Fusion)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122573623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An implementation of the multi-sensor generalized labeled multi-Bernoulli filter via Gibbs sampling
Pub Date: 2017-07-10 | DOI: 10.23919/ICIF.2017.8009647
B. Vo, B. Vo
This paper proposes an efficient implementation of the multi-sensor generalized labeled multi-Bernoulli (GLMB) filter. The solution exploits the joint GLMB prediction and update together with a new technique for truncating the GLMB filtering density based on Gibbs sampling. The resulting algorithm has a complexity on the order of the product of the numbers of measurements from the individual sensors, and quadratic in the number of hypothesized objects.
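The sketch below illustrates the general Gibbs-sampling truncation idea in a single-sensor setting: association vectors are sampled coordinate-wise from a positive weight matrix while respecting the rule that each measurement is claimed by at most one object, and the distinct vectors visited become the retained hypotheses. The multi-sensor sampler in the paper operates on joint multi-sensor association maps and is considerably more involved; the matrix `eta` here is invented.

```python
import numpy as np

def gibbs_truncate(eta, n_iter=1000, rng=None):
    """Sample significant association hypotheses by coordinate-wise Gibbs sampling.

    eta : (n_obj, n_meas + 1) array of positive association weights; column 0 is
          the 'missed detection' option, and each measurement column 1..n_meas
          may be claimed by at most one object.
    Returns the set of distinct association vectors visited by the chain, which
    serve as the retained components of the truncated filtering density.
    """
    rng = rng or np.random.default_rng()
    n_obj, n_opt = eta.shape
    gamma = np.zeros(n_obj, dtype=int)          # start from the all-missed assignment
    visited = set()
    for _ in range(n_iter):
        for i in range(n_obj):
            taken = {gamma[k] for k in range(n_obj) if k != i and gamma[k] != 0}
            probs = eta[i].copy()
            for j in taken:
                probs[j] = 0.0                  # exclude measurements used elsewhere
            probs /= probs.sum()
            gamma[i] = rng.choice(n_opt, p=probs)
        visited.add(tuple(int(g) for g in gamma))
    return visited

# Tiny example: 2 objects, 2 measurements (column 0 = missed detection).
eta = np.array([[0.1, 5.0, 0.5],
                [0.2, 0.4, 4.0]])
print(gibbs_truncate(eta, n_iter=200))
```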
{"title":"An implementation of the multi-sensor generalized labeled multi-Bernoulli filter via Gibbs sampling","authors":"B. Vo, B. Vo","doi":"10.23919/ICIF.2017.8009647","DOIUrl":"https://doi.org/10.23919/ICIF.2017.8009647","url":null,"abstract":"This paper proposes an efficient implementation of the multi-sensor generalized labeled multi-Bernoulli (GLMB) filter. The solution exploits the GLMB joint prediction and update together with a new technique for truncating the GLMB filtering density based on Gibbs sampling. The resulting algorithm has a complexity in the order of the product of the number of measurements from each sensor, and quadratic in the number of hypothesized objects.","PeriodicalId":148407,"journal":{"name":"2017 20th International Conference on Information Fusion (Fusion)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130548789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Constrained multiple model maximum a posteriori estimation using list Viterbi algorithm
Pub Date: 2017-07-01 | DOI: 10.23919/ICIF.2017.8009649
V. Jilkov, Jeffrey H. Ledet, X. R. Li
This paper proposes a new approach to constrained multiple model (MM) maximum a posteriori (MAP) estimation through the expectation-maximization (EM) method, using our previously developed constrained sequential list Viterbi algorithm (CSLVA). The approach is general and applicable to any type of constraint, provided it can be verified. Specific algorithms for implementation are designed, and the performance of the proposed method is illustrated by simulation.
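As a toy illustration of the constrained list Viterbi idea, the brute-force sketch below ranks all model sequences of a small multiple-model problem by their posterior score and returns the best one that passes a user-supplied constraint check. The real CSLVA enumerates candidate sequences efficiently rather than exhaustively, and its role inside the EM iteration is not shown; all inputs here are hypothetical.

```python
import itertools
import numpy as np

def best_constrained_sequence(log_pi, log_A, log_lik, is_feasible):
    """Brute-force stand-in for a constrained (list) Viterbi search: enumerate
    model sequences in decreasing posterior score and return the best one that
    satisfies the constraint check.

    log_pi      : (M,) log prior over models at time 0
    log_A       : (M, M) log transition probabilities between models
    log_lik     : (T, M) per-time log-likelihood of the data under each model
    is_feasible : callable mapping a model sequence (tuple) to True/False
    """
    T, M = log_lik.shape
    scored = []
    for seq in itertools.product(range(M), repeat=T):
        score = log_pi[seq[0]] + log_lik[0, seq[0]]
        for t in range(1, T):
            score += log_A[seq[t - 1], seq[t]] + log_lik[t, seq[t]]
        scored.append((score, seq))
    for score, seq in sorted(scored, reverse=True):      # best sequence first
        if is_feasible(seq):
            return seq, score
    return None, -np.inf

# Toy example: 2 models, 4 time steps; constraint = at most one model switch.
rng = np.random.default_rng(3)
log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.9, 0.1], [0.1, 0.9]])
log_lik = rng.normal(size=(4, 2))
at_most_one_switch = lambda s: sum(a != b for a, b in zip(s, s[1:])) <= 1
print(best_constrained_sequence(log_pi, log_A, log_lik, at_most_one_switch))
```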
{"title":"Constrained multiple model maximum a posteriori estimation using list Viterbi algorithm","authors":"V. Jilkov, Jeffrey H. Ledet, X. R. Li","doi":"10.23919/ICIF.2017.8009649","DOIUrl":"https://doi.org/10.23919/ICIF.2017.8009649","url":null,"abstract":"This paper proposes a new approach for constrained multiple model (MM) maximum a posteriori (MAP) estimation through the expectation-maximization (EM) method by using our previously developed constrained sequential list Viterbi algorithm (CSLVA). The approach is general and applicable for any type of constraints provided they are verifiable. Specific algorithms for implementation are designed, and the performance of the proposed method is illustrated by simulation.","PeriodicalId":148407,"journal":{"name":"2017 20th International Conference on Information Fusion (Fusion)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117124505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fault diagnosis of civil aircraft electrical system based on evidence theory
Pub Date: 2017-07-01 | DOI: 10.23919/ICIF.2017.8009666
Ying Wu
The Dempster-Shafer (D-S) method has been widely used in fault diagnosis systems for the electrical systems of civil aircraft, but it has difficulty combining pieces of evidence with a high degree of conflict. To solve this problem, this paper proposes a new method that introduces historical data, defines the concept of a modifying factor, and accounts for the influence of expert knowledge on the basic probability assignments. The experimental results show that the new method improves the reliability and accuracy of the fault diagnosis results and enhances the performance of the system. This method is a breakthrough in the engineering application of D-S evidence theory.
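For background, the sketch below implements classical Dempster combination for two mass functions, plus a simple reliability-discounting step that stands in for the paper's modifying factor (the actual factor is derived from historical data and expert knowledge, which is not reproduced here). The fault hypotheses, masses, and discount value are invented.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over subsets of a frame, with
    focal sets encoded as frozensets; returns combined masses and the conflict K."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}, conflict

def discount(m, alpha, frame):
    """Hypothetical 'modifying factor': discount a source by alpha (0..1),
    moving the removed mass to the whole frame (ignorance), e.g. based on
    historical reliability or expert knowledge."""
    out = {frozenset(frame): m.get(frozenset(frame), 0.0)}
    for s, v in m.items():
        if s != frozenset(frame):
            out[s] = alpha * v
            out[frozenset(frame)] += (1.0 - alpha) * v
    return out

frame = {"bus_fault", "generator_fault"}
m1 = {frozenset({"bus_fault"}): 0.9, frozenset(frame): 0.1}
m2 = {frozenset({"generator_fault"}): 0.9, frozenset(frame): 0.1}  # highly conflicting
fused, K = dempster_combine(discount(m1, 0.8, frame), discount(m2, 0.8, frame))
print(K, fused)
```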
{"title":"Fault diagnosis of civil aircraft electrical system based on evidence theory","authors":"Ying Wu","doi":"10.23919/ICIF.2017.8009666","DOIUrl":"https://doi.org/10.23919/ICIF.2017.8009666","url":null,"abstract":"Dempster-Shafer(D-S) method has been used widely in fault diagnosis system of civil aircraft electrical system, but it has difficulty in dealing with combining evidences with high degree of conflict. In order to solve the problem, a new method is proposed in this paper. The proposed method in this paper introduces the historical data, defines the concept of modifying factor, and considers the influence of expert knowledge to basic probability assignments. The experimental results show that the new method can improve the reliability and accuracy of fault diagnosis results and enhance the performance of the system. This method is a breakthrough in the engineering application of D-S evidence theory.","PeriodicalId":148407,"journal":{"name":"2017 20th International Conference on Information Fusion (Fusion)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127304630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved multi-resolution method for MLE-based localization of radiation sources
Pub Date: 2017-07-01 | DOI: 10.23919/ICIF.2017.8009626
Guthrie Cordone, R. Brooks, S. Sen, N. Rao, C. Wu, M. Berry, Kayla M. Grieme
Multi-resolution grid computation is a technique used to speed up source localization with a maximum likelihood estimation (MLE) algorithm. When the source lies midway between grid points, the MLE algorithm may choose an incorrect location, causing subsequent iterations of the search to close in on an area that does not contain the source. To address this issue, we propose a modification to multi-resolution MLE that expands the search area by a small percentage between two consecutive MLE iterations. At the cost of slightly more computation, this modification allows consecutive iterations to accurately locate the target over a larger portion of the field than standard multi-resolution localization. The localization and computation performance of our approach is compared to both standard multi-resolution and single-resolution MLE algorithms. Tests are performed on seven data sets representing different scenarios of a single radiation source located within an indoor field of detectors. The results show that our method (i) significantly improves localization accuracy in cases that caused initial grid selection errors in traditional MLE algorithms, (ii) does not harm localization accuracy in other cases, and (iii) requires a negligible increase in computation time relative to the gain in localization accuracy.
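A minimal sketch of the window-expansion idea, under assumed grid sizes, an assumed expansion percentage, and a toy quadratic objective standing in for a real radiation likelihood: at each refinement level the window around the best grid cell is enlarged by a small percentage so that a source lying between grid points is not excluded by an early selection error.

```python
import numpy as np

def multires_mle(neg_log_lik, bounds, levels=4, grid=8, expand=0.10):
    """Multi-resolution grid search for an MLE source location.
    At each level the grid is refined around the current best cell, and the
    refined search window is enlarged by `expand` (e.g. 10%) on every side."""
    (xmin, xmax), (ymin, ymax) = bounds
    for _ in range(levels):
        xs = np.linspace(xmin, xmax, grid)
        ys = np.linspace(ymin, ymax, grid)
        scores = np.array([[neg_log_lik(x, y) for x in xs] for y in ys])
        iy, ix = np.unravel_index(np.argmin(scores), scores.shape)
        dx, dy = (xmax - xmin) / (grid - 1), (ymax - ymin) / (grid - 1)
        # New window: one cell around the best point, grown by the expansion factor.
        xmin, xmax = xs[ix] - (1 + expand) * dx, xs[ix] + (1 + expand) * dx
        ymin, ymax = ys[iy] - (1 + expand) * dy, ys[iy] + (1 + expand) * dy
    return xs[ix], ys[iy]

# Toy objective whose minimum (the "source") lies between the initial grid points.
src = (3.37, 7.81)
nll = lambda x, y: (x - src[0]) ** 2 + (y - src[1]) ** 2
print(multires_mle(nll, bounds=((0, 10), (0, 10))))
```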
{"title":"Improved multi-resolution method for MLE-based localization of radiation sources","authors":"Guthrie Cordone, R. Brooks, S. Sen, N. Rao, C. Wu, M. Berry, Kayla M. Grieme","doi":"10.23919/ICIF.2017.8009626","DOIUrl":"https://doi.org/10.23919/ICIF.2017.8009626","url":null,"abstract":"Multi-resolution grid computation is a technique used to speed up source localization with a Maximum Likelihood Estimation (MLE) algorithm. In the case where the source is located midway between grid points, the MLE algorithm may choose an incorrect location, causing following iterations of the search to close in on an area that does not contain the source. To address this issue, we propose a modification to multi-resolution MLE that expands the search area by a small percentage between two consecutive MLE iterations. At the cost of slightly more computation, this modification allows consecutive iterations to accurately locate the target over a larger portion of the field than a standard multi-resolution localization. The localization and computation performance of our approach is compared to both standard multi-resolution and single-resolution MLE algorithms. Tests are performed using seven data sets representing different scenarios of a single radiation source located within an indoor field of detectors. Results show that our method (i) significantly improves the localization accuracy in cases that caused initial grid selection errors in traditional MLE algorithms, (ii) does not have a negative impact on the localization accuracy in other cases, and (iii) requires a negligible increase in computation time relative to the increase in localization accuracy.","PeriodicalId":148407,"journal":{"name":"2017 20th International Conference on Information Fusion (Fusion)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125120208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Projection-based circular constrained state estimation and fusion over long-haul links
Pub Date: 2017-07-01 | DOI: 10.23919/ICIF.2017.8009836
Qiang Liu, N. Rao
In this paper, we consider a scenario in which sensors are deployed over a large geographical area to track a target whose motion dynamics are subject to circular nonlinear constraints. The sensor state estimates are sent over long-haul networks to a remote fusion center for fusion. We are interested in different ways of incorporating the constraints into the estimation and fusion process in the presence of communication loss. In particular, we consider closed-form projection-based solutions, including rules for fusing the estimates and for incorporating the constraints, which together can guarantee the timely fusion often required in real-time systems. We test the performance of these methods in the long-haul tracking environment using a simple example.
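One simple projection-based arrangement, sketched under assumptions: two hypothetical sensor estimates are fused by an information-weighted average (independent errors assumed) and the fused position is then radially projected onto the circular constraint, leaving the covariance and velocity untouched. The paper compares several such closed-form rules and also addresses communication loss, neither of which is modeled here.

```python
import numpy as np

def project_to_circle(x, P, center, radius):
    """Project the position part of a state estimate onto a circular constraint.
    Minimal sketch: the position is radially projected onto the circle and the
    velocity is left unchanged; x = [px, py, vx, vy], P is the 4x4 covariance
    (returned unchanged here; a fuller treatment would also adjust it)."""
    pos = x[:2] - center
    r = np.linalg.norm(pos)
    x_proj = x.copy()
    x_proj[:2] = center + radius * pos / r if r > 0 else center + [radius, 0.0]
    return x_proj, P

def fuse_and_project(estimates, center, radius):
    """Information-weighted average of sensor estimates (independence assumed),
    followed by projection of the fused state onto the circular constraint."""
    infos = [np.linalg.inv(P) for _, P in estimates]
    P_f = np.linalg.inv(sum(infos))
    x_f = P_f @ sum(I @ x for (x, _), I in zip(estimates, infos))
    return project_to_circle(x_f, P_f, center, radius)

# Two hypothetical sensor estimates of a target constrained to a circle of radius 10.
x1 = np.array([9.5, 0.5, 0.0, 1.0]);  P1 = np.diag([1.0, 1.0, 0.1, 0.1])
x2 = np.array([10.4, -0.3, 0.1, 0.9]); P2 = np.diag([0.5, 0.5, 0.1, 0.1])
xf, Pf = fuse_and_project([(x1, P1), (x2, P2)],
                          center=np.array([0.0, 0.0]), radius=10.0)
print(xf)
```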
{"title":"Projection-based circular constrained state estimation and fusion over long-haul links","authors":"Qiang Liu, N. Rao","doi":"10.23919/ICIF.2017.8009836","DOIUrl":"https://doi.org/10.23919/ICIF.2017.8009836","url":null,"abstract":"In this paper, we consider a scenario where sensors are deployed over a large geographical area for tracking a target with circular nonlinear constraints on its motion dynamics. The sensor state estimates are sent over long-haul networks to a remote fusion center for fusion. We are interested in different ways to incorporate the constraints into the estimation and fusion process in the presence of communication loss. In particular, we consider closed-form projection-based solutions, including rules for fusing the estimates and for incorporating the constraints, which jointly can guarantee timely fusion often required in realtime systems. We test the performance of these methods in the long-haul tracking environment using a simple example.","PeriodicalId":148407,"journal":{"name":"2017 20th International Conference on Information Fusion (Fusion)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125914840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}