Pub Date: 2017-08-15 | DOI: 10.23919/ICIF.2017.8009785
Supriyo Chakraborty, A. Preece, M. Alzantot, Tianwei Xing, Dave Braines, M. Srivastava
Situational understanding (SU) requires a combination of insight — the ability to accurately perceive an existing situation — and foresight — the ability to anticipate how an existing situation may develop in the future. SU involves information fusion as well as model representation and inference. Commonly, heterogeneous data sources must be exploited in the fusion process, often including both hard and soft data products. In a coalition context, data and processing resources will also be distributed and subject to restrictions on information sharing. It will often be necessary for a human to be in the loop in SU processes, to provide key input and guidance, and to interpret outputs in a way that necessitates a degree of transparency in the processing: systems cannot be “black boxes”. In this paper, we characterize the Coalition Situational Understanding (CSU) problem in terms of fusion, temporal, distributed, and human requirements. There is currently significant interest in deep learning (DL) approaches for processing both hard and soft data. We analyze the state of the art in DL in relation to these requirements for CSU, and identify areas of considerable promise as well as key gaps.
Title: Deep learning for situational understanding
Pub Date: 2017-08-11 | DOI: 10.23919/ICIF.2017.8009806
Xina Cheng, N. Ikoma, M. Honda, T. Ikenaga
Ball state tracking and event detection play a significant role in volleyball game analysis for team support and tactics development. This paper proposes a ball event detection method that achieves a high detection rate by addressing several challenges: the great variety of event lengths, the large intra-class differences within a single event, and the influence of tracking errors in ball trajectories. The proposed state vector covers both the event type and the event period length, so that the system model can transition across event periods of various lengths and predict event types according to volleyball game rules. The curve segmental observation model avoids the influence of tracking errors when evaluating the event period likelihood by referring to neighbouring ball trajectories. In addition, based on the standard definition of each ball event, the distance between the ball and specific court lines is extracted as a feature to evaluate the event type in the observation. Finally, a two-layer estimation method estimates the posterior state, which is a joint probability distribution. Experiments on 3D trajectories tracked from multi-view volleyball game videos show that the detection rate reaches 90.43%.
Title: Event state based particle filter for ball event detection in volleyball game analysis
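The event-state formulation can be sketched as a particle filter whose particles carry both an event type and a remaining period length. The event set, rule-based transitions, period lengths, and confusion-style likelihood below are all illustrative stand-ins, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical event types and rule-based successors; real volleyball rules
# would constrain these more carefully.
EVENTS = ["serve", "pass", "set", "spike"]
NEXT = {"serve": ["pass"], "pass": ["set"], "set": ["spike"], "spike": ["pass"]}

def propagate(particles):
    # Advance each particle one frame; when its event period ends, draw a
    # successor event and a new period length.
    out = []
    for ev, remaining in particles:
        if remaining > 1:
            out.append((ev, remaining - 1))
        else:
            out.append((str(rng.choice(NEXT[ev])), int(rng.integers(1, 6))))
    return out

def update(particles, observed_event, p_correct=0.8):
    # Confusion-style likelihood on the observed event type, then
    # multinomial resampling.
    w = np.array([p_correct if ev == observed_event
                  else (1.0 - p_correct) / (len(EVENTS) - 1)
                  for ev, _ in particles])
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return [particles[i] for i in idx]

particles = [("serve", int(rng.integers(1, 6))) for _ in range(500)]
for obs in ["serve", "serve", "pass", "pass", "set"]:
    particles = update(propagate(particles), obs)

# Posterior over event type = fraction of particles per event.
counts = {e: sum(1 for ev, _ in particles if ev == e) for e in EVENTS}
```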
Pub Date: 2017-07-10 | DOI: 10.23919/ICIF.2017.8009805
C. Musso, A. Bresson, Y. Bidel, N. Zahzam, K. Dahia, J. Allard, B. Sacleux
The cold-atom interferometer is a promising technology for obtaining a highly sensitive and accurate absolute gravimeter. With the help of a gravity anomaly map, local measurements of gravity enable terrain-based navigation. We describe the model of the absolute gravity measurement and develop a Laplace-based particle filter adapted to this context. This nonlinear filter is able to estimate the positions and velocities of a carrier (vessel). Results on realistic simulated data are presented.
Title: Absolute gravimeter for terrain-aided navigation
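The core idea, weighting position hypotheses by how well a gravity-anomaly map explains an absolute gravimeter reading, fits in a single particle-filter measurement update. The 1-D map, noise level, and prior below are invented for illustration; the paper's Laplace-based filter is more elaborate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D gravity-anomaly profile along the track (a real system
# would interpolate a surveyed anomaly grid); monotonic, so the fix is unique.
def gravity_map(x):
    return 0.5 * x + 0.1 * np.sin(3.0 * x)

true_pos = 2.0
sigma = 0.02                                  # assumed gravimeter noise std
measurement = gravity_map(true_pos) + rng.normal(0.0, sigma)

# One particle-filter measurement update: weight position hypotheses by how
# well the map explains the absolute-gravity reading.
particles = rng.uniform(0.0, 5.0, size=5000)
weights = np.exp(-0.5 * ((measurement - gravity_map(particles)) / sigma) ** 2)
weights /= weights.sum()
estimate = float(np.sum(weights * particles))
```

With a monotonic map a single reading already localizes well; in practice the anomaly profile is ambiguous and the filter fuses many readings with INS propagation over time.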
Pub Date: 2017-07-10 | DOI: 10.23919/ICIF.2017.8009731
Tiancheng Li, J. Corchado, Huimin Chen, J. Bajo
Under the common state-space model for tracking a maneuvering target, the tracker needs to adapt its state-transition model in a timely manner to match the target maneuver. This is usually done either by selecting the best model from a bank of candidate Markov models or by employing all of them simultaneously with different probabilities. Both methods suffer from a time delay in confirming the target maneuver. To avoid these problems, we model the target motion by a continuous-time trajectory function, and the tracking problem is formulated as an optimization problem whose goal is to find the trajectory function that best fits the observations over a sliding time window. The trajectory function can be used for smoothing, filtering, and even prediction. The approach is particularly applicable to a class of target motion patterns, such as passenger aircraft, where little prior statistical information is available on the target dynamics or even the sensor observations beyond the linguistic statement that “the target moves in a smooth trajectory” (hence called a smoothly maneuvering target). Simulations based on Hartikainen et al.'s example demonstrate the advantage of our approach over a number of classical Markov-Bayes approaches.
Title: Track a smoothly maneuvering target based on trajectory estimation
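A minimal version of the sliding-window idea: fit a smooth function to recent observations by least squares, then reuse it for smoothing, filtering, and prediction. The polynomial degree, window, and noiseless data are assumptions made for clarity; the paper optimizes a more general trajectory function:

```python
import numpy as np

# Observations over a sliding window (noiseless quadratic for clarity).
t = np.linspace(0.0, 4.0, 9)
z = 1.0 + 2.0 * t - 0.25 * t ** 2

# Fit a smooth trajectory function x(t) to the window by least squares.
coeffs = np.polyfit(t, z, deg=2)

smoothed = np.polyval(coeffs, t)        # smoothing: re-evaluate inside the window
filtered = np.polyval(coeffs, t[-1])    # filtering: value at the newest time
predicted = np.polyval(coeffs, 5.0)     # prediction: extrapolate beyond the window
```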
Pub Date: 2017-07-10 | DOI: 10.23919/ICIF.2017.8009713
A. D. Freitas, C. Fritsche, L. Mihaylova, F. Gunnarsson
Advances in sensor systems have resulted in the availability of high-resolution sensors capable of generating massive amounts of data. For complex systems to run online, the primary focus is on computationally efficient filters for the estimation of latent states related to the data. In this paper, a novel method for efficient state estimation with the unscented Kalman filter is proposed. The focus is on applications involving massive amounts of data; from a modelling perspective, this amounts to a measurement vector with dimensionality significantly greater than that of the state vector. The efficiency of the filter derives from a parallel filter structure enabled by the expectation propagation algorithm. A novel parallel measurement processing expectation propagation unscented Kalman filter is developed. The primary advantage of the novel algorithm is its ability to achieve computational improvements with negligible losses in filter accuracy. An example of robot localization with a high-resolution laser rangefinder sensor is presented. A 47.53% decrease in computation time was observed on a processing platform with 4 processors, with a negligible loss in accuracy.
Title: A novel measurement processing approach to the parallel expectation propagation unscented Kalman filter
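The appeal of partitioning a high-dimensional measurement vector can be seen even in the linear case: with independent noise blocks, block-wise Kalman updates reproduce the joint update while inverting much smaller matrices. This is only a linear analogy; the paper combines the unscented transform with expectation propagation to recombine blocks processed in parallel:

```python
import numpy as np

rng = np.random.default_rng(2)

n, m, block = 2, 200, 20        # small state, large measurement vector
H = rng.normal(size=(m, n))     # linear measurement model (illustrative)
R = 0.5 * np.eye(m)             # independent measurement noises
x, P = np.zeros(n), np.eye(n)   # prior mean and covariance
z = rng.normal(size=m)

def kf_update(x, P, H, R, z):
    # Standard Kalman measurement update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

# Joint update: one 200x200 matrix inverse.
x_joint, P_joint = kf_update(x, P, H, R, z)

# Block-wise updates: ten 20x20 inverses; with independent noise blocks the
# posterior is identical, and the blocks could equally run in parallel.
x_blk, P_blk = x, P
for i in range(0, m, block):
    sl = slice(i, i + block)
    x_blk, P_blk = kf_update(x_blk, P_blk, H[sl], R[sl, sl], z[sl])
```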
Pub Date: 2017-07-10 | DOI: 10.23919/ICIF.2017.8009656
Zhun-ga Liu, Li Zhang, Gang Li, You He
A new change detection method for heterogeneous remote sensing images (i.e., SAR and optical) is proposed based on pixel transformation. It is difficult to compare pixels from heterogeneous images directly when detecting changes, so we propose to transfer the pixels of the different images into a common feature space for convenient comparison. Each pixel in the first image is transferred to the feature space associated with the second image according to a given set of unchanged pixel pairs; this transformation assumes that the pixel is not affected by the change event. The difference between the estimated transferred pixel and the actual pixel at the same location in the second image can then be calculated: the larger the difference value, the higher the probability that a change has occurred. The opposite transformation, from the second image to the first, is done similarly, yielding one more difference value in the first feature space. Change occurrences are detected with the fuzzy c-means clustering method applied to the sum of the two difference values. Experiments on flood detection in SAR and optical images show that the proposed method is able to detect changes efficiently.
Title: Change detection in heterogeneous remote sensing images based on the fusion of pixel transformation
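A toy version of the pixel-transformation step, using a k-nearest-neighbour estimate learned from unchanged pixel pairs as a stand-in for the paper's mapping; the linear relation between the two modalities is a synthetic assumption, not a physical SAR/optical model:

```python
import numpy as np

rng = np.random.default_rng(3)

def transfer(value, src_train, dst_train, k=5):
    # Estimate the pixel's value in the other modality from its k nearest
    # unchanged training pixels (a stand-in for the paper's transformation).
    idx = np.argsort(np.abs(src_train - value))[:k]
    return float(dst_train[idx].mean())

# Unchanged pixel pairs linking the two modalities (synthetic relation).
src_train = rng.uniform(0.0, 1.0, 300)
dst_train = 0.8 * src_train + 0.1

estimate = transfer(0.5, src_train, dst_train)

d_unchanged = abs(estimate - (0.8 * 0.5 + 0.1))   # matches: small difference
d_changed = abs(estimate - 0.9)                   # a change: large difference
```

In the paper the two directional difference values are summed and clustered with fuzzy c-means rather than thresholded by hand.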
Pub Date: 2017-07-10 | DOI: 10.23919/ICIF.2017.8009769
Yu Liu, Xun Chen, Juan Cheng, Hu Peng
Medical image fusion plays an increasingly critical role in many clinical applications by deriving complementary information from medical images of different modalities. In this paper, a medical image fusion method based on convolutional neural networks (CNNs) is proposed. In our method, a siamese convolutional network is adopted to generate a weight map that integrates the pixel activity information from the two source images. The fusion process is conducted in a multi-scale manner via image pyramids to be more consistent with human visual perception. In addition, a local-similarity-based strategy is applied to adaptively adjust the fusion mode for the decomposed coefficients. Experimental results demonstrate that the proposed method achieves promising results in terms of both visual quality and objective assessment.
Title: A medical image fusion method based on convolutional neural networks
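The role of the weight map can be illustrated with a single-scale sketch in which a local-contrast measure stands in for the siamese CNN's learned output; the paper performs this fusion within image pyramids and adapts the rule by local similarity:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two co-registered source images of different modalities (synthetic here).
A = rng.uniform(0.0, 1.0, (8, 8))
B = rng.uniform(0.0, 1.0, (8, 8))

def activity(img):
    # Local gradient magnitude as a crude pixel-activity measure; in the
    # paper, this scoring is what the siamese CNN learns to produce.
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

# Weight map in [0, 1]: relative activity of A versus B at each pixel.
w = activity(A) / (activity(A) + activity(B) + 1e-12)

# Pixel-wise fusion driven by the weight map.
fused = w * A + (1.0 - w) * B
```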
Pub Date: 2017-07-10 | DOI: 10.23919/ICIF.2017.8009789
Na Li, Arnaud Martin, R. Estival
Detection of surface water in natural environments via multi-spectral imagery is widely used in many fields, such as land-cover identification. However, due to the similarity between the spectra of water bodies and built-up areas, approaches based on high-resolution satellites sometimes confuse these features. A popular direction in water detection is the use of spectral indices, which often require ground truth to set appropriate thresholds manually. Traditional machine learning methods, for their part, identify water merely via differences among the spectra of various land covers, without taking the specific properties of spectral reflection into account. In this paper, we propose an automatic approach to detect water bodies based on Dempster-Shafer theory, combining supervised learning with a specific spectral property of water in a fully unsupervised context. The benefits of our approach are twofold. On the one hand, it performs well in mapping principal water bodies, including small streams and branches. On the other hand, it labels all objects usually confused with water as ‘ignorance’, including half-dry watery areas, built-up areas, and semi-transparent clouds and shadows. ‘Ignorance’ indicates not only the limitations of the spectral properties of water and of supervised learning itself, but also the insufficiency of information from the multi-spectral bands, providing valuable information for further land-cover classification.
Title: An automatic water detection approach based on Dempster-Shafer theory for multi-spectral images
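The evidence-combination step can be sketched with Dempster's rule on a two-hypothesis frame, where mass assigned to the whole frame plays the role of ‘ignorance’. The mass values are illustrative, not taken from the paper:

```python
# Frame of discernment {water, land}; 'theta' is mass on the whole frame.
def combine(m1, m2):
    # Dempster's rule of combination for this three-element mass structure:
    # intersect focal elements, discard conflicting mass, renormalise.
    W, L, T = "water", "land", "theta"
    conflict = m1[W] * m2[L] + m1[L] * m2[W]
    raw = {
        W: m1[W] * m2[W] + m1[W] * m2[T] + m1[T] * m2[W],
        L: m1[L] * m2[L] + m1[L] * m2[T] + m1[T] * m2[L],
        T: m1[T] * m2[T],
    }
    return {a: v / (1.0 - conflict) for a, v in raw.items()}

# One mass function from supervised learning, one from a spectral property.
m_learned = {"water": 0.6, "land": 0.1, "theta": 0.3}
m_spectral = {"water": 0.5, "land": 0.2, "theta": 0.3}
m_fused = combine(m_learned, m_spectral)
```

Pixels whose fused mass stays concentrated on ‘theta’ are exactly those the paper reports as ‘ignorance’ rather than forcing a water/land decision.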
Pub Date: 2017-07-10 | DOI: 10.23919/ICIF.2017.8009774
Xuemei Wang, Wenbo Ni
A loosely coupled INS/GPS integrated navigation system is a nonlinear dynamic system, and the particle filter (PF) is a particularly suitable tool for nonlinear and non-Gaussian problems. However, typical bootstrap particle filters (BPFs) cannot handle the mismatch between the importance function and the likelihood function very well, which limits their effectiveness in INS/GPS integrated navigation applications. Homotopy particle filters (HPFs) use a homotopy transformation to replace the weight update and particle resampling of the BPF, and thereby obtain significant improvements. However, the HPF is sensitive to the spread of the particles, and its accuracy decreases as the GPS observation interval increases. We therefore propose a bias-correction-based HPF (BCHPF). The BCHPF first estimates the corresponding state bias errors according to the current observation, and then corrects the bias errors of the predicted particles before applying the homotopy transformation. Simulations and practical experiments both show that the proposed BCHPF effectively resolves the mismatch between the importance function and the likelihood function in the BPF and compensates well for the accumulated INS errors. Compared with the HPF, it achieves better robustness and higher accuracy.
Title: An unbiased homotopy particle filter and its application to the INS/GPS integrated navigation system
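The importance/likelihood mismatch the paper targets is easy to reproduce: when a sharp GPS-like observation meets particles predicted from a diffuse INS prior, almost all bootstrap weights collapse onto a handful of particles, as the effective sample size shows. All numbers below are illustrative; the homotopy transformation and bias correction are the paper's remedies and are not sketched here:

```python
import numpy as np

rng = np.random.default_rng(5)

# Diffuse predicted particles (long INS-only propagation) meeting a sharp
# GPS-like observation.
N = 1000
particles = rng.normal(0.0, 5.0, N)
obs, sigma = 0.0, 0.1

# Bootstrap weight update: the broad importance function barely overlaps
# the narrow likelihood, so the normalised weights degenerate.
w = np.exp(-0.5 * ((obs - particles) / sigma) ** 2)
w /= w.sum()

ess = 1.0 / np.sum(w ** 2)      # effective sample size: few particles survive
```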
Pub Date: 2017-07-10 | DOI: 10.23919/ICIF.2017.8009690
Yi Yang, Yuanli Liu
A multi-source image registration algorithm based on combined line and point features is proposed for images containing typical line objects. First, control line features are extracted from the images for coarse registration using visual saliency and Line Segment Detection (LSD). Visual saliency reflects human visual characteristics, while LSD offers rotation invariance, insensitivity to illumination changes, and noise resistance. Second, the Scale-Invariant Feature Transform (SIFT), based on multi-resolution analysis, is used to extract point features with scale- and rotation-invariant characteristics; these feature points are then used to perform the fine registration. Finally, the simulation results are analyzed, and the validity of the algorithm is verified by both subjective visual assessment and objective evaluation indices.
Title: A multi-source image registration algorithm based on combined line and point features
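Once point correspondences are available from SIFT matching, the fine-registration step reduces to estimating a transform by least squares. Below is a sketch that recovers a similarity transform (scale, rotation, translation) from matched points via the Kabsch/Umeyama closed form; the coarse line-feature stage and the SIFT matching itself are not shown, and the noiseless synthetic correspondences are an assumption:

```python
import numpy as np

rng = np.random.default_rng(6)

def estimate_similarity(src, dst):
    # Least-squares similarity transform dst ~ scale * R @ src + t from
    # matched point features (Kabsch/Umeyama-style closed form).
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    s, d = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(d.T @ s)            # cross-covariance SVD
    sign = np.sign(np.linalg.det(U @ Vt))
    D = np.array([1.0, sign])                    # keep a proper rotation
    R = (U * D) @ Vt
    scale = (S * D).sum() / (s ** 2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Synthetic matched points under a known transform (noiseless for clarity).
src = rng.uniform(0.0, 10.0, (20, 2))
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
scale_true, t_true = 1.5, np.array([3.0, -1.0])
dst = scale_true * src @ R_true.T + t_true

scale_est, R_est, t_est = estimate_similarity(src, dst)
```

With noisy or mismatched correspondences, this solver would typically be wrapped in RANSAC before the final fit.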