Pub Date: 2021-04-03 | DOI: 10.1080/19479832.2021.1900408
Lazhar Khelifi, M. Mignotte
ABSTRACT Motion segmentation in dynamic scenes is currently dominated by parametric methods based on deep neural networks. The present study explores an unsupervised segmentation approach that can be used in the absence of training data to segment new videos. In particular, it tackles the task of dynamic texture segmentation, which consists of clustering into groups complex phenomena that are both spatially and temporally repetitive, automatically assigning a single class label to each region or group. We present an effective fusion framework for motion segmentation in dynamic scenes (FFMS). This model is designed to merge different segmentation maps, each containing multiple regions of weak quality, in order to achieve a more accurate final segmentation. The diverse labelling fields required for the combination process are obtained by a simplified grouping scheme applied to the input video on the basis of its three orthogonal planes (xy, xt and yt). Experiments conducted on two challenging datasets (SynthDB and YUP++) show that, unlike current motion segmentation approaches that require either parameter estimation or a training step, FFMS is significantly faster, easier to code, and has few parameters.
"A new fusion framework for motion segmentation in dynamic scenes", International Journal of Image and Data Fusion, 12(1), 99–121.
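A minimal sketch of the fusion-of-segmentations idea summarised in the abstract above, assuming a toy setting: three label maps are computed from simple per-pixel temporal statistics (crude stand-ins for features drawn from the xy, xt and yt planes of the video volume), their labels are aligned by Hungarian matching, and the maps are fused by per-pixel majority voting. All names and feature choices are illustrative; this is not the authors' actual FFMS model.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import mode
from sklearn.cluster import KMeans

def plane_segmentations(video, k=3):
    """video: (T, H, W) array. Build one (H, W) label map per feature,
    each feature a per-pixel temporal statistic standing in for real
    dynamic-texture descriptors of the xy, xt and yt planes."""
    feats = [
        video.mean(axis=0),                            # temporal mean
        np.abs(np.diff(video, axis=0)).mean(axis=0),   # temporal activity
        video.std(axis=0),                             # temporal variability
    ]
    return [KMeans(n_clusters=k, n_init=10)
            .fit_predict(f.reshape(-1, 1)).reshape(f.shape) for f in feats]

def align_labels(ref, seg, k):
    """Relabel `seg` so its clusters best match `ref`, via Hungarian
    matching on the label co-occurrence matrix."""
    cost = -np.array([[np.sum((ref == i) & (seg == j)) for j in range(k)]
                      for i in range(k)])
    rows, cols = linear_sum_assignment(cost)
    remap = {c: r for r, c in zip(rows, cols)}
    return np.vectorize(remap.get)(seg)

def fuse(segs, k=3):
    """Per-pixel majority vote over label-aligned segmentation maps."""
    aligned = [segs[0]] + [align_labels(segs[0], s, k) for s in segs[1:]]
    return mode(np.stack(aligned), axis=0).mode.squeeze()
```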
Pub Date: 2021-01-11 | DOI: 10.1080/19479832.2020.1864786
C. Rajakumar, S. Satheeskumaran
ABSTRACT Multiple sensors capture many images, and in many applications these images are fused into a single image to obtain high spatial and spectral resolution. A new image fusion method is proposed in this work to enhance the fusion of infrared and visible images. Image fusion methods based on convolutional neural networks, edge-preserving filters and low-rank approximation have high computational complexity and are slow for complex tasks. To overcome these drawbacks, singular value decomposition (SVD) based image fusion is proposed. SVD performs an accurate decomposition in which most of the information in a given image is packed into a few singular values; it is used to decompose the source images into base and detail layers. Visual saliency and weight maps are constructed to integrate salient and complementary information into the detail layers. Statistical techniques are used to fuse the base layers, and the fused image is a linear combination of base and detail layers. Visual inspection and fusion metrics are used to validate the fusion performance. Testing the proposed method on several image pairs indicates that it is superior or comparable to existing methods.
"Singular value decomposition and saliency-map based image fusion for visible and infrared images", International Journal of Image and Data Fusion, 13(1), 21–43.
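A toy sketch of the decomposition-and-fusion pipeline described above: truncated SVD splits each source image into a low-rank base layer and a residual detail layer, and a crude saliency proxy builds the per-pixel weight map for the detail fusion. The saliency proxy and the plain averaging of the base layers are assumptions standing in for the paper's saliency construction and statistical base fusion.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def svd_base_detail(img, rank=2):
    """Base layer = reconstruction from the first `rank` singular
    components (where most image energy is packed); detail = residual."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    base = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return base, img - base

def saliency(img):
    """Crude visual-saliency proxy: smoothed deviation from the mean."""
    return gaussian_filter(np.abs(img - img.mean()), sigma=3)

def fuse(visible, infrared, rank=2, eps=1e-8):
    b_v, d_v = svd_base_detail(visible.astype(float), rank)
    b_i, d_i = svd_base_detail(infrared.astype(float), rank)
    w_v, w_i = saliency(visible), saliency(infrared)
    w = w_v / (w_v + w_i + eps)          # per-pixel weight map
    base = 0.5 * (b_v + b_i)             # stand-in for statistical base fusion
    detail = w * d_v + (1.0 - w) * d_i   # saliency-weighted detail fusion
    return base + detail                 # linear combination of layers
```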
Pub Date: 2021-01-04 | DOI: 10.1080/19479832.2020.1864788
Jian Wang, Minmin Wang, Deng Yang, Fei Liu, Zheng Wen
ABSTRACT UWB indoor positioning is a research hotspot, but few studies systematically describe positioning algorithms for different scenes. This paper therefore proposes several positioning algorithms for different indoor scenes. Firstly, for sensing positioning scenes, a sensing positioning algorithm is proposed. Secondly, for straight and narrow scenes, a robust two-anchor positioning algorithm based on a high-pass filter is proposed; experimental results show that it has better positioning accuracy and robustness than the traditional algorithm. Then, for ordinary indoor scenes, a robust indoor positioning model based on a robust Kalman filter and total least squares is proposed, which accounts for the coordinate errors of the UWB anchors; its positioning accuracy is 0.093 m, about 29.54% better than that of the traditional least-squares algorithm. Finally, for indoor scenes with map information, a map-aided indoor positioning algorithm based on two UWB anchors is proposed; it can effectively improve the reliability and reduce the cost of a UWB indoor positioning system, with an average positioning accuracy of 0.238 m. The main contributions of this paper are the systematic description of multi-scene positioning algorithms and the realisation of indoor positioning based on two anchors.
"UWB positioning algorithm and accuracy evaluation for different indoor scenes", International Journal of Image and Data Fusion, 12(1), 203–225.
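The proposed models refine least-squares positioning from anchor ranges. Below is a sketch of the classical linearised least-squares multilateration baseline that the paper improves on (the robust Kalman filter and total-least-squares extensions are not reproduced here).

```python
import numpy as np

def ls_position(anchors, ranges):
    """Classical linearised LS multilateration. Subtracting the first
    anchor's range equation from the others cancels the quadratic terms,
    leaving the linear system
    2 (a_i - a_0) . x = r_0^2 - r_i^2 + |a_i|^2 - |a_0|^2."""
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# toy check: four anchors in a 6 m x 8 m room, noisy ranges to a known point
anchors = np.array([[0.0, 0.0], [6.0, 0.0], [6.0, 8.0], [0.0, 8.0]])
truth = np.array([2.5, 3.0])
ranges = np.linalg.norm(anchors - truth, axis=1) + np.random.normal(0, 0.05, 4)
print(ls_position(anchors, ranges))   # approximately [2.5, 3.0]
```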
Pub Date: 2021-01-04 | DOI: 10.1080/19479832.2020.1864785
Abhay K. Kolhe, A. Bhise
ABSTRACT This paper introduces a new modified local pattern descriptor to extract roads from aerial imagery of rural areas. The descriptor is a modification of the proposed local vector pattern (P-LVP) and is named Modified-PLVP (M-PLVP). M-PLVP extracts texture features from both road and non-road pixels. These features are used to train a deep belief network (DBN), which classifies unknown aerial imagery into road and non-road pixels. To improve the classification rate of the DBN, morphological and grey-thresholding operations are then applied to perform the road segmentation. In addition, this paper incorporates optimisation into the DBN classifier: the activation function and the number of hidden neurons are optimally selected by a new Trail-based WOA (T-WOA) algorithm, an improvement of the Whale Optimisation Algorithm (WOA). Finally, the performance of M-PLVP is compared with other local pattern descriptors on measures such as accuracy, sensitivity, specificity, precision, negative predictive value (NPV), F1-score, Matthews correlation coefficient (MCC), false positive rate (FPR), false negative rate (FNR) and false discovery rate (FDR), demonstrating the improvements of M-PLVP over the others.
"Modified PLVP with Optimised Deep Learning for Morphological based Road Extraction", International Journal of Image and Data Fusion, 13(1), 155–179.
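The abstract does not spell out the M-PLVP encoding, so the sketch below shows a plain 8-neighbour local binary pattern histogram, the descriptor family that LVP-style patterns extend; it illustrates the general texture-feature step only, not M-PLVP itself.

```python
import numpy as np

def lbp_histogram(img, y0, x0, h, w):
    """Normalised histogram of 8-neighbour local binary pattern codes over
    the patch img[y0:y0+h, x0:x0+w]; each neighbour contributes one bit,
    set when it is >= the centre pixel."""
    p = img[y0:y0 + h, x0:x0 + w].astype(float)
    c = p[1:-1, 1:-1]                      # centre pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()
```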
Pub Date: 2021-01-02 | DOI: 10.1080/19479832.2020.1870578
Dhiman Karmakar, Rajib Sarkar, Madhura Datta
ABSTRACT Multi-spectral satellite remote sensing imagery has several applications, including detecting objects and distinguishing land-surface areas by the amount of greenery or water. Enhancing spectral images helps in extracting and visualising spatial and spectral features. This paper identifies specific regions of interest (RoI) of the earth's surface in remotely sensed spectral or satellite images; the RoI are extracted and identified as major segments. Univariate histogram thresholding is commonly used as a segmentation tool for grey-level images, but for colour images a multivariate histogram is more effective, as it gives control over the colour bands and emphasises colour information for clustering. In this paper, 2D and 3D histograms are used to cluster pixels in order to extract the RoI, with the RGB colour bands and the infrared (IR) band forming the multivariate histogram. Two datasets are used in the experiments: an artificially designed dataset and Indian Remote Sensing (IRS-1A) satellite imagery. The correctness of the proposed mathematical formulation is first verified on the artificial dataset, and the method is then applied to Landsat spectral data. The test results are found to be satisfactory.
"Colour band fusion and region enhancement of spectral image using multivariate histogram", International Journal of Image and Data Fusion, 12(1), 64–82.
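A toy version of the multivariate-histogram clustering idea, assuming three input bands (say red, green and near-infrared): a 3D histogram is built over the joint band values, its most populated cells are taken as dominant modes, and each pixel is labelled by its nearest mode. The peak-picking rule here is an assumption, not the paper's procedure.

```python
import numpy as np

def histogram_segments(bands, bins=16, n_peaks=4):
    """bands: list of equally shaped 2-D arrays (e.g. R, G, NIR).
    Returns an integer label map of shape bands[0].shape."""
    data = np.stack([b.ravel() for b in bands], axis=1).astype(float)
    hist, edges = np.histogramdd(data, bins=bins)
    # the n_peaks most populated histogram cells act as cluster modes
    peak_idx = np.array(np.unravel_index(
        np.argsort(hist, axis=None)[-n_peaks:], hist.shape)).T
    centres = np.array([[0.5 * (edges[d][i] + edges[d][i + 1])
                         for d, i in enumerate(cell)] for cell in peak_idx])
    # assign each pixel to its nearest mode in band space
    dists = np.linalg.norm(data[:, None, :] - centres[None], axis=2)
    return dists.argmin(axis=1).reshape(bands[0].shape)
```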
Pub Date: 2021-01-02 | DOI: 10.1080/19479832.2020.1821101
Iman Khosravi, Y. Razoumny, Javad Hatami Afkoueieh, S. K. Alavipanah
ABSTRACT This paper proposes an extended rotation-based ensemble method for the classification of multi-source optical-radar data. The proposed method is inspired by the rotation-based support vector machine ensemble (RoSVM), with several fundamental refinements. In the first modification, a least squares support vector machine is used rather than the support vector machine because of its higher speed. The second modification applies a Platt-calibrated version instead of the classical non-probabilistic version in order to obtain more suitable class probabilities. In the third modification, a filter-based feature selection algorithm is used rather than a wrapper algorithm to further speed up the method. In the final modification, instead of classical majority voting, an objective majority voting, which has better performance and less ambiguity, is employed to fuse the single classifiers. The proposed method is accordingly named rotation calibrated least squares support vector machine (RoCLSSVM) and is compared with other SVM-based versions as well as with RoSVM. The results imply higher accuracy, efficiency and diversity for the RoCLSSVM than for the RoSVM on the dataset used in this paper. Furthermore, the RoCLSSVM is less sensitive to the training-set size than the RoSVM.
"An ensemble method based on rotation calibrated least squares support vector machine for multi-source data classification", International Journal of Image and Data Fusion, 12(1), 48–63.
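A rotation-style calibrated ensemble can be sketched with scikit-learn. Here LinearSVC with Platt (sigmoid) calibration stands in for the paper's Platt-calibrated least-squares SVM, PCA on a bootstrap replicate plays the rotation step, and soft voting replaces the paper's objective majority voting; all three substitutions are assumptions.

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def fit_rotation_ensemble(X, y, n_members=10, seed=0):
    """Each member: PCA rotation of a bootstrap replicate, then a
    sigmoid-calibrated linear SVM yielding class probabilities."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        idx = rng.choice(len(X), size=len(X), replace=True)  # bootstrap
        clf = make_pipeline(
            PCA(),                                           # rotation step
            CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=3),
        )
        members.append(clf.fit(X[idx], y[idx]))
    return members

def predict_soft_vote(members, X):
    """Average the members' calibrated probabilities, then take argmax."""
    proba = np.mean([m.predict_proba(X) for m in members], axis=0)
    return proba.argmax(axis=1)
```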
Pub Date: 2021-01-02 | DOI: 10.1080/19479832.2021.1874635
Farhang Aliyari, T. Bouwmans, Yinguo Cao, Yushi Chen, A. Cherukuri, Srinivasa Rao Dammavalam, Vaidehi Deshmukh, Songlin Du, A. Erturk, S. Goh, Qing Guo, Marcus Hammer, Zhaozheng Hu, Jincai Huang, Shuying Huang, Maryam Imani, A. Jenerowicz, Bin Jia, W. Kainz, Singara Singh Kasana, J. Keighobadi, M. F. A. Khanan, Beibei Li, Dong Li, Zengke Li, Huimin Liu, Menghua Liu, Qingjie Liu, Ran Liu, Shengheng Liu, Zhengyi Liu, D. Lizcano, D. Lu, Xiaocheng Lu, P. Marpu, Deepak Mishra, P. Nepa, Yi-Ning Ning, Teerapong Panboonyuen, Rakesh C. Patel, Z. Shao, Huan-si Shen, Weina Song, A. Stein, Jianbo Tang, Yunwei Tang, Ling Tong, J. Torres-Sospedra, Md Azher Uddin, Kishor P. Upla, Sowmya V jian wang, Mingwen Wang, Qi Wang, Siye Wang, Kai Wen, Mengquan Wu, Youxi Wu, Fu Xiao, Bo-Lun Xu, Gong-Tao Yan, Hongbo Yan, Feng-Mei Yang, Xue Yang, Yuegang Yu, X. Yuan, C. Yuen, Yun Zhang, Bobai Zhao, Wen-long Zhao, Chao Zhou, Guoqing Zhou, Haiyang Zhou, Weidong Zou
"Acknowledgement to Reviewers of the International Journal of Image and Data Fusion in 2020", International Journal of Image and Data Fusion, 12(1), i–ii.
Pub Date: 2020-12-30 | DOI: 10.1080/19479832.2020.1864787
Tarablesse Settou, M. Kholladi, Abdelkamel Ben Ali
ABSTRACT A crucial problem after an earthquake is how to quickly and accurately detect and identify damaged areas. Several automated methods have been developed to analyse remote sensing (RS) images for earthquake damage classification, and their performance depends mainly on powerful learned feature representations. Although hand-crafted features can achieve satisfactory performance to some extent, the performance gain is small and does not generalise well. Recently, the convolutional neural network (CNN) has demonstrated its ability to derive more powerful feature representations than hand-crafted features in many domains. Our main contribution in this paper is the investigation of hybrid feature representations derived from several pre-trained CNN models for earthquake damage classification. In contrast to previous works, we also explore combining the feature representations extracted from the last two fully connected layers of a given CNN model. We validated our proposals on two large datasets whose images vary widely in scene characteristics, lighting conditions and image characteristics, captured from different earthquake events and several geographic locations. Extensive experiments showed that our proposals can significantly improve performance.
"Improving damage classification via hybrid deep learning feature representations derived from post-earthquake aerial images", International Journal of Image and Data Fusion, 13(1), 1–20.
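A sketch of the hybrid-feature step for one plausible backbone, VGG16 from torchvision (assuming a recent torchvision with the weights API): the activations of the last two fully connected layers are concatenated into a single 8192-D representation. The paper combines several pre-trained CNNs; this shows the idea for just one.

```python
import torch
from torchvision import models, transforms

# pretrained VGG16 used as a fixed feature extractor
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def hybrid_fc_features(img_batch):
    """img_batch: (N, 3, 224, 224) preprocessed tensor. Returns the
    concatenated activations of VGG16's last two FC layers (fc6, fc7)."""
    h = vgg.avgpool(vgg.features(img_batch)).flatten(1)
    fc6 = vgg.classifier[0](h)                 # Linear 25088 -> 4096
    fc7 = vgg.classifier[3](torch.relu(fc6))   # Linear 4096 -> 4096
    return torch.cat([fc6, fc7], dim=1)        # (N, 8192) hybrid feature
```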
Pub Date: 2020-12-03 | DOI: 10.1080/19479832.2020.1853614
Y. Bai, A. Kealy, Lucas Holden
ABSTRACT Wi-Fi-based positioning has been recognised as a useful and important technology for location-based services (LBS), accompanying the rapid development and adoption of smartphones since the beginning of the 21st century. However, for 20 years no mature Wi-Fi-based positioning technology or method provided satisfactory output, until the recent release and hardware support of the IEEE 802.11mc standard, in which a fine time measurement (FTM) protocol and multiple round-trip time (RTT) measurements are used for more accurate and robust ranging without involving the received signal strength indicator (RSSI). This paper provides an evaluation and a ranging-offset correction approach for Wi-Fi FTM based ranging. The characteristics of the ranging offset errors are examined through two well-designed evaluation tests, and the offset errors of a CompuLab WILD router and a Google access point (AP) are compared. An average accuracy of 0.181 m was achieved after a typical offset correction applied to ranging estimates obtained in a complex surrounding environment under line-of-sight (LOS) conditions. The research outcome will be a useful resource for implementing other algorithms, such as machine learning and multilateration, in our future research projects.
"Evaluation and correction of smartphone-based fine time range measurements", International Journal of Image and Data Fusion, 12(1), 185–202.
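A minimal sketch of a ranging-offset correction, assuming the simplest model: a linear fit of ground-truth distance against the FTM range estimate on a calibration set, then applied to new ranges. The paper's actual correction procedure is more detailed.

```python
import numpy as np

def fit_offset_correction(measured, truth):
    """Fit d_true ~ a * d_measured + b on calibration data."""
    a, b = np.polyfit(measured, truth, deg=1)
    return a, b

# toy calibration: FTM ranges with a constant hardware offset plus noise
truth = np.linspace(1.0, 20.0, 40)
measured = truth + 0.45 + np.random.normal(0.0, 0.1, truth.size)
a, b = fit_offset_correction(measured, truth)
corrected = a * measured + b
print(f"mean abs error after correction: {np.abs(corrected - truth).mean():.3f} m")
```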
Pub Date: 2020-11-30 | DOI: 10.1080/19479832.2020.1845246
Zhongbin Li, Ping Wang, M. Fan, Yifan Long
ABSTRACT With the successful launch of China's high spatial resolution satellite Gaofen-2 (GF-2), the use of high spatial resolution satellite images for land change detection has high research potential. Based on GF-2 images, this study combines principal component analysis with the spectral feature change method to identify different land changes in the form of differently coloured patches. Three decision tree classification models are then constructed to automatically detect changes, including increases in airports and buildings and increases or decreases in vegetation. Further, using QuickBird images of identical regions in the same periods, a sample of 2624 pixels selected by stratified random sampling is used to verify the accuracy of the change results. The results show that the overall accuracy of the extracted land change information is 98.21%, with a Kappa coefficient of 0.9604. The land change detection and information extraction method used in this study is therefore proven effective.
"Method of urban land change detection that is based on GF-2 high-resolution RS images", International Journal of Image and Data Fusion, 13(1), 278–295.
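A simplified stand-in for the PCA-plus-spectral-change step described above: per-pixel band differences between the two dates are projected onto principal components and split into change / no-change clusters by 2-means. The paper's decision-tree models for categorising the kinds of change are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def change_mask(img_t1, img_t2, n_components=3):
    """img_t1, img_t2: co-registered (H, W, bands) images of the same area
    at two dates. Returns a boolean (H, W) change mask."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    X = diff.reshape(-1, diff.shape[-1])                    # (pixels, bands)
    pcs = PCA(n_components=min(n_components, X.shape[1])).fit_transform(X)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(pcs)
    # the cluster with the larger mean difference magnitude is 'change'
    mag = np.linalg.norm(X, axis=1)
    change = int(mag[labels == 1].mean() > mag[labels == 0].mean())
    return (labels == change).reshape(diff.shape[:2])
```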