
International Journal of Image and Data Fusion: Latest Publications

A new fusion framework for motion segmentation in dynamic scenes
IF 2.3 Q3 REMOTE SENSING Pub Date: 2021-04-03 DOI: 10.1080/19479832.2021.1900408 Vol. 12(1), pp. 99–121
Lazhar Khelifi, M. Mignotte
ABSTRACT Motion segmentation in dynamic scenes is currently dominated by parametric methods based on deep neural networks. The present study explores an unsupervised segmentation approach that can be used, in the absence of training data, to segment new videos. In particular, it tackles the task of dynamic texture segmentation, which consists of clustering complex phenomena and characteristics that are both spatially and temporally repetitive into groups, automatically assigning a single class label to each region. We present an effective fusion framework for motion segmentation in dynamic scenes (FFMS). This model is designed to merge different segmentation maps, each containing multiple regions of weak quality, in order to achieve a more accurate final segmentation. The diverse labelling fields required for the combination process are obtained by a simplified grouping scheme applied to the input video on the basis of three orthogonal planes. Experiments conducted on two challenging datasets (SynthDB and YUP++) show that, in contrast to current motion segmentation approaches that require either parameter estimation or a training step, FFMS is significantly faster, easier to code, and has few parameters.
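The core idea of merging several weak segmentation maps into one consensus result can be illustrated with a minimal per-pixel majority vote. This is only a sketch of the fusion concept, not the authors' actual FFMS combination criterion:

```python
import numpy as np

def majority_vote_fusion(label_maps):
    """Fuse several segmentation label maps by per-pixel majority vote.

    label_maps: list of 2-D integer arrays of identical shape, each an
    independent (possibly weak-quality) segmentation of the same frame.
    Returns the most frequent label at every pixel.
    """
    stack = np.stack(label_maps, axis=0)            # (n_maps, H, W)
    n_labels = int(stack.max()) + 1
    # Count the votes for each label at every pixel, then take the argmax.
    votes = np.zeros((n_labels,) + stack.shape[1:], dtype=np.int32)
    for k in range(n_labels):
        votes[k] = (stack == k).sum(axis=0)
    return votes.argmax(axis=0)

maps = [np.array([[0, 0, 1], [1, 1, 1]]),
        np.array([[0, 1, 1], [1, 0, 1]]),
        np.array([[0, 0, 1], [1, 1, 0]])]
fused = majority_vote_fusion(maps)
print(fused)  # [[0 0 1]
              #  [1 1 1]]
```

The real framework replaces this naive vote with a criterion that down-weights low-quality regions, but the input/output contract (many label maps in, one label map out) is the same.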
Citations: 2
Singular value decomposition and saliency-map based image fusion for visible and infrared images
IF 2.3 Q3 REMOTE SENSING Pub Date: 2021-01-11 DOI: 10.1080/19479832.2020.1864786 Vol. 13(1), pp. 21–43
C. Rajakumar, S. Satheeskumaran
ABSTRACT Multiple sensors capture many images, and in many applications these images are fused into a single image to obtain high spatial and spectral resolution. A new image fusion method is proposed in this work to enhance the fusion of infrared and visible images. Image fusion methods based on convolutional neural networks, edge-preserving filters and low-rank approximation have high computational complexity and are very slow for complex tasks. To overcome these drawbacks, singular value decomposition (SVD) based image fusion is proposed. SVD provides an accurate decomposition in which most of the information in a given image is packed into a few singular values. Singular value decomposition separates the source images into base and detail layers. A visual saliency map and a weight map are constructed to integrate complementary information into the detail layers. Statistical techniques are used to fuse the base layers, and the fused image is a linear combination of the base and detail layers. Visual inspection and fusion metrics are used to validate the fusion performance. Testing the proposed method on several image pairs indicates that it is superior or comparable to existing methods.
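A rough sketch of the base/detail decomposition is given below, assuming a rank-k SVD approximation as the base layer and a simple max-absolute rule for the detail layers; the paper's saliency and weight-map construction is not reproduced here:

```python
import numpy as np

def svd_split(img, rank=2):
    """Split a grey image into a low-rank 'base' layer and a 'detail' residual."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    base = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-k approximation
    return base, img - base

def fuse_pair(ir, vis, rank=2):
    """Average the base layers; keep the stronger of the two detail layers."""
    b1, d1 = svd_split(ir, rank)
    b2, d2 = svd_split(vis, rank)
    base = 0.5 * (b1 + b2)
    detail = np.where(np.abs(d1) >= np.abs(d2), d1, d2)
    return base + detail

rng = np.random.default_rng(0)
ir_img, vis_img = rng.random((8, 8)), rng.random((8, 8))
fused = fuse_pair(ir_img, vis_img)
print(fused.shape)  # (8, 8)
```

The averaging and max-absolute rules are placeholders for the statistical and saliency-weighted fusion rules described in the abstract.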
Citations: 3
UWB positioning algorithm and accuracy evaluation for different indoor scenes
IF 2.3 Q3 REMOTE SENSING Pub Date: 2021-01-04 DOI: 10.1080/19479832.2020.1864788 Vol. 12(1), pp. 203–225
Jian Wang, Minmin Wang, Deng Yang, Fei Liu, Zheng Wen
ABSTRACT UWB indoor positioning is a research hotspot, but few studies systematically describe positioning algorithms for different scenes. Therefore, several positioning algorithms are proposed for different indoor scenes. Firstly, for sensing positioning scenes, a sensing positioning algorithm is proposed. Secondly, for straight and narrow scenes, a two-anchor robust positioning algorithm based on a high-pass filter is proposed; experimental results show that it has better positioning accuracy and robustness than the traditional algorithm. Then, for ordinary indoor scenes, a robust indoor positioning model based on a robust Kalman filter and total least squares is proposed, which accounts for the coordinate error of the UWB anchors. Its positioning accuracy is 0.093 m, about 29.54% better than that of the traditional LS algorithm. Finally, for indoor scenes with map information, a map-aided indoor positioning algorithm based on two UWB anchors is proposed. This algorithm can effectively improve the reliability and reduce the cost of a UWB indoor positioning system, with an average positioning accuracy of 0.238 m. The main innovation of this paper lies in the systematic description of multi-scene positioning algorithms and the realisation of indoor positioning based on two anchors.
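The classical least-squares position fix that the paper's robust Kalman/total-LS variants refine can be sketched as follows. Linearising ||x - a_i|| = r_i by subtracting the first anchor's equation gives an ordinary linear system (a standard building block, not the paper's exact robust estimator):

```python
import numpy as np

def trilaterate_ls(anchors, ranges):
    """Linearised least-squares position fix from anchor coordinates
    and measured ranges.

    From ||x - a_i||^2 = r_i^2, subtracting the i = 0 equation yields
    2 (a_i - a_0) . x = r_0^2 - r_i^2 + ||a_i||^2 - ||a_0||^2.
    """
    anchors = np.asarray(anchors, dtype=float)
    r = np.asarray(ranges, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (r[0] ** 2 - r[1:] ** 2
         + (anchors[1:] ** 2).sum(axis=1) - (anchors[0] ** 2).sum())
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Four anchors in a 10 m x 10 m room; noise-free ranges to a known point.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = np.array([3.0, 4.0])
ranges = [np.hypot(*(true_pos - np.array(a))) for a in anchors]
est = trilaterate_ls(anchors, ranges)
print(est)  # ≈ [3. 4.]
```

With noisy ranges, the same system is solved in a weighted or robust sense, which is where the paper's robust Kalman filter and total-least-squares refinements come in.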
Citations: 6
Modified PLVP with Optimised Deep Learning for Morphological based Road Extraction
IF 2.3 Q3 REMOTE SENSING Pub Date: 2021-01-04 DOI: 10.1080/19479832.2020.1864785 Vol. 13(1), pp. 155–179
Abhay K. Kolhe, A. Bhise
ABSTRACT This paper introduces a new modified local pattern descriptor to extract roads from aerial imagery of rural areas. The introduced descriptor is a modification of the previously proposed local vector pattern (P-LVP) and is named Modified-PLVP (M-PLVP). M-PLVP extracts texture features from both road and non-road pixels. The features are used to train a Deep Belief Network (DBN), which classifies unknown aerial imagery into road and non-road pixels. Further, to improve the classification rate of the DBN, morphological operations and grey thresholding are performed to refine the road segmentation. In addition, this paper incorporates optimisation into the DBN classifier: the activation function and the number of hidden neurons are optimally selected by a new Trail-based WOA (T-WOA) algorithm, an improvement of the Whale Optimisation Algorithm (WOA). Finally, the performance of the proposed M-PLVP is compared with other local pattern descriptors on measures such as Accuracy, Sensitivity, Specificity, Precision, Negative Predictive Value (NPV), F1-score, Matthews correlation coefficient (MCC), False Positive Rate (FPR), False Negative Rate (FNR), and False Discovery Rate (FDR), demonstrating the improvements of M-PLVP over the others.
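The morphological clean-up step applied after classification can be illustrated with a minimal binary opening. This is a generic 3x3 opening in pure NumPy, assumed here only to show the idea of suppressing isolated false-positive road pixels, not the paper's exact operator set:

```python
import numpy as np

def erode(mask):
    """3x3 binary erosion with zero padding (pure NumPy)."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask):
    """3x3 binary dilation with zero padding."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def opening(mask):
    """Erosion then dilation: removes specks smaller than the 3x3 element."""
    return dilate(erode(mask))

mask = np.zeros((6, 6), dtype=bool)
mask[1:4, 1:4] = True   # a solid 'road' patch survives the opening
mask[5, 5] = True       # an isolated noise pixel is removed
cleaned = opening(mask)
```

In practice a library routine (e.g. `scipy.ndimage.binary_opening`) would be used; the point is that opening preserves coherent road regions while discarding pixel-level misclassifications.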
Citations: 1
Colour band fusion and region enhancement of spectral image using multivariate histogram
IF 2.3 Q3 REMOTE SENSING Pub Date: 2021-01-02 DOI: 10.1080/19479832.2020.1870578 Vol. 12(1), pp. 64–82
Dhiman Karmakar, Rajib Sarkar, Madhura Datta
ABSTRACT Multi-spectral satellite remote sensing imagery has several applications, including detecting objects and distinguishing land surface areas based on the amount of greenery or water. Enhancement of spectral images helps in extracting and visualising spatial and spectral features. This paper identifies specific regions of interest (RoI) of the earth's surface from remotely sensed spectral or satellite images. The RoI are extracted and identified as major segments. Univariate histogram thresholding is commonly used as a segmentation tool for grey images; for colour images, however, a multivariate histogram is effective for gaining control over the colour bands, and it helps emphasise colour information for clustering. In this paper, 2D and 3D histograms are used to cluster pixels in order to extract the RoI. The RGB colour bands, along with the infrared (IR) band, are used to form the multivariate histogram. Two datasets are used in the experiments: an artificially designed dataset and Indian Remote Sensing (IRS-1A) satellite imagery. The paper demonstrates the correctness of the proposed mathematical formulation on the artificial dataset and then applies the method to LandSat spectral data. The test results are satisfactory.
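The multivariate-histogram idea can be sketched as joint quantisation of the stacked bands: pixels falling in the same multi-band bin receive the same cluster label. This is a crude stand-in for the paper's 2D/3D histogram clustering, assuming band values normalised to [0, 1):

```python
import numpy as np

def histogram_labels(bands, bins=2):
    """Label each pixel by its bin in a multivariate (multi-band) histogram.

    `bands` is a list of 2-D arrays with values in [0, 1), e.g. R, G, B, IR.
    The per-band bin indices are flattened into a single cluster label.
    """
    pixels = np.stack([b.ravel() for b in bands], axis=1)   # (N, n_bands)
    q = np.clip((pixels * bins).astype(int), 0, bins - 1)   # quantise bands
    labels = np.ravel_multi_index(tuple(q.T), (bins,) * len(bands))
    return labels.reshape(bands[0].shape)

# Two tiny bands (e.g. red and IR): dark/dark vs. bright/bright pixels
# land in different joint bins and hence different clusters.
red = np.array([[0.1, 0.9]])
ir  = np.array([[0.2, 0.8]])
labels = histogram_labels([red, ir], bins=2)
print(labels)  # [[0 3]]
```

Peak finding over such joint bins (rather than flat enumeration, as here) is what turns the multivariate histogram into a region-extraction tool.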
Citations: 1
An ensemble method based on rotation calibrated least squares support vector machine for multi-source data classification
IF 2.3 Q3 REMOTE SENSING Pub Date: 2021-01-02 DOI: 10.1080/19479832.2020.1821101 Vol. 12(1), pp. 48–63
Iman Khosravi, Y. Razoumny, Javad Hatami Afkoueieh, S. K. Alavipanah
ABSTRACT This paper proposes an extended rotation-based ensemble method for the classification of multi-source optical-radar data. The proposed method is inspired by the rotation-based support vector machine ensemble (RoSVM), with several fundamental refinements. First, a least squares support vector machine is used instead of the support vector machine because of its higher speed. Second, a Platt-calibrated version is applied instead of the classical non-probabilistic version in order to obtain better-suited class probabilities. Third, a filter-based feature selection algorithm is used rather than a wrapper algorithm to further speed up the method. Finally, instead of classical majority voting, an objective majority voting, which has better performance and less ambiguity, is employed to fuse the single classifiers. Accordingly, the proposed method is called the rotation calibrated least squares support vector machine (RoCLSSVM). It is compared with other SVM-based versions as well as the RoSVM. The results show higher accuracy, efficiency and diversity for the RoCLSSVM than for the RoSVM on the data set used in this paper. Furthermore, the RoCLSSVM is less sensitive to the training-set size than the RoSVM.
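Once the base classifiers output Platt-calibrated probabilities, they can be fused probabilistically rather than by hard voting. The sketch below is a plain soft vote (probability averaging); the paper's "objective" majority vote additionally weights ensemble members by quality, which is not reproduced here:

```python
import numpy as np

def soft_vote(prob_list):
    """Fuse calibrated class-probability matrices (rows = samples,
    columns = classes) from several base classifiers by averaging,
    then pick the argmax class per sample."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

# Three hypothetical calibrated classifiers, two samples, two classes.
p1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.6, 0.4], [0.2, 0.8]])
p3 = np.array([[0.3, 0.7], [0.1, 0.9]])
pred = soft_vote([p1, p2, p3])
print(pred)  # [0 1]
```

The benefit of calibration is visible in the first sample: a hard vote would be split 2-1 toward class 1, but the averaged probabilities (0.6 vs. 0.4) favour class 0 because the dissenting classifier is far more confident.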
Citations: 2
Acknowledgement to Reviewers of the International Journal of Image and Data Fusion in 2020
IF 2.3 Q3 REMOTE SENSING Pub Date: 2021-01-02 DOI: 10.1080/19479832.2021.1874635 Vol. 12(1), pp. i–ii
Farhang Aliyari, T. Bouwmans, Yinguo Cao, Yushi Chen, A. Cherukuri, Srinivasa Rao Dammavalam, Vaidehi Deshmukh, Songlin Du, A. Erturk, S. Goh, Qing Guo, Marcus Hammer, Zhaozheng Hu, Jincai Huang, Shuying Huang, Maryam Imani, A. Jenerowicz, Bin Jia, W. Kainz, Singara Singh Kasana, J. Keighobadi, M. F. A. Khanan, Beibei Li, Dong Li, Zengke Li, Huimin Liu, Menghua Liu, Qingjie Liu, Ran Liu, Shengheng Liu, Zhengyi Liu, D. Lizcano, D. Lu, Xiaocheng Lu, P. Marpu, Deepak Mishra, P. Nepa, Yi-Ning Ning, Teerapong Panboonyuen, Rakesh C. Patel, Z. Shao, Huan-si Shen, Weina Song, A. Stein, Jianbo Tang, Yunwei Tang, Ling Tong, J. Torres-Sospedra, Md Azher Uddin, Kishor P. Upla, Sowmya V jian wang, Mingwen Wang, Qi Wang, Siye Wang, Kai Wen, Mengquan Wu, Youxi Wu, Fu Xiao, Bo-Lun Xu, Gong-Tao Yan, Hongbo Yan, Feng-Mei Yang, Xue Yang, Yuegang Yu, X. Yuan, C. Yuen, Yun Zhang, Bobai Zhao, Wen-long Zhao, Chao Zhou, Guoqing Zhou, Haiyang Zhou, Weidong Zou
Citations: 0
Improving damage classification via hybrid deep learning feature representations derived from post-earthquake aerial images
IF 2.3 Q3 REMOTE SENSING Pub Date: 2020-12-30 DOI: 10.1080/19479832.2020.1864787 Vol. 13(1), pp. 1–20
Tarablesse Settou, M. Kholladi, Abdelkamel Ben Ali
ABSTRACT One of the crucial problems after an earthquake is how to quickly and accurately detect and identify damaged areas. Several automated methods have been developed to analyse remote sensing (RS) images for earthquake damage classification. The performance of damage classification depends mainly on powerful learned feature representations. Although hand-crafted features can achieve satisfactory performance to some extent, the performance gain is small and does not generalise well. Recently, the convolutional neural network (CNN) has demonstrated its capability to derive more powerful feature representations than hand-crafted features in many domains. Our main contribution in this paper is the investigation of hybrid feature representations derived from several pre-trained CNN models for earthquake damage classification. In contrast to previous works, we also explore combining feature representations extracted from the last two fully connected layers of a given CNN model. We validated our proposals on two large datasets of images that vary widely in scene characteristics, lighting conditions, and image characteristics, captured from different earthquake events and several geographic locations. Extensive experiments showed that our proposals significantly improve performance.
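Combining features from two CNN layers often amounts to per-layer normalisation followed by concatenation. The sketch below uses random stand-ins for the layer activations (the paper does not publish its exact fusion recipe, so this is one simple, assumed construction):

```python
import numpy as np

def hybrid_features(feat_a, feat_b):
    """Concatenate L2-normalised feature matrices taken from two CNN
    layers (rows = images). Per-layer normalisation keeps one layer's
    magnitude from dominating the joint representation."""
    def l2(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    return np.concatenate([l2(feat_a), l2(feat_b)], axis=1)

rng = np.random.default_rng(1)
fc6 = rng.random((5, 4096))   # stand-in for penultimate FC-layer activations
fc7 = rng.random((5, 4096))   # stand-in for last FC-layer activations
h = hybrid_features(fc6, fc7)
print(h.shape)  # (5, 8192)
```

After normalisation each half is a unit vector, so every hybrid row has norm sqrt(2); a downstream classifier (e.g. an SVM) then sees both layers on an equal footing.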
Citations: 4
Evaluation and correction of smartphone-based fine time range measurements
IF 2.3 Q3 REMOTE SENSING Pub Date : 2020-12-03 DOI: 10.1080/19479832.2020.1853614
Y. Bai, A. Kealy, Lucas Holden
ABSTRACT Wi-Fi-based positioning has been recognised as a useful and important technology for location-based services (LBS), accompanying the rapid development and adoption of smartphones since the beginning of the 21st century. However, no mature Wi-Fi-based positioning technology or method provided satisfying results over the past 20 years, until recently, when the IEEE 802.11mc standard was released with hardware support in the market; the standard uses a fine time measurement (FTM) protocol and multiple round-trip time (RTT) samples for more accurate and robust ranging without involving the received signal strength indicator (RSSI). This paper provides an evaluation and ranging offset correction approach for Wi-Fi FTM-based ranging. The characteristics of the ranging offset deviation errors are examined through two well-designed evaluation tests. In addition, the offset deviation errors from a CompuLab WILD router and a Google access point (AP) are compared. An average accuracy of 0.181 m was achieved after a typical offset correction process applied to ranging estimates obtained in a complex surrounding environment under line-of-sight (LOS) conditions. The research outcome will become a useful resource for implementing other algorithms, such as machine learning and multilateration, in our future research projects.
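The core of FTM-based ranging is converting round-trip times to one-way distances and subtracting a per-device calibration offset. A minimal sketch of that correction step follows; the RTT burst values and the 0.5 m offset are made-up illustrative numbers, not the paper's calibrated values.

```python
C = 299_792_458.0  # speed of light, m/s

def rtt_to_distance(rtt_seconds, offset_m=0.0):
    """One round-trip time -> one-way range, minus a per-device
    calibration offset (the correction step described in the paper)."""
    return C * rtt_seconds / 2.0 - offset_m

def corrected_range(rtt_burst_ns, offset_m):
    # 802.11mc reports a burst of RTT samples; averaging reduces noise.
    ranges = [rtt_to_distance(r * 1e-9, offset_m) for r in rtt_burst_ns]
    return sum(ranges) / len(ranges)

# A burst of eight simulated RTT samples (~67 ns ~= 10 m one-way range)
burst = [67.0, 66.5, 67.3, 66.8, 67.1, 66.9, 67.2, 66.7]
print(round(corrected_range(burst, offset_m=0.5), 3))  # -> 9.534
```

In practice the offset itself would be estimated by ranging against an anchor at a surveyed ground-truth distance and averaging the residuals.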
{"title":"Evaluation and correction of smartphone-based fine time range measurements","authors":"Y. Bai, A. Kealy, Lucas Holden","doi":"10.1080/19479832.2020.1853614","DOIUrl":"https://doi.org/10.1080/19479832.2020.1853614","url":null,"abstract":"ABSTRACT Wi-Fi-based positioning technology has been recognised as a useful and important technology for location-based service (LBS) accompanied by the rapid development and application of smartphones since the beginning of the 21st century. However, no mature technology or method of Wi-Fi-based positioning had provided a satisfying output in the past 20 years, until recently, when the IEEE 802.11mc standard was released and hardware-supported in the market, in which a fine time measurement (FTM) protocol and multiple round-trip time (RTT) was used for more accurate and robust ranging without the received signal strength indicator (RSSI) involved. This paper provided an evaluation and ranging offset correction approach for Wi-Fi FTM based ranging. The characteristics of the ranging offset deviation errors are specifically examined through two well-designed evaluation tests. In addition, the offset deviation errors from a CompuLab WILD router and a Google access point (AP) are also compared. An average of 0.181 m accuracy was achieved after a typical offset correction process to the ranging estimates obtained from a complex surrounding environment with line-of-sight (LOS) condition. 
The research outcome will become a useful resource for implementing other algorithms such as machine learning and multi-lateration for our future research projects.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":"12 1","pages":"185 - 202"},"PeriodicalIF":2.3,"publicationDate":"2020-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/19479832.2020.1853614","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41485134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Method of urban land change detection that is based on GF-2 high-resolution RS images
IF 2.3 Q3 REMOTE SENSING Pub Date : 2020-11-30 DOI: 10.1080/19479832.2020.1845246
Zhongbin Li, Ping Wang, M. Fan, Yifan Long
ABSTRACT With the successful launch of China’s high spatial resolution satellite Gaofen-2 (GF-2), the use of high spatial resolution satellite images for land change detection has high research potential. Based on GF-2 images, this study combines principal component analysis with the spectral feature change method to identify different land changes in the form of differently coloured patches. Then, three decision tree classification models are constructed to automatically detect the changes, including increases in the number of airports and buildings and increases or decreases in vegetation. Further, using QuickBird images of the same regions in the same periods, a sample of 2624 pixels is selected by stratified random sampling to verify the accuracy of the change detection results. The results show that the overall accuracy of the extracted land change information was 98.21%, with a Kappa coefficient of 0.9604. Therefore, the method used in this study for detecting land change and extracting land change information is proven to be effective.
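The reported overall accuracy and Kappa coefficient are standard statistics computed from the validation sample's confusion matrix. A short sketch, using illustrative two-class counts rather than the paper's 2624-pixel tally:

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                      # observed agreement (OA)
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2  # chance agreement
    return po, (po - pe) / (1 - pe)

# Hypothetical change/no-change validation counts (not the paper's data)
cm = [[950, 20],
      [10, 1020]]
oa, kappa = overall_accuracy_and_kappa(cm)
print(round(oa, 4), round(kappa, 4))
```

Kappa discounts the agreement expected by chance from the class marginals, which is why it is reported alongside overall accuracy in change detection studies.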
{"title":"Method of urban land change detection that is based on GF-2 high-resolution RS images","authors":"Zhongbin Li, Ping Wang, M. Fan, Yifan Long","doi":"10.1080/19479832.2020.1845246","DOIUrl":"https://doi.org/10.1080/19479832.2020.1845246","url":null,"abstract":"ABSTRACT With the successful launch of China’s high spatial resolution satellite Gaofen-2 (GF-2), the use of high spatial resolution satellite images for land change detection has high research potential. Based on the images from GF-2, this study combines principal component analysis and the spectral feature change method to identify different land changes in the form of different coloured patches. Then, three decision tree classification models are constructed to automatically detect the change, which includes information on the increase in the number of airports and buildings and increased or decreased vegetation. Further, through Quick Bird images for identical regions in the same periods, a sample of 2624 pixels is selected using a stratified random sampling method to verify the accuracy of the results indicating a change. The results show that the overall accuracy of the extracted information on land change was 98.21%, and the Kappa coefficient was 0.9604. 
Therefore, the method for land change detection and extraction of land change information used in this study is proven to be effective.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":"13 1","pages":"278 - 295"},"PeriodicalIF":2.3,"publicationDate":"2020-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/19479832.2020.1845246","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46391844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3