Pub Date: 2021-12-26 | DOI: 10.1080/19479832.2021.2019132
Bikash Meher, S. Agrawal, Rutuparna Panda, A. Abraham
ABSTRACT The aim of remote sensing image fusion is to merge a high spectral resolution multispectral (MS) image with a high spatial resolution panchromatic (PAN) image to obtain a high spatial resolution MS image with minimal spectral distortion. Conventional pixel-level fusion techniques suffer from halo effects and gradient reversal. To solve this problem, a new region-based method using anisotropic diffusion (AD) for remote sensing image fusion is investigated. The basic idea is to fuse only the ‘Y’ component (of the YCbCr colour space) of the MS image with the PAN image. The base layers and detail layers of the input images, obtained using the AD process, are segmented using the fuzzy c-means (FCM) algorithm and combined based on their spatial frequency. The fusion experiment uses three data sets. The contributions of this paper are as follows: i) it solves the chromaticity loss problem during fusion, ii) the AD filter with a region-based fusion approach is brought into the context of remote sensing applications for the first time, and iii) the edge information in the input images is retained. A qualitative and quantitative comparison is made with classic and recent state-of-the-art methods. The experimental results reveal that the proposed method produces promising fusion results.
"A region based remote sensing image fusion using anisotropic diffusion process", International Journal of Image and Data Fusion, vol. 13, pp. 219–243.
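The base/detail decomposition and spatial-frequency selection described in this abstract can be illustrated in a few lines. This is a minimal, whole-image sketch (the paper works per region via FCM segmentation); the diffusion parameters and image sizes are made up for illustration, not taken from the paper:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.15):
    """Perona-Malik diffusion: smooths homogeneous regions while
    preserving edges, yielding the 'base' layer of an image."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite-difference gradients toward the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conduction coefficients
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def spatial_frequency(block):
    """Row/column activity measure used to pick the more detailed source."""
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

# toy example: decompose two images and fuse base and detail layers
y = np.random.rand(64, 64)    # stand-in for the MS 'Y' component
pan = np.random.rand(64, 64)  # stand-in for the PAN image
base_y, base_pan = anisotropic_diffusion(y), anisotropic_diffusion(pan)
detail_y, detail_pan = y - base_y, pan - base_pan
# keep the detail layer with the higher spatial frequency
fused_detail = detail_y if spatial_frequency(detail_y) > spatial_frequency(detail_pan) else detail_pan
fused = 0.5 * (base_y + base_pan) + fused_detail
```

In the paper the spatial-frequency comparison happens inside each FCM-derived region rather than globally, which is what lets edges from both sources survive.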
Pub Date: 2021-12-22 | DOI: 10.1080/19479832.2021.2019133
Achala Shakya, M. Biswas, M. Pal
ABSTRACT Remote sensing image classification is difficult, especially for agricultural crops with identical phenological growth periods. In this context, multi-sensor image fusion allows a comprehensive representation of biophysical and structural information. Recently, Convolutional Neural Network (CNN)-based methods have been used for several applications due to their spatial-spectral interpretability. Hence, this study explores the potential of fused multi-temporal Sentinel 1 (S1) and Sentinel 2 (S2) images for Land Use/Land Cover classification over an agricultural area in India. For classification, a Bayesian-optimised 2D CNN-based DL classifier and a pixel-based SVM classifier were used. For fusion, a CNN-based siamese network with the Ratio-of-Laplacian pyramid method was used for images acquired over the entire winter cropping period. This fusion strategy leads to better interpretability of results; the 2D CNN-based DL classifier performed well in terms of classification accuracy for both single-month (95.14% and 96.11%) and multi-temporal (99.87% and 99.91%) fusion, compared with the SVM's accuracy for single-month (80.02% and 81.36%) and multi-temporal (95.69% and 95.84%) fusion. Results indicate better performance by Vertical-Vertical polarised fused images than Vertical-Horizontal polarised fused images, implying the need to analyse the classified images obtained by DL classifiers alongside the classification accuracy.
"Fusion and classification of multi-temporal SAR and optical imagery using convolutional neural network", International Journal of Image and Data Fusion, vol. 13, pp. 113–135.
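The siamese network that produces the fusion weight map is beyond a short sketch, but the pyramid-style detail fusion it feeds can be illustrated with a one-level base/detail split. Everything below (the 3×3 mean filter, the max-abs detail rule, the array sizes) is a simplified stand-in, not the paper's Ratio-of-Laplacian implementation:

```python
import numpy as np

def blur3x3(img):
    """Crude 3x3 mean filter (periodic edges) standing in for the
    pyramid's low-pass step."""
    acc = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / 9.0

def fuse_pair(sar, opt):
    """One-level detail fusion: keep the optical base layer and, per
    pixel, the stronger of the two detail (Laplacian-like) layers."""
    base_s, base_o = blur3x3(sar), blur3x3(opt)
    det_s, det_o = sar - base_s, opt - base_o
    detail = np.where(np.abs(det_s) >= np.abs(det_o), det_s, det_o)
    return base_o + detail

sar = np.random.rand(48, 48)   # stand-in S1 backscatter band
opt = np.random.rand(48, 48)   # stand-in S2 reflectance band
fused = fuse_pair(sar, opt)
```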
Pub Date: 2021-11-15 | DOI: 10.1080/19479832.2021.2003446
T. Ghoniemy, M. Hammad, A. Amein, T. Mahmoud
ABSTRACT Synthetic aperture radar (SAR) images depend on the dielectric properties of objects at certain incident angles. Thus, vessels and other metallic objects appear clearly in SAR images; however, they are difficult to distinguish in optical images. The synergy of these two types of images yields not only high spatial and spectral resolutions but also a good interpretation of the image scene. In this paper, a hybrid pixel-level image fusion method is proposed for integrating panchromatic (PAN), multispectral (MS) and SAR images. The fusion is performed using a multi-stage guided filter (MGF) for optical image pansharpening, to preserve spatial details, and nested Gram-Schmidt (GS) and Curvelet Transform (CVT) methods for the SAR and optical images, to increase the quality of the final fused image and benefit from the SAR image properties. The accuracy and performance of the proposed method are appraised using Landsat-8 Operational Land Imager (OLI) and Sentinel-1 images, both subjectively and objectively, using different quality metrics. Moreover, the proposed method is compared with a number of state-of-the-art fusion techniques. The results show significant improvements in both visual quality and the spatial and spectral evaluation metrics. Consequently, the proposed method is capable of highlighting maritime activity for further processing.
"Multi-stage guided-filter for SAR and optical satellites images fusion using Curvelet and Gram Schmidt transforms for maritime surveillance", International Journal of Image and Data Fusion, vol. 14, pp. 38–57.
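The Gram-Schmidt step belongs to the component-substitution family of pansharpening methods. A minimal sketch of that family (synthetic intensity plus detail injection, not the paper's nested GS/CVT pipeline, with made-up array sizes) is:

```python
import numpy as np

def component_substitution_pansharpen(ms, pan):
    """Component-substitution pansharpening (GS/IHS family): inject the
    difference between PAN and a synthetic intensity into every MS band."""
    intensity = ms.mean(axis=2)       # synthetic low-resolution PAN
    injection = pan - intensity       # spatial detail to add
    return ms + injection[..., None]  # broadcast the detail over bands

ms = np.random.rand(32, 32, 4)   # stand-in multispectral cube (H, W, bands)
pan = np.random.rand(32, 32)     # stand-in co-registered panchromatic band
sharp = component_substitution_pansharpen(ms, pan)
```

A useful sanity check on this family: after injection, the band-mean of the sharpened cube equals the PAN band exactly, which is what "substitution" means here.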
Pub Date: 2021-11-14 | DOI: 10.1080/19479832.2021.2001051
Reza Seifi Majdar, H. Ghassemian
ABSTRACT Spectral data and spatial information such as shape and texture features can be fused to improve the classification of hyperspectral images. In this paper, a novel approach for fusing spectral and spatial features (texture and shape features) in a probabilistic framework is proposed. Gabor filters are applied to obtain the texture features, and morphological profiles (MPs) are used to obtain the shape features. These features are classified separately by a support vector machine (SVM), so the per-pixel probabilities can be estimated. A novel meta-heuristic optimisation method called the Arithmetic Optimization Algorithm (AOA) is used to compute weighted combinations of these probabilities. Three parameters, α, β and γ, determine the weight of each feature in the combination; their optimal values are calculated by the AOA. The proposed method is evaluated on three widely used hyperspectral data sets: Indian Pines, Pavia University and Salinas. The experimental results demonstrate the effectiveness of the proposed combination in hyperspectral image classification, particularly with few labelled samples. Moreover, this method is more accurate than a number of recent spectral-spatial classification methods.
"Spectral-spatial classification fusion for hyperspectral images in the probabilistic framework via arithmetic optimization Algorithm", International Journal of Image and Data Fusion, vol. 13, pp. 262–277.
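The weighted probability combination can be sketched as follows. Plain random search stands in for the AOA, and the per-pixel probabilities and labels are synthetic; only the α·P_spectral + β·P_texture + γ·P_shape fusion rule comes from the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_cls = 200, 5

def fake_probs():
    """Stand-in for one SVM's per-pixel class probabilities (rows sum to 1)."""
    p = rng.random((n_pix, n_cls))
    return p / p.sum(axis=1, keepdims=True)

p_spectral, p_texture, p_shape = fake_probs(), fake_probs(), fake_probs()
labels = rng.integers(0, n_cls, n_pix)   # stand-in ground truth

def accuracy(w):
    """Fuse the three probability maps with weights (alpha, beta, gamma)."""
    a, b, g = w
    fused = a * p_spectral + b * p_texture + g * p_shape
    return np.mean(fused.argmax(axis=1) == labels)

# random search stands in for the Arithmetic Optimization Algorithm here
best_w, best_acc = None, -1.0
for _ in range(500):
    w = rng.random(3)
    w /= w.sum()                 # keep alpha + beta + gamma = 1
    acc = accuracy(w)
    if acc > best_acc:
        best_w, best_acc = w, acc
```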
Pub Date: 2021-10-02 | DOI: 10.1080/19479832.2021.1970931
Jiping Liu, M. Konečný, Qingyun Du, Shenghua Xu, F. Ren, Xianghong Che
Looking back over the past decade, superstorms, wildfires, floods, geological hazards, and monster earthquakes have taken unimaginable tolls all over the planet. In 2020, nearly 138 million people suffered from various natural disasters throughout China, where 591 people died or disappeared and 5.89 million people were relocated for emergency resettlement. This led to direct economic losses of 370.15 billion CNY. With advances in data acquisition technologies such as remote sensing and the Internet of Things, disaster-related data can be collected rapidly and easily. However, disaster-related data vary in acquisition methodology and, as such, vary in geographic scope and resolution; thus, how to fuse various disaster-related data is of great significance for emergency disaster reduction (Liu et al. 2020). Disaster-related data are essential in understanding the impacts and costs of disasters, and data fusion plays an essential role in disaster prediction, reduction, assessment, and intelligent services. Using multisource data can improve the availability and quality of information derived at various levels (Liu et al. 2018, Liu et al. 2020). For emergency response in particular, it is imperative to integrate multisource data to provide the latest, accurate and timely information at various scales for disaster reduction services. For example, a large-scale landslide occurred in the Jinsha River Basin at the border of Sichuan and Tibet on 10 October 2018 and formed a barrier lake, which posed a great threat to the lives and property of people in the downstream Jinsha River region (Qiu et al. 2017, Li et al. 2020a). Using disaster multisource data fusion (Gamba 2014), spatiotemporal process simulation (Wang et al. 2020), visual analysis and risk assessment (Li et al. 2020), and disaster information intelligent services, decision-making information was generated to support disaster emergency management (Liu et al. 2018).
This special issue on Data Fusion for Integrated Disaster Reduction Intelligence Service focuses on the latest theoretical and technical issues related to disaster-related data fusion, and aims to clarify current research progress and provide an opportunity for researchers in this field to learn from and communicate with each other. The special issue is supported by the National Key Research and Development Program of China under Grant No. 2016YFC0803101 and includes six articles spanning various topics. Specifically, an improved frequency domain integration approach is proposed that combines GNSS and accelerometers, using GNSS to gain an accurate initial position for reconstructing dynamic displacements. An online emergency mapping framework based on a disaster scenario model is introduced, covering mapping knowledge rules, mapping templates, a map symbol engine, and a simple wizard to shorten the mapping cycle in emergencies. A suitability visualisation method for flood 3D scenes guided by disaster information is realised through the fusion of basic geographic scenes, flood spatio-temporal processes, and disaster object models, helping users quickly obtain flood disaster information. An unsupervised Chinese address extraction method is also presented.
International Journal of Image and Data Fusion, 2021, VOL. 12, NO. 4, 265–267. https://doi.org/10.1080/19479832.2021.1970931
"The latest progress of data fusion for integrated disaster reduction intelligence service", International Journal of Image and Data Fusion, vol. 12, pp. 265–267.
Pub Date: 2021-10-02 | DOI: 10.1080/19479832.2021.1995136
M. Abdelkareem, S. Auer, A. B. Pour, Jianguo Chen, Jian Cheng, M. Datcu, Huihui Feng, Shubham Gupta, M. Hashim, Maryam Imani, W. Kainz, M. S. Karoui, T. Kavzoglu, Fatemeh Kowkabi, Anil Kumar, Xue Li, Zengke Li, Feng
The editors of the International Journal of Image and Data Fusion wish to express their sincere gratitude to the following reviewers for their valued contribution to the journal in 2021. Mohamed Abdelkareem Stefan Auer Amin Beiranvand Pour Jianguo Chen Jian Cheng Mihai Datcu Huihui Feng Shubham Gupta Mazlan Hashim Maryam Imani Wolfgang Kainz Moussa Sofiane Karoui Taskin Kavzoglu Fatemeh Kowkabi Anil Kumar Xue Li Zengke Li Feng Ling Zhong Lu Arash Malekian Lamin R. Mansaray Seyed Jalaleddin Mousavirad Mircea Paul Muresan Henry Y.T. Ngan Mohammad Parsa Shengliang Pu Jinxi Qian Omeid Rahmani H. Ranjbar Wellington Pinheiro dos Santos Hadi Shahriari Huanfeng Shen Yuqi Tang Kishor Upla INTERNATIONAL JOURNAL OF IMAGE AND DATA FUSION 2021, VOL. 12, NO. 4, i–ii https://doi.org/10.1080/19479832.2021.1995136
"Acknowledgement to Reviewers of the International Journal of Image and Data Fusion in 2021", International Journal of Image and Data Fusion, vol. 12, pp. i–ii.
Pub Date: 2021-08-30 | DOI: 10.1080/19479832.2021.1972047
M. Elkholy, M. Mostafa, H. M. Ebeid, M. Tolba
ABSTRACT Hyperspectral imaging (HSI) is a beneficial source of information for numerous civil and military applications, but high dimensionality and strong correlation limit HSI classification performance. Band selection aims at selecting the most informative bands to minimise the computational cost and eliminate redundant information. In this paper, we propose a new unsupervised band selection approach that benefits from the current dominant stream of deep learning frameworks. The proposed approach consists of two consecutive phases: unmixing and clustering. In the unmixing phase, we utilise a nonlinear deep autoencoder to extract accurate material spectra. In the clustering phase, we calculate the variance of each obtained endmember to construct a variance vector, and classical K-means is then adopted to cluster the variance vectors. Finally, the optimal band subset is obtained by choosing only one spectral band for each cluster. We carried out several experiments on three hyperspectral datasets to test the feasibility and generality of the proposed approach. Experimental results indicate that the proposed approach surpasses several state-of-the-art counterparts by an average of 4% in terms of overall accuracy.
"Unsupervised hyperspectral band selection with deep autoencoder unmixing", International Journal of Image and Data Fusion, vol. 13, pp. 244–261.
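The cluster-then-pick-one-band idea can be illustrated with band variances and a tiny 1-D K-means. The deep autoencoder unmixing is replaced here by raw per-band statistics, and the cube dimensions and cluster count are made up, so this is a structural sketch only:

```python
import numpy as np

rng = np.random.default_rng(1)
cube = rng.random((50, 50, 30))                # stand-in HSI cube (H, W, bands)
band_var = cube.reshape(-1, 30).var(axis=0)    # one variance per band

def kmeans_1d(x, k=5, n_iter=25, seed=0):
    """Tiny 1-D K-means used to group bands with similar variance."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(n_iter):
        assign = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = x[assign == j].mean()
    return assign

assign = kmeans_1d(band_var, k=5)

# keep, from each cluster, the band closest to the cluster's mean variance
selected = []
for j in np.unique(assign):
    idx = np.where(assign == j)[0]
    centre = band_var[idx].mean()
    selected.append(int(idx[np.abs(band_var[idx] - centre).argmin()]))
```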
Pub Date: 2021-08-29 | DOI: 10.1080/19479832.2021.1967468
Xu Liu, Jian Wang, Jie Zhen, Houzeng Han, C. Hancock
ABSTRACT The accelerometer frequency domain integration approach (FDIA) is being actively applied to calculate dynamic displacement responses of large engineering structures. However, it is a relative acceleration measurement as the initial position is unavailable. GNSS offers direct displacement measurements, but has the limitation of relatively low frequency of data compared with alternative measurement techniques. Therefore, this paper proposes an improved FDIA utilising the advantages of GNSS to gain accurate information about the initial position. The performance of the proposed approach is first validated through software simulation. Following the validation, a series of shaking table tests using various vibration frequencies (0.5 HZ, 1 HZ, 1.5 HZ, 2 HZ and 2.5 HZ) are performed at the south square of Beijing University of Civil Engineering and Architecture (BUCEA) using one GNSS receiver and one accelerometer. The results show that the proposed approach can effectively avoid the uncertainty of the initial value and thus enhance the direct measurement accuracy of the dynamic displacements of structures, with root mean square error (RMSE) decreasing from 11.4 mm to 6.8 mm.
"GNSS-aided accelerometer frequency domain integration approach to monitor structural dynamic displacements", Xu Liu, Jian Wang, Jie Zhen, Houzeng Han, C. Hancock. International Journal of Image and Data Fusion, 12(1), 268–281. Published 2021-08-29. DOI: 10.1080/19479832.2021.1967468
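The frequency-domain double integration at the core of FDIA can be sketched as follows: divide the acceleration spectrum by −ω² and zero the low-frequency band where integration drift dominates (the band that GNSS supplies in the proposed fusion). This is a minimal illustrative sketch, not the authors' implementation; the function name and the cutoff `f_min` are assumptions.

```python
import numpy as np

def fdia_displacement(acc, fs, f_min=0.3):
    """Double-integrate acceleration to displacement in the frequency
    domain: X(w) = -A(w) / w^2. Frequencies below f_min (hypothetical
    cutoff, Hz) are zeroed to suppress low-frequency drift."""
    n = len(acc)
    A = np.fft.rfft(acc)                       # acceleration spectrum
    f = np.fft.rfftfreq(n, d=1.0 / fs)         # frequency axis in Hz
    D = np.zeros_like(A)
    band = f >= f_min
    D[band] = -A[band] / (2 * np.pi * f[band]) ** 2
    return np.fft.irfft(D, n)
```

In the proposed fusion, the missing band below `f_min` (including the initial position) would come from the low-rate GNSS displacement record rather than being discarded.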
Pub Date : 2021-08-12 | DOI: 10.1080/19479832.2021.1953620
Asma Elyounsi, H. Tlijani, M. Bouhlel
ABSTRACT Detection and classification with traditional neural network methods such as the multilayer perceptron (MLP), feed-forward networks and back-propagation neural networks show several drawbacks, including slow convergence and an inability to cope with large image sizes, especially for radar images. As a result, these methods are being replaced by higher-order classification methods, namely Higher Order Neural Networks (HONN): the Functional Link Artificial Neural Network (FLANN), the Pi-Sigma Neural Network (PSNN), the Product Unit Neural Network (PUNN) and the Higher Order Processing Unit Neural Network. In this paper, we address radar object detection and classification with a new strategy combining the PSNN with HOGE, a proposed edge-feature extraction method based on morphological operators and the histogram of oriented gradients (HOG). To recognise a radar object, we extract HOG features of the object region and classify the target with the PSNN; the HOGE feature vector serves as the input of the pi-sigma network. The proposed method was tested and validated through experiments on 2D and 3D ISAR images.
"A moving ISAR-object recognition using pi-sigma neural networks based on histogram of oriented gradient of edge", Asma Elyounsi, H. Tlijani, M. Bouhlel. International Journal of Image and Data Fusion, 13(1), 297–315. Published 2021-08-12. DOI: 10.1080/19479832.2021.1953620
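A pi-sigma network, the classifier used above, has a single layer of trainable summing (sigma) units whose outputs are combined by a fixed product (pi) unit before a squashing nonlinearity. A minimal forward pass, assuming illustrative names and sizes (not the authors' code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class PiSigmaNN:
    """k-th order pi-sigma unit: k trainable linear sums, one fixed product."""

    def __init__(self, n_in, k, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(k, n_in))  # trainable weights
        self.b = np.zeros(k)                            # trainable biases

    def forward(self, x):
        s = self.W @ x + self.b        # sigma layer: k linear combinations
        return sigmoid(np.prod(s))     # pi unit: product, then squash
```

Because only the sigma layer is trainable, the network realises a k-th order polynomial of the input (here, a HOGE feature vector) with far fewer weights than a general higher-order network.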
Pub Date : 2021-08-11 | DOI: 10.1080/19479832.2021.1961316
Mengmeng Liu, Jiping Liu, Shenghua Xu, Tao Zhou, Yu Ma, Fuhao Zhang, M. Konečný
ABSTRACT The quality of 'non-landslide' sample data impacts the accuracy of geological hazard risk assessment. This research proposes a method to improve the performance of the support vector machine (SVM) in landslide susceptibility evaluation by refining the quality of 'non-landslide' samples through fuzzy c-means (FCM) clustering, so as to generate more reliable susceptibility maps. First, three 'non-landslide' sample selection scenarios are constructed according to the following principles: 1) random selection from low-slope areas (scenario-SS), 2) random selection from areas with no recorded hazards (scenario-RS), and 3) selection from the optimal FCM model (scenario-FCM); each scenario is then paired with 10,193 positive landslide samples. Next, the performance of the three scenarios in the SVM models is compared and evaluated using statistical indicators such as the proportion of disaster points, the density of disaster points, precision, the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC). The evaluation results show that the 'non-landslide' negative samples based on the FCM model are more reasonable, and the hybrid method combining the SVM and FCM models exhibits the highest prediction efficiency.
Scenario-FCM produces an overall accuracy of approximately 89.7% (AUC), followed by scenario-SS (86.7%) and scenario-RS (85.6%).
"Landslide susceptibility mapping with the fusion of multi-feature SVM model based FCM sampling strategy: A case study from Shaanxi Province", Mengmeng Liu, Jiping Liu, Shenghua Xu, Tao Zhou, Yu Ma, Fuhao Zhang, M. Konečný. International Journal of Image and Data Fusion, 12(1), 349–366. Published 2021-08-11. DOI: 10.1080/19479832.2021.1961316
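The FCM clustering used to screen 'non-landslide' samples follows the standard update equations: memberships u_ik proportional to d_ik^(−2/(m−1)), and centers as membership-weighted means. A generic numpy sketch under assumed parameter names (not the paper's code, which additionally selects negative samples from the resulting clusters):

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Fuzzy c-means. X: (n, d) data matrix; returns centers V (c, d)
    and memberships U (c, n) whose columns sum to 1. m > 1 is the
    fuzzifier (m = 2 is the common default)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                                     # valid memberships
    for _ in range(n_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)       # weighted centers
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0, keepdims=True)           # membership update
    return V, U
```

Points with high, unambiguous membership in a cluster far from known landslides would then be candidate 'non-landslide' negatives for the SVM.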