
Latest publications in the International Journal of Image and Data Fusion

A region based remote sensing image fusion using anisotropic diffusion process
IF 2.3 Q3 REMOTE SENSING Pub Date : 2021-12-26 DOI: 10.1080/19479832.2021.2019132
Bikash Meher, S. Agrawal, Rutuparna Panda, A. Abraham
ABSTRACT The aim of remote sensing image fusion is to merge a high spectral resolution multispectral (MS) image with a high spatial resolution panchromatic (PAN) image to obtain a high spatial resolution MS image with less spectral distortion. Conventional pixel-level fusion techniques suffer from the halo effect and gradient reversal. To solve this problem, a new region-based method using anisotropic diffusion (AD) for remote sensing image fusion is investigated. The basic idea is to fuse only the ‘Y’ component (of the YCbCr colour space) of the MS image with the PAN image. The base layers and detail layers of the input images, obtained using the AD process, are segmented using the fuzzy c-means (FCM) algorithm and combined based on their spatial frequency. The fusion experiment uses three data sets. The contributions of this paper are as follows: i) it solves the chromaticity loss problem at the time of fusion, ii) the AD filter with the region-based fusion approach is brought into the context of remote sensing applications for the first time, and iii) the edge information in the input images is retained. A qualitative and quantitative comparison is made with classic and recent state-of-the-art methods. The experimental results reveal that the proposed method produces promising fusion results.
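As a rough illustration of the base/detail decomposition described above, the sketch below applies Perona-Malik anisotropic diffusion to toy ‘Y’ and PAN arrays and keeps the detail layer with the higher spatial frequency. It is a simplified stand-in for the paper's pipeline: the per-region FCM decision is replaced by a single whole-image choice, and all parameters (kappa, gamma, iteration count) and the toy inputs are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, gamma=0.15):
    """Perona-Malik diffusion: smooths homogeneous regions, keeps edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences towards the four nearest neighbours
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # exponential edge-stopping function: little flux across strong edges
        flux = sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
        u += gamma * flux
    return u

def spatial_frequency(img):
    """Row/column activity measure used to pick the more detailed layer."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

# toy stand-ins for the MS image's Y component and the PAN image
rng = np.random.default_rng(0)
y = rng.normal(0.5, 0.1, (64, 64))
pan = rng.normal(0.5, 0.2, (64, 64))

base_y, base_pan = anisotropic_diffusion(y), anisotropic_diffusion(pan)
det_y, det_pan = y - base_y, pan - base_pan
# whole-image stand-in for the per-region (FCM-segmented) decision rule
detail = det_pan if spatial_frequency(det_pan) > spatial_frequency(det_y) else det_y
fused = 0.5 * (base_y + base_pan) + detail
```

Because diffusion only smooths, the detail layer is simply the residual `img - base`, which is what makes the halo-free decomposition possible.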
International Journal of Image and Data Fusion, vol. 13, pp. 219-243.
Citations: 1
Fusion and classification of multi-temporal SAR and optical imagery using convolutional neural network
IF 2.3 Q3 REMOTE SENSING Pub Date : 2021-12-22 DOI: 10.1080/19479832.2021.2019133
Achala Shakya, M. Biswas, M. Pal
ABSTRACT Remote sensing image classification is difficult, especially for agricultural crops with identical phenological growth periods. In this context, multi-sensor image fusion allows a comprehensive representation of biophysical and structural information. Recently, Convolutional Neural Network (CNN)-based methods have been used for several applications due to their spatial-spectral interpretability. Hence, this study explores the potential of fused multi-temporal Sentinel 1 (S1) and Sentinel 2 (S2) images for Land Use/Land Cover classification over an agricultural area in India. For classification, a Bayesian-optimised 2D CNN-based DL classifier and a pixel-based SVM classifier were used. For fusion, a CNN-based siamese network with the Ratio-of-Laplacian pyramid method was used for the images acquired over the entire winter cropping period. This fusion strategy leads to better interpretability of results; the 2D CNN-based DL classifier also performed well in terms of classification accuracy for both single-month (95.14% and 96.11%) and multi-temporal (99.87% and 99.91%) fusion, compared with the SVM's accuracy for single-month (80.02% and 81.36%) and multi-temporal (95.69% and 95.84%) fusion. Results indicate better performance by Vertical-Vertical polarised fused images than Vertical-Horizontal polarised fused images, implying the need to analyse the classified images obtained by DL classifiers along with the classification accuracy.
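Pixel-wise CNN classification of fused imagery, as in the study above, starts by cutting a small spatial patch around every pixel. The helper below is a generic, hypothetical sketch of that preprocessing step only (patch size, reflect padding, and the toy band stack are assumptions; the paper's actual network and preprocessing are not reproduced here):

```python
import numpy as np

def extract_patches(image, size=5):
    """Cut a (size x size) neighbourhood around every pixel of an
    (H, W, bands) stack, reflect-padding the borders, so each pixel can be
    classified from its spatial context (the usual 2D CNN input layout)."""
    pad = size // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    h, w, b = image.shape
    patches = np.empty((h * w, size, size, b), dtype=image.dtype)
    for i in range(h):
        for j in range(w):
            patches[i * w + j] = padded[i:i + size, j:j + size, :]
    return patches

# toy fused S1/S2 stack: 8 x 8 pixels, 6 bands
img = np.arange(8 * 8 * 6, dtype=float).reshape(8, 8, 6)
patches = extract_patches(img, size=5)
print(patches.shape)  # (64, 5, 5, 6)
```

Each row of `patches` can then be fed to a 2D CNN, while a pixel-based SVM would instead use only the central 6-band spectrum of each patch.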
International Journal of Image and Data Fusion, vol. 13, pp. 113-135.
Citations: 9
Multi-stage guided-filter for SAR and optical satellites images fusion using Curvelet and Gram Schmidt transforms for maritime surveillance
IF 2.3 Q3 REMOTE SENSING Pub Date : 2021-11-15 DOI: 10.1080/19479832.2021.2003446
T. Ghoniemy, M. Hammad, A. Amein, T. Mahmoud
ABSTRACT Synthetic aperture radar (SAR) images depend on the dielectric properties of objects at certain incident angles. Thus, vessels and other metallic objects appear clearly in SAR images; however, they are difficult to distinguish in optical images. Synergy of these two types of images leads not only to high spatial and spectral resolutions but also to a good interpretation of the image scene. In this paper, a hybrid pixel-level image fusion method is proposed for integrating panchromatic (PAN), multispectral (MS) and SAR images. The fusion method uses a Multi-stage guided filter (MGF) for optical image pansharpening, to preserve spatial details, and nested Gram-Schmidt (GS) and Curvelet-Transform (CVT) methods for the SAR and optical images, to increase the quality of the final fused image and benefit from the SAR image properties. The accuracy and performance of the proposed method are appraised using Landsat-8 Operational-Land-Imager (OLI) and Sentinel-1 images, both subjectively and objectively using different quality metrics. Moreover, the proposed method is compared to a number of state-of-the-art fusion techniques. The results show significant improvements in both visual quality and the spatial and spectral evaluation metrics. Consequently, the proposed method is capable of highlighting maritime activity for further processing.
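The guided filter at the heart of an MGF stage can be sketched in a few lines. The following is a single-stage implementation of the standard formulation (He et al.), offered only as a building block under assumed `r`/`eps` values; the paper's multi-stage arrangement and its GS/CVT nesting are not reproduced here.

```python
import numpy as np

def box(a, r):
    """Windowed sum over (2r+1)x(2r+1) neighbourhoods via integral images."""
    h, w = a.shape
    s = np.pad(np.cumsum(np.cumsum(a, axis=0), axis=1), ((1, 0), (1, 0)))
    y0 = np.clip(np.arange(h) - r, 0, h); y1 = np.clip(np.arange(h) + r + 1, 0, h)
    x0 = np.clip(np.arange(w) - r, 0, w); x1 = np.clip(np.arange(w) + r + 1, 0, w)
    return (s[np.ix_(y1, x1)] - s[np.ix_(y0, x1)]
            - s[np.ix_(y1, x0)] + s[np.ix_(y0, x0)])

def guided_filter(I, p, r=4, eps=1e-3):
    """Smooth p while preserving the edges of the guide image I."""
    N = box(np.ones_like(I), r)                 # window pixel counts
    mI, mp = box(I, r) / N, box(p, r) / N       # local means
    a = (box(I * p, r) / N - mI * mp) / (box(I * I, r) / N - mI ** 2 + eps)
    b = mp - a * mI
    return (box(a, r) / N) * I + box(b, r) / N  # locally linear output

# denoise a noisy ramp using the clean ramp as guide
rng = np.random.default_rng(0)
guide = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
noisy = guide + rng.normal(0.0, 0.05, guide.shape)
smoothed = guided_filter(guide, noisy, r=4, eps=1e-3)
```

The locally linear model `out = a*I + b` is what lets the filter transfer the guide's spatial detail into the filtered image, which is why it suits pansharpening.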
International Journal of Image and Data Fusion, vol. 14, pp. 38-57.
Citations: 2
Spectral-spatial classification fusion for hyperspectral images in the probabilistic framework via arithmetic optimization Algorithm
IF 2.3 Q3 REMOTE SENSING Pub Date : 2021-11-14 DOI: 10.1080/19479832.2021.2001051
Reza Seifi Majdar, H. Ghassemian
ABSTRACT Spectral data and spatial information such as shape and texture features can be fused to improve the classification of hyperspectral images. In this paper, a novel approach to fusing spectral and spatial features (texture features and shape features) in a probabilistic framework is proposed. Gabor filters are applied to obtain the texture features, and morphological profiles (MPs) are used to obtain the shape features. These features are classified separately by the support vector machine (SVM); therefore, the per-pixel probabilities can be estimated. A novel meta-heuristic optimisation method called the Arithmetic Optimization Algorithm (AOA) is used to weight the combination of these probabilities. Three parameters, α, β and γ, determine the weight of each feature in the combination; their optimal values are calculated by AOA. The proposed method is evaluated on three widely used hyperspectral data sets: Indian Pines, Pavia University and Salinas. The experimental results demonstrate the effectiveness of the proposed combination in hyperspectral image classification, particularly with few labelled samples. Moreover, this method is more accurate than a number of recent spectral-spatial classification methods.
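The probabilistic fusion step above reduces to a convex combination of three per-pixel probability maps. In the minimal sketch below, α, β and γ are fixed by hand where the paper tunes them with AOA, and the probability maps are random stand-ins for the three SVM outputs:

```python
import numpy as np

def softmax(z):
    """Turn raw scores into per-pixel class probabilities."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# random stand-ins for the per-pixel SVM probability maps obtained from
# spectral data, Gabor texture features and morphological-profile shape features
rng = np.random.default_rng(1)
n_pixels, n_classes = 100, 4
p_spec, p_tex, p_shape = (softmax(rng.normal(size=(n_pixels, n_classes)))
                          for _ in range(3))

# alpha, beta and gamma weight each feature's vote; the paper finds them
# with AOA, here they are fixed and sum to 1 so the fused map remains a
# probability distribution
alpha, beta, gamma = 0.5, 0.3, 0.2
p_fused = alpha * p_spec + beta * p_tex + gamma * p_shape
labels = p_fused.argmax(axis=1)
```

Constraining the weights to sum to 1 keeps each fused row a valid distribution, so the final label is simply the per-pixel argmax.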
International Journal of Image and Data Fusion, vol. 13, pp. 262-277.
Citations: 4
The latest progress of data fusion for integrated disaster reduction intelligence service
IF 2.3 Q3 REMOTE SENSING Pub Date : 2021-10-02 DOI: 10.1080/19479832.2021.1970931
Jiping Liu, M. Konečný, Qingyun Du, Shenghua Xu, F. Ren, Xianghong Che
Looking back over the past decade, superstorms, wildfires, floods, geological hazards, and massive earthquakes have taken unimaginable tolls all over the planet. In 2020, nearly 138 million people suffered from various natural disasters throughout China, of whom 591 died or went missing, and 5.89 million people were relocated for emergency resettlement. This led to direct economic losses of 370.15 billion CNY. With the advances of data acquisition technologies, i.e. remote sensing and the Internet of Things, disaster-related data can be collected rapidly and easily. However, disaster-related data vary in acquisition methodology and, as such, vary in geographic scope and resolution; thus, how to fuse various disaster-related data is of significance for emergency disaster reduction (Liu et al. 2020). Disaster-related data are essential in understanding the impacts and costs of disasters, and data fusion plays an essential role in disaster prediction, reduction, assessment, and intelligent services. Using multisource data can improve the information availability and quality derived at various levels (Liu et al. 2018, Liu et al. 2020). Especially for emergency response, it is particularly imperative to integrate multisource data to provide the latest, accurate and timely information at various scales for disaster reduction services. For example, a large-scale landslide occurred in the Jinsha River Basin at the border of Sichuan and Tibet on 10 October 2018 and formed a barrier lake, which posed a great threat to the lives and property of people in the downstream Jinsha River region (Qiu et al. 2017, Li et al. 2020a). Using disaster multisource data fusion (Gamba 2014), spatiotemporal process simulation (Wang et al. 2020), visual analysis and risk assessment (Li et al. 2020), and disaster information intelligent services, decision-making information was generated to support disaster emergency management (Liu et al. 2018).
This special issue on Data Fusion for Integrated Disaster Reduction Intelligence Service focuses on the latest theoretical and technical issues related to disaster-related data fusion, and aims to clarify the current research progress and provide an opportunity to learn and communicate in this field. This special issue is supported by the National Key Research and Development Program of China under Grant No. 2016YFC0803101 and includes six articles spanning various topics. Specifically, an improved frequency domain integration approach is proposed that combines GNSS and accelerometers, using GNSS to gain an accurate initial position to reconstruct dynamic displacements. An online emergency mapping framework based on a disaster scenario model is introduced, covering mapping, knowledge rules, mapping templates, map symbol engines, and a simple wizard to shorten the mapping cycle in emergencies. A suitability visualisation method is realised for flood fusion 3D scenes guided by disaster information, through the fusion of basic geographic scenes, flood spatio-temporal processes, and disaster object models, helping users quickly obtain flood disaster information.
International Journal of Image and Data Fusion, vol. 12, pp. 265-267.
Citations: 1
Acknowledgement to Reviewers of the International Journal of Image and Data Fusion in 2021
IF 2.3 Q3 REMOTE SENSING Pub Date : 2021-10-02 DOI: 10.1080/19479832.2021.1995136
M. Abdelkareem, S. Auer, A. B. Pour, Jianguo Chen, Jian Cheng, M. Datcu, Huihui Feng, Shubham Gupta, M. Hashim, Maryam Imani, W. Kainz, M. S. Karoui, T. Kavzoglu, Fatemeh Kowkabi, Anil Kumar, Xue Li, Zengke Li, Feng
The editors of the International Journal of Image and Data Fusion wish to express their sincere gratitude to the following reviewers for their valued contribution to the journal in 2021. Mohamed Abdelkareem Stefan Auer Amin Beiranvand Pour Jianguo Chen Jian Cheng Mihai Datcu Huihui Feng Shubham Gupta Mazlan Hashim Maryam Imani Wolfgang Kainz Moussa Sofiane Karoui Taskin Kavzoglu Fatemeh Kowkabi Anil Kumar Xue Li Zengke Li Feng Ling Zhong Lu Arash Malekian Lamin R. Mansaray Seyed Jalaleddin Mousavirad Mircea Paul Muresan Henry Y.T. Ngan Mohammad Parsa Shengliang Pu Jinxi Qian Omeid Rahmani H. Ranjbar Wellington Pinheiro dos Santos Hadi Shahriari Huanfeng Shen Yuqi Tang Kishor Upla INTERNATIONAL JOURNAL OF IMAGE AND DATA FUSION 2021, VOL. 12, NO. 4, i–ii https://doi.org/10.1080/19479832.2021.1995136
International Journal of Image and Data Fusion, vol. 12, pp. i-ii.
Citations: 0
Unsupervised hyperspectral band selection with deep autoencoder unmixing
IF 2.3 Q3 REMOTE SENSING Pub Date : 2021-08-30 DOI: 10.1080/19479832.2021.1972047
M. Elkholy, M. Mostafa, H. M. Ebeid, M. Tolba
ABSTRACT Hyperspectral imaging (HSI) is a beneficial source of information for numerous civil and military applications, but high dimensionality and strong inter-band correlation limit HSI classification performance. Band selection aims at selecting the most informative bands to minimise the computational cost and eliminate redundant information. In this paper, we propose a new unsupervised band selection approach that benefits from the current dominant stream of deep learning frameworks. The proposed approach consists of two consecutive phases: unmixing and clustering. In the unmixing phase, we utilise a nonlinear deep autoencoder to extract accurate material spectra. In the clustering phase, we calculate the variance of each obtained endmember to construct a variance vector. Classical K-means is then adopted to cluster the variance vectors. Finally, the optimal band subset is obtained by choosing only one spectral band for each cluster. We carried out several experiments on three hyperspectral datasets to test the feasibility and generality of the proposed approach. Experimental results indicate that the proposed approach surpasses several state-of-the-art counterparts by an average of 4% in terms of overall accuracy.
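The cluster-then-pick-one-band-per-cluster idea can be sketched as follows. Note the simplifying assumptions: raw band signatures are clustered directly with plain k-means, whereas the paper clusters variance vectors derived from autoencoder endmembers, and the toy cube is synthetic.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's k-means on the rows of X."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(n_iter):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

def select_bands(cube, k):
    """Cluster the bands of an (H, W, B) cube; keep the band nearest to
    each centroid, giving k representative band indices."""
    bands = cube.reshape(-1, cube.shape[2]).T      # one row per band
    labels, centroids = kmeans(bands, k)
    chosen = []
    for j in range(k):
        idx = np.where(labels == j)[0]
        if idx.size:
            dist = ((bands[idx] - centroids[j]) ** 2).sum(-1)
            chosen.append(int(idx[dist.argmin()]))
    return sorted(chosen)

# toy cube: six bands in two spectrally distinct groups
rng = np.random.default_rng(2)
cube = np.concatenate([rng.normal(0.0, 0.01, (16, 16, 3)),
                       rng.normal(10.0, 0.01, (16, 16, 3))], axis=2)
chosen = select_bands(cube, k=2)
```

Choosing the band closest to each centroid, rather than the centroid itself, keeps the selected subset made of real measured bands.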
International Journal of Image and Data Fusion, vol. 13, pp. 244-261.
Citations: 10
GNSS-aided accelerometer frequency domain integration approach to monitor structural dynamic displacements
IF 2.3 Q3 REMOTE SENSING Pub Date : 2021-08-29 DOI: 10.1080/19479832.2021.1967468
Xu Liu, Jian Wang, Jie Zhen, Houzeng Han, C. Hancock
ABSTRACT The accelerometer frequency domain integration approach (FDIA) is being actively applied to calculate dynamic displacement responses of large engineering structures. However, it is a relative measurement, as the initial position is unavailable. GNSS offers direct displacement measurements, but has the limitation of a relatively low data rate compared with alternative measurement techniques. Therefore, this paper proposes an improved FDIA that exploits GNSS to gain accurate information about the initial position. The performance of the proposed approach is first validated through software simulation. Following the validation, a series of shaking table tests at various vibration frequencies (0.5 Hz, 1 Hz, 1.5 Hz, 2 Hz and 2.5 Hz) are performed at the south square of Beijing University of Civil Engineering and Architecture (BUCEA) using one GNSS receiver and one accelerometer. The results show that the proposed approach can effectively avoid the uncertainty of the initial value and thus enhance the direct measurement accuracy of the dynamic displacements of structures, with the root mean square error (RMSE) decreasing from 11.4 mm to 6.8 mm.
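The frequency-domain double integration at the heart of an FDIA can be sketched generically: divide the acceleration spectrum by (i2πf)² = −(2πf)² and zero the f = 0 bin, which carries the absolute offset that an accelerometer cannot observe and that GNSS supplies in the proposed scheme. This is a textbook sketch, not the authors' implementation.

```python
import numpy as np

def fdia_displacement(acc, fs):
    """Double-integrate an acceleration record in the frequency domain.

    Division by (i*2*pi*f)**2 == -(2*pi*f)**2 turns acceleration into
    displacement. The f = 0 bin is undefined (the absolute position the
    accelerometer cannot observe) and is set to zero here; in a
    GNSS-aided scheme that low-frequency/initial-position information
    would come from the GNSS receiver instead.
    """
    n = len(acc)
    spec = np.fft.rfft(acc)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    disp_spec = np.zeros_like(spec)
    nz = freqs > 0
    disp_spec[nz] = spec[nz] / (-(2.0 * np.pi * freqs[nz]) ** 2)
    return np.fft.irfft(disp_spec, n=n)

# Sanity check with a tone that falls exactly on an FFT bin:
# a(t) = -(2*pi*f0)**2 * sin(2*pi*f0*t) integrates to x(t) = sin(2*pi*f0*t).
fs, n, f0 = 100.0, 1000, 2.0          # 10 s record, f0 lands on bin 20
t = np.arange(n) / fs
acc = -((2 * np.pi * f0) ** 2) * np.sin(2 * np.pi * f0 * t)
disp = fdia_displacement(acc, fs)
err = np.max(np.abs(disp - np.sin(2 * np.pi * f0 * t)))  # numerical noise only
```

Real records would additionally need windowing and a high-pass cut-off below the structure's modal band, since measurement noise near f = 0 is amplified by the 1/f² division.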
Xu Liu, Jian Wang, Jie Zhen, Houzeng Han, C. Hancock, "GNSS-aided accelerometer frequency domain integration approach to monitor structural dynamic displacements", International Journal of Image and Data Fusion, vol. 12, no. 1, pp. 268–281, 2021-08-29. DOI: 10.1080/19479832.2021.1967468
Citations: 1
A moving ISAR-object recognition using pi-sigma neural networks based on histogram of oriented gradient of edge
IF 2.3 Q3 REMOTE SENSING Pub Date : 2021-08-12 DOI: 10.1080/19479832.2021.1953620
Asma Elyounsi, H. Tlijani, M. Bouhlel
ABSTRACT Detection and classification with traditional neural network methods such as the multilayer perceptron (MLP), feed-forward networks and back-propagation neural networks show several drawbacks, including slow convergence and an inability to cope with large images, especially radar images. As a result, these methods are being replaced by other, evolved classification methods such as Higher Order Neural Networks (HONN): the Functional Link Artificial Neural Network (FLANN), the Pi-Sigma Neural Network (PSNN), the Product Unit Neural Network (PUNN) and higher-order processing unit networks. In this paper, we address radar object detection and classification with a new strategy that uses a PSNN together with a newly proposed method, HOGE, for edge feature extraction based on morphological operators and the histogram of oriented gradients. To recognise a radar object, we extract HOG features of the object region and classify the target with the PSNN; the HOGE feature vector is used as the input of the pi-sigma network. The proposed method was tested and validated through experiments on 2D and 3D ISAR images.
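A pi-sigma network replaces a conventional hidden layer with linear summing (sigma) units whose outputs are multiplied before the activation, so a unit with k sums realises a k-th order polynomial of its input. A minimal forward pass is sketched below; sizes and weights are illustrative only, with a HOG-length input standing in for the HOGE feature vector.

```python
import numpy as np

def pi_sigma_forward(x, W, b):
    """Forward pass of a pi-sigma neural network.

    x : (n_features,) input, e.g. a HOG feature vector
    W : (k, n_features) weights of the k summing (sigma) units
    b : (k,) biases

    The hidden sigma units are linear sums; the output unit multiplies
    them (the "pi" layer, which has no trainable weights) and applies a
    sigmoid, giving a k-th order decision function.
    """
    sums = W @ x + b                     # k linear summing units
    net = np.prod(sums)                  # product (pi) unit
    return 1.0 / (1.0 + np.exp(-net))    # sigmoid output in (0, 1)

rng = np.random.default_rng(0)
hog = rng.random(36)                     # stand-in for a HOG descriptor
W = rng.normal(scale=0.1, size=(3, 36))  # a 3rd-order pi-sigma unit
b = np.zeros(3)
score = pi_sigma_forward(hog, W, b)
```

Only the sigma-layer weights are trained, which keeps the parameter count linear in the network order — the property that motivates PSNNs over fully connected higher-order networks.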
Asma Elyounsi, H. Tlijani, M. Bouhlel, "A moving ISAR-object recognition using pi-sigma neural networks based on histogram of oriented gradient of edge", International Journal of Image and Data Fusion, vol. 13, no. 1, pp. 297–315, 2021-08-12. DOI: 10.1080/19479832.2021.1953620
Citations: 3
Landslide susceptibility mapping with the fusion of multi-feature SVM model based FCM sampling strategy: A case study from Shaanxi Province
IF 2.3 Q3 REMOTE SENSING Pub Date : 2021-08-11 DOI: 10.1080/19479832.2021.1961316
Mengmeng Liu, Jiping Liu, Shenghua Xu, Tao Zhou, Yu Ma, Fuhao Zhang, M. Konečný
ABSTRACT The quality of 'non-landslide' sample data impacts the accuracy of geological hazard risk assessment. This research proposes a method to improve the performance of the support vector machine (SVM) by improving the quality of 'non-landslide' samples in the landslide susceptibility evaluation model through fuzzy c-means (FCM) clustering, so as to generate more reliable susceptibility maps. First, three selection scenarios for 'non-landslide' samples are defined: 1) select randomly from low-slope areas (scenario-SS), 2) select randomly from areas with no recorded hazards (scenario-RS), 3) obtain samples from the optimal FCM model (scenario-FCM); three sample sets are then constructed together with 10,193 landslide positive samples. Next, we compare and evaluate the performance of the three scenarios in SVM models using statistical indicators such as the proportion of disaster points, the density of disaster points, precision, the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC). Finally, the evaluation results show that the 'non-landslide' negative samples based on the FCM model are more reasonable. Furthermore, the hybrid method supported by the SVM and FCM models exhibits the highest prediction efficiency: scenario-FCM produces an overall accuracy of approximately 89.7% (AUC), followed by scenario-SS (86.7%) and scenario-RS (85.6%).
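Scenario-FCM relies on standard fuzzy c-means clustering, whose alternating membership/centre updates can be sketched as follows. The fuzzifier m, iteration count and toy data are our assumptions, not the paper's settings.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0, eps=1e-9):
    """Standard FCM: returns (centres, memberships).

    X : (n_samples, n_features); c : number of clusters; m > 1 : fuzzifier.
    Memberships u[i, k] lie in [0, 1] and sum to 1 over clusters k for
    each sample i; ambiguous (low-membership) samples can then be
    filtered out when drawing 'non-landslide' negatives.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        # Centre update: fuzzily weighted means of the samples.
        centres = (um.T @ X) / um.sum(axis=0)[:, None]
        # Membership update: u[i,k] proportional to d[i,k]^(-2/(m-1)).
        d = np.linalg.norm(X[:, None] - centres[None], axis=2) + eps
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)
    return centres, u
```

A plausible use, matching the abstract's intent, is to cluster candidate negative points and keep only those with high membership in clusters far from known landslide locations.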
Mengmeng Liu, Jiping Liu, Shenghua Xu, Tao Zhou, Yu Ma, Fuhao Zhang, M. Konečný, "Landslide susceptibility mapping with the fusion of multi-feature SVM model based FCM sampling strategy: A case study from Shaanxi Province", International Journal of Image and Data Fusion, vol. 12, no. 1, pp. 349–366, 2021-08-11. DOI: 10.1080/19479832.2021.1961316
Citations: 4