
Journal of Applied Remote Sensing: Latest Publications

ComS-YOLO: a combinational and sparse network for detecting vehicles in aerial thermal infrared images
IF 1.7 | CAS Tier 4 (Earth Science) | JCR Q4 (Environmental Sciences) | Pub Date: 2024-02-01 | DOI: 10.1117/1.jrs.18.014508
Xunxun Zhang, Xiaoyu Lu
Vehicle detection in aerial thermal infrared images has received significant attention because of its strong day-and-night observation capability, which supplies information for vehicle tracking, traffic monitoring, and road network planning. Compared with aerial visible images, aerial thermal infrared images are insensitive to lighting conditions; however, they have low contrast and blurred edges. Therefore, a combinational and sparse you-only-look-once (ComS-YOLO) neural network is proposed to detect vehicles in aerial thermal infrared images accurately and quickly. We adjust the structure of the deep neural network to balance detection accuracy and running time. In addition, we propose an objective function that uses the diagonal distance of the corresponding minimum enclosing rectangle, which prevents non-convergence when the predicted and ground-truth boxes are in an inclusion relationship or share the same width and height. Furthermore, to avoid over-fitting during training, we eliminate redundant parameters via constraints and online pruning. Finally, experimental results on the NWPU VHR-10 and DARPA VIVID datasets show that the proposed ComS-YOLO network identifies vehicles effectively and efficiently, with low miss and false detection rates. Compared with Faster R-CNN and a series of YOLO networks, it achieves competitive detection accuracy and running time. Vehicle detection experiments under different environments further demonstrate the accuracy and robustness of the method.
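The objective function described above matches the widely used distance-IoU (DIoU) idea: penalize IoU by the center distance normalized by the diagonal of the minimum enclosing rectangle, which keeps a useful signal even when one box contains the other. A minimal sketch under that assumption (the authors' exact loss may differ):

```python
def diou_loss(pred, true):
    """DIoU-style loss: 1 - IoU plus the squared ratio of the center distance
    to the diagonal of the minimum enclosing rectangle.
    Boxes are (x1, y1, x2, y2). A generic sketch, not the paper's exact form."""
    # intersection area
    ix1, iy1 = max(pred[0], true[0]), max(pred[1], true[1])
    ix2, iy2 = min(pred[2], true[2]), min(pred[3], true[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_t = (true[2] - true[0]) * (true[3] - true[1])
    iou = inter / (area_p + area_t - inter)
    # squared distance between box centers
    cpx, cpy = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    ctx, cty = (true[0] + true[2]) / 2, (true[1] + true[3]) / 2
    center_d2 = (cpx - ctx) ** 2 + (cpy - cty) ** 2
    # squared diagonal of the minimum enclosing ("external") rectangle
    ex1, ey1 = min(pred[0], true[0]), min(pred[1], true[1])
    ex2, ey2 = max(pred[2], true[2]), max(pred[3], true[3])
    diag2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return 1.0 - iou + center_d2 / diag2

# A box fully inside another still yields a distance penalty:
print(round(diou_loss((1, 1, 3, 3), (0, 0, 10, 10)), 4))  # → 1.05
```

The center-distance term is what avoids the flat-gradient case of plain IoU when the boxes nest or align in width and height.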
Citations: 0
Combining multisource remote sensing data to calculate individual tree biomass in complex stands
IF 1.7 | CAS Tier 4 (Earth Science) | JCR Q4 (Environmental Sciences) | Pub Date: 2024-02-01 | DOI: 10.1117/1.jrs.18.014515
Xugang Lian, Hailang Zhang, Leixue Wang, Yulu Gao, Lifan Shi, Yu Li, Jiang Chang
Accurate estimation of individual tree characteristics and biomass is important for monitoring global carbon storage and the carbon cycle. To calculate the individual biomass of the various tree species in complex stands, we take terrestrial laser scanning data, unmanned aerial vehicle laser scanning data, and multispectral data as data sources and extract spectral, vegetation index, texture, and tree height characteristics of diverse forest areas through multispectral classification of tree species. Based on the random forest (RF) algorithm, the extracted features were stacked and optimized, and the tree species were classified from the multispectral data combined with field investigation. The multispectral classification was then combined with light detection and ranging (LIDAR) point cloud data to classify the point cloud by species, individual tree parameters were extracted for each species, and stand biomass was obtained using a tree biomass model. The results showed that the tree species could be identified with the RF algorithm by combining multispectral and LIDAR data. The overall classification accuracy was 66%, and the kappa coefficient was 0.59. The recall rates of poplar, cypress, and lacebark pine were about 75%; willow and clove trees, occluded by their large crown widths, suffered multiple and missed detections. For diameter at breast height, R2 was 0.85 and the root-mean-square error (RMSE) was 5.90 cm; for tree height, R2 was 0.90 and the RMSE was 1.78 m. Finally, the biomass of each tree species was calculated; the stand biomass was 66.76 t/hm2, realizing classification of the whole stand and measurement of each tree's biomass. Our study demonstrates that combining multisource remote sensing data for forest biomass estimation is feasible.
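The final step, converting per-tree parameters (DBH, height) into stand biomass, can be sketched with a generic allometric model. The coefficients below are hypothetical placeholders; the species-specific biomass models used in the paper are not reproduced here:

```python
# Hypothetical allometric coefficients (a, b, c); the paper's species-specific
# models and coefficient values are not given here.
def tree_biomass_kg(dbh_cm, height_m, a=0.05, b=2.0, c=1.0):
    """Generic allometric biomass model: AGB = a * DBH^b * H^c (kg)."""
    return a * dbh_cm ** b * height_m ** c

def stand_biomass_t_per_ha(trees, area_ha):
    """Sum per-tree biomass (kg) over a plot and convert to t/ha."""
    total_kg = sum(tree_biomass_kg(dbh, h) for dbh, h in trees)
    return total_kg / 1000.0 / area_ha

plot = [(25.0, 18.0), (30.0, 20.0), (12.5, 9.5)]  # (DBH in cm, height in m)
print(round(stand_biomass_t_per_ha(plot, 0.1), 3))  # → 15.367
```

DBH and height here would come from the LIDAR-derived individual tree parameters described above, with separate coefficient sets per classified species.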
Citations: 0
Optimal feature extraction from multidimensional remote sensing data for orchard identification based on deep learning methods
IF 1.7 | CAS Tier 4 (Earth Science) | JCR Q4 (Environmental Sciences) | Pub Date: 2024-02-01 | DOI: 10.1117/1.jrs.18.014514
Junjie Luo, Jiao Guo, Zhe Zhu, Yunlong Du, Yongkai Ye
Accurate orchard spatial distribution information can help government departments formulate scientific and reasonable agricultural economic policies, yet obtaining orchard planting structure information from remote sensing images remains challenging. In traditional multidimensional remote sensing data processing, dimension reduction and classification are two separate steps, so there is no guarantee that the final classification results benefit from the dimension reduction. To link dimension reduction and classification, this work proposes two neural networks that fuse a stacked autoencoder and a convolutional neural network (CNN) in one dimension and three dimensions: the one-dimension and three-dimension fusion stacked autoencoder (FSA) and CNN networks (1D-FSA-CNN and 3D-FSA-CNN). In both networks, the front end uses a stacked autoencoder (SAE) for dimension reduction, and the back end uses a CNN with a softmax classifier for classification. In the experiments, two groups of orchard datasets are constructed on the Google Earth Engine platform using multi-source remote sensing data (GaoFen-1 with Sentinel-2, and GaoFen-1 with GaoFen-3). DenseNet201, 3D-CNN, 1D-CNN, and SAE are used to conduct two comparative experiments. The experimental results show that the proposed fusion networks achieve state-of-the-art performance; the accuracies of both 3D-FSA-CNN and 1D-FSA-CNN exceed 95%.
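The core idea, training the dimension-reduction front end jointly with the classifier so that the reduced features serve classification, can be illustrated with a toy stand-in: a linear "encoder" and a softmax classifier optimized together by gradient descent. The paper's SAE + CNN networks are far larger; this sketch only shows the joint-training principle on synthetic data:

```python
import numpy as np

# Toy joint dimension reduction + classification: a linear "encoder"
# (16 -> 4) and a softmax classifier (4 -> 2) trained together, so the
# reduced features are optimized for the task rather than fixed beforehand.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                 # 200 synthetic 16-band pixels
y = (X[:, :8].sum(axis=1) > 0).astype(int)     # separable 2-class labels
W_enc = rng.normal(scale=0.1, size=(16, 4))
W_cls = rng.normal(scale=0.1, size=(4, 2))

def forward(X):
    Z = X @ W_enc                              # reduced features
    logits = Z @ W_cls
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return Z, e / e.sum(axis=1, keepdims=True)

for _ in range(500):                           # joint gradient descent
    Z, P = forward(X)
    G = P.copy()
    G[np.arange(len(y)), y] -= 1.0             # d(softmax cross-entropy)/d(logits)
    G /= len(y)
    W_cls -= 0.5 * (Z.T @ G)
    W_enc -= 0.5 * (X.T @ (G @ W_cls.T))       # gradient flows into the encoder

_, P = forward(X)
acc = (P.argmax(axis=1) == y).mean()
print("training accuracy:", acc)
```

Because the classification gradient reaches the encoder weights, the low-dimensional features are shaped by the classification objective, which is the connection the two-step pipeline lacks.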
Citations: 0
Use of synthetic aperture radar data for the determination of normalized difference vegetation index and normalized difference water index
IF 1.7 | CAS Tier 4 (Earth Science) | JCR Q4 (Environmental Sciences) | Pub Date: 2024-02-01 | DOI: 10.1117/1.jrs.18.014516
Amazonino Lemos de Castro, Miqueias Lima Duarte, Henrique Ewbank, Roberto Wagner Lourenço
This study analyzed Sentinel-1 synthetic aperture radar (SAR) data to estimate the normalized difference vegetation index (NDVI) and the normalized difference water index (NDWI) during 2019 to 2020 in a region with a range of land uses. Four regression models were constructed: linear regression (LR), support vector machine (SVM), random forest (RF), and artificial neural network (ANN). These models estimate the vegetation indices from Sentinel-1 backscattering data, used as the independent variables; the NDVI and NDWI derived from Sentinel-2 data served as the dependent variables. Cross-validation with an analysis of performance metrics was applied to identify the most effective model. Based on the post-hoc test, the SVM model performed best in estimating NDVI and NDWI, with mean R2 values of 0.74 and 0.70, respectively. Notably, the backscattering coefficients of the vertical-vertical (VV) and vertical-horizontal (VH) polarizations contributed most to the models, reinforcing the importance of these parameters for estimation accuracy. Ultimately, this approach is promising for building NDVI and NDWI time series in regions frequently affected by cloud cover, a valuable complement to optical sensor data. This integration is particularly valuable for monitoring agricultural crops.
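For reference, the regression targets and the LR baseline can be sketched as follows. NDVI is standard; the NDWI band pair shown is one common (Gao-style) choice and is an assumption, since the abstract does not restate it; the VV/VH data below are simulated, not Sentinel-1 measurements:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - red) / (NIR + red), from optical reflectances."""
    return (nir - red) / (nir + red)

def ndwi(nir, swir):
    """NDWI = (NIR - SWIR) / (NIR + SWIR); this Gao-style band pair is an
    assumption, as the band pair is not restated in the abstract."""
    return (nir - swir) / (nir + swir)

# Minimal stand-in for the LR baseline: least-squares fit of NDVI on
# simulated VV/VH backscatter (dB); the SVM/RF/ANN models replace this map.
rng = np.random.default_rng(1)
vv = rng.uniform(-15.0, -5.0, 100)
vh = rng.uniform(-25.0, -15.0, 100)
target = 0.02 * vv + 0.03 * vh + 1.2                 # synthetic linear "NDVI"
A = np.column_stack([vv, vh, np.ones_like(vv)])
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
print(np.allclose(coef, [0.02, 0.03, 1.2]))          # → True
```

On real data the VV/VH-to-index relation is nonlinear and noisy, which is why the kernel and ensemble models outperform the linear fit in the study.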
Citations: 0
Unsupervised burned areas detection using multitemporal synthetic aperture radar data
IF 1.7 | CAS Tier 4 (Earth Science) | JCR Q4 (Environmental Sciences) | Pub Date: 2024-02-01 | DOI: 10.1117/1.jrs.18.014513
José Victor Orlandi Simões, Rogerio Galante Negri, Felipe Nascimento Souza, Tatiana Sussel Gonçalves Mendes, Adriano Bressane
Climate change, driven largely by human activities and the resulting rise in greenhouse gas emissions, is a critical concern. Its effects reach both living and non-living components of ecosystems, with alarming outcomes such as a surge in the frequency and severity of fires. This paper presents a data-driven framework that unifies time series of remote sensing images, statistical modeling, and unsupervised classification for mapping fire-damaged areas. To validate the proposed methodology, multiple remote sensing images acquired by the Sentinel-1 satellite between August and October 2021 were collected and analyzed in two case studies of Brazilian biomes affected by burns. Our results demonstrate that the proposed approach outperforms the other evaluated method in precision metrics and visual adherence, achieving the highest overall accuracy of 58.15% and the highest F1 score of 0.72. These findings suggest that our approach is more effective in detecting burned areas and may have practical applications in other environmental issues such as landslides, flooding, and deforestation.
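A minimal two-date stand-in for this kind of unsupervised SAR change detection is the classic log-ratio operator with a threshold; the paper's framework uses a full time series plus statistical modeling and unsupervised classification, which this sketch only hints at (all data below are simulated):

```python
import numpy as np

# Generic log-ratio change indicator for a simulated two-date SAR pair,
# followed by a simple threshold; a stand-in, not the paper's framework.
rng = np.random.default_rng(2)
before = rng.gamma(shape=4.0, scale=0.05, size=(64, 64))  # speckled backscatter
after = before.copy()
after[20:40, 20:40] *= 0.25                               # "burned" patch darkens

log_ratio = np.abs(np.log(after / before))                # change magnitude
mask = log_ratio > 0.7                                    # illustrative threshold
print(mask[25, 25], mask[5, 5])                           # → True False
```

In an unsupervised setting the threshold would come from the data themselves (e.g., a clustering or statistical model of the change-magnitude distribution) rather than a fixed constant.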
Citations: 0
Ningaloo eclipse: moon shadow speed and land surface temperature effects from Himawari-9 satellite measurements
IF 1.7 | CAS Tier 4 (Earth Science) | JCR Q4 (Environmental Sciences) | Pub Date: 2024-02-01 | DOI: 10.1117/1.jrs.18.014511
Fred Prata
A total solar eclipse occurred on April 20, 2023, with the umbral shadow touching the Australian continent over the Ningaloo coastal region near the town of Exmouth, Western Australia. Totality lasted ∼1 min, was reached at ∼03:29 UTC, and occurred under cloudless skies. Here, we show that the speed of the Moon's shadow over the land surface can be estimated from 10 min sampling in both the infrared and visible bands of the Himawari-9 geostationary satellite sensor. The cooling of the land surface due to the passage of the Moon's shadow is investigated, and temperature drops of 7 K to 15 K are found, with cooling rates of 2±1.5 mK s−1. By tracking the time of maximum cooling, the speed of the Moon's shadow was estimated from the thermal data to be 2788±21 km h−1, and from the time of minimum reflectance in the visible data to be 2598±181 km h−1, with a notable time dependence. The methodology and analyses are new, and the results compare favorably with NASA's eclipse data computed using Besselian elements.
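The shadow-speed estimate reduces to distance over time: if the cooling minimum occurs at times t1 and t2 at two ground pixels a known great-circle distance apart, the speed is distance / (t2 − t1). A sketch with invented coordinates and timings, not the paper's values:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance between two (lat, lon) points in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def shadow_speed_kmh(p1, p2, t1_min, t2_min):
    """Speed implied by cooling minima t1, t2 (minutes) at pixels p1, p2."""
    d = haversine_km(*p1, *p2)
    return d / ((t2_min - t1_min) / 60.0)

# Two hypothetical pixels roughly 470 km apart whose cooling minima are
# observed 10 min apart (one sampling interval):
print(round(shadow_speed_kmh((-22.0, 114.0), (-20.0, 118.0), 0.0, 10.0)))
```

With 10 min sampling, the timing of each pixel's cooling minimum would in practice be refined by interpolating the temperature time series rather than taking the raw sample times.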
Citations: 0
SMFD: an end-to-end infrared and visible image fusion model based on shared-individual multi-scale feature decomposition
IF 1.7 | CAS Tier 4 (Earth Science) | JCR Q4 (Environmental Sciences) | Pub Date: 2024-02-01 | DOI: 10.1117/1.jrs.18.022203
Mingrui Xu, Jun Kong, Min Jiang, Tianshan Liu
By leveraging the characteristics of different optical sensors, infrared and visible image fusion generates a fused image that combines prominent thermal radiation targets with clear texture details. Existing methods often focus on a single modality or treat the two modalities equally, overlooking the distinctive characteristics of each and failing to fully utilize their complementary information. To address this problem, we propose an end-to-end infrared and visible image fusion model based on shared-individual multi-scale feature decomposition. First, to extract multi-scale features from the source images, a symmetric multi-scale decomposition encoder consisting of nest connections and a multi-scale receptive field network is designed to capture small-, medium-, and large-scale features. Second, to sufficiently utilize complementary information, common edge feature maps are introduced into the feature decomposition loss function to decompose the extracted features into shared and individual features. Third, to aggregate the shared and individual features, a shared-individual self-augmented decoder takes the individual fusion feature maps as the main input and the shared fusion feature maps as the residual input, assisting the decoding process and reconstructing the fused image. Finally, comparisons of subjective evaluations and objective metrics demonstrate the superiority of our method over state-of-the-art approaches.
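The shared/individual split guided by common edge maps can be illustrated with a toy rule: take the pixelwise minimum of the two modalities' gradient magnitudes as the "shared" edge evidence and the residuals as "individual" features. The paper learns this split with a feature-decomposition loss inside a deep network; the rule below is only an interpretable stand-in on synthetic images:

```python
import numpy as np

def edge_map(img):
    """Gradient-magnitude edge map (simple common edge evidence)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

rng = np.random.default_rng(3)
ir = rng.random((32, 32))
vis = rng.random((32, 32))
ir[8:24, 8:24] += 2.0        # a target whose outline appears in both modalities
vis[8:24, 8:24] += 2.0

shared = np.minimum(edge_map(ir), edge_map(vis))  # edges both modalities agree on
ind_ir = edge_map(ir) - shared                    # infrared-specific residue
ind_vis = edge_map(vis) - shared                  # visible-specific residue
# The shared map is strongest along the common target outline:
print(shared[8, 8:24].mean() > shared[16, 9:23].mean())
```

In the model, the shared features (supported by both inputs) feed the decoder as a residual stream while the individual features carry each sensor's unique content.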
Citations: 0
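The SMFD abstract above builds on multi-scale feature decomposition. As an illustration of the underlying idea only (not the authors' network), here is a minimal numpy sketch of a classical Laplacian pyramid, which splits an image into scale-specific detail bands plus a coarse base and recombines them losslessly:

```python
import numpy as np

def downsample(x):
    """2x2 average pooling (assumes even height and width)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour 2x upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels=3):
    """Decompose an image into per-scale detail bands plus a coarse base."""
    bands, current = [], img
    for _ in range(levels):
        coarse = downsample(current)
        bands.append(current - upsample(coarse))  # detail lost at this scale
        current = coarse
    bands.append(current)                         # coarsest approximation
    return bands

def reconstruct(bands):
    """Invert the decomposition: upsample the base, add back each detail band."""
    current = bands[-1]
    for detail in reversed(bands[:-1]):
        current = upsample(current) + detail
    return current

rng = np.random.default_rng(0)
img = rng.random((16, 16))
bands = laplacian_pyramid(img, levels=3)
assert np.allclose(reconstruct(bands), img)  # the decomposition is lossless
```

Because each detail band stores exactly the residual discarded by downsampling, reconstruction is exact by construction, which is the property that makes scale-separated fusion of two such decompositions feasible.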
Segmentation-based VHR SAR images built-up area change detection: a coarse-to-fine approach
IF 1.7 CAS Tier 4 (Earth Sciences) Q4 ENVIRONMENTAL SCIENCES Pub Date : 2024-01-01 DOI: 10.1117/1.jrs.18.016503
Jingxing Zhu, Feng Wang, Hongjian You
Change detection in built-up areas within very high resolution synthetic aperture radar images is a challenging task due to speckle noise and geometric distortions caused by the unique imaging mechanism. To tackle this issue, we propose an object-based coarse-to-fine change detection method that integrates segmentation and uncertainty analysis techniques. First, we propose a multi-temporal joint multi-scale segmentation method for generating multi-scale segmentation masks with hierarchical nested relationships. Second, we use the neighborhood ratio detector and the Jensen–Shannon distance to produce pixel-level and object-level change maps, respectively. These maps are fused using the Dempster–Shafer evidence theory, resulting in an initial change map. We then apply a threshold to classify parcels within the initial change map into three categories: changed, unchanged, and uncertain. Third, we perform uncertainty analysis and implement progressive classification with a support vector machine for uncertain parcels, moving from coarse to fine segmentation levels. Finally, we integrate change maps across all scales to obtain the final change map. The proposed method is evaluated on three datasets from the GF-3 and ICEYE-X6 satellites. The results show that our approach outperforms alternative methods in extracting more comprehensive changed regions.
{"title":"Segmentation-based VHR SAR images built-up area change detection: a coarse-to-fine approach","authors":"Jingxing Zhu, Feng Wang, Hongjian You","doi":"10.1117/1.jrs.18.016503","DOIUrl":"https://doi.org/10.1117/1.jrs.18.016503","url":null,"abstract":"The change detection in built-up areas within very high resolution synthetic aperture radar images is a very challenging task due to speckle noise and geometric distortions caused by the unique imaging mechanism. To tackle this issue, we propose an object-based coarse-to-fine change detection method that integrates segmentation and uncertainty analysis techniques. First, we propose a multi-temporal joint multi-scale segmentation method for generating multi-scale segmentation masks with hierarchical nested relationships. Second, we use the neighborhood ratio detector and Jensen–Shannon distance to produce both pixel-level and object-level change maps, respectively. These maps are fused using the Demeter–Shafer evidence theory, resulting in an initial change map. We then apply a threshold to classify parcels within the initial change map into three categories: changed, unchanged, and uncertain. Third, we perform uncertainty analysis and implement progressive classification by support vector machine for uncertain parcels, moving from coarse to fine segmentation levels. Finally, we integrate change maps across all scales to obtain the final change map. The proposed method is evaluated on three datasets from the GF-3 and ICEYE-X6 satellites. 
The results show that our approach outperforms alternative methods in extracting more comprehensive changed regions.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":"101 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139422836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
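The abstract above fuses pixel-level and object-level evidence with Dempster–Shafer theory. Here is a minimal sketch of Dempster's rule of combination over the two-hypothesis frame {changed, unchanged}; the mass values below are hypothetical, not taken from the paper:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination over the frame {changed, unchanged}.
    'c' = changed, 'u' = unchanged, 'theta' = the whole frame (ignorance).
    m1 and m2 are basic probability assignments whose masses sum to 1."""
    keys = ('c', 'u', 'theta')

    def intersect(a, b):
        if a == b:
            return a
        if a == 'theta':
            return b
        if b == 'theta':
            return a
        return None  # c and u are disjoint: conflicting evidence

    combined = {k: 0.0 for k in keys}
    conflict = 0.0
    for a in keys:
        for b in keys:
            mass = m1[a] * m2[b]
            target = intersect(a, b)
            if target is None:
                conflict += mass
            else:
                combined[target] += mass
    # renormalise by the total non-conflicting mass (1 - K)
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# hypothetical masses for one parcel, not values from the paper
pixel_level = {'c': 0.6, 'u': 0.1, 'theta': 0.3}   # e.g. from the ratio detector
object_level = {'c': 0.5, 'u': 0.2, 'theta': 0.3}  # e.g. from the JS distance
fused = dempster_combine(pixel_level, object_level)
```

Note how agreeing sources reinforce each other: the fused belief in "changed" exceeds what either source assigned on its own, which is what makes the fused map a sharper starting point for the subsequent thresholding into changed/unchanged/uncertain parcels.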
Multiscale graph convolution residual network for hyperspectral image classification
IF 1.7 CAS Tier 4 (Earth Sciences) Q4 ENVIRONMENTAL SCIENCES Pub Date : 2024-01-01 DOI: 10.1117/1.jrs.18.014504
Ao Li, Yuegong Sun, Cong Feng, Yuan Cheng, Liang Xi
In recent years, graph convolutional networks (GCNs) have attracted increased attention in hyperspectral image (HSI) classification by utilizing the data and their connection graph. However, most existing GCN-based methods have two main drawbacks. First, a graph built from pixel-level nodes loses much useful spatial information, while the large graph size incurs a high computational cost. Second, the joint spatial–spectral structure hidden in the HSI is not fully explored for better neighbor correlation preservation, which prevents the GCN from achieving promising performance on discriminative feature extraction. To address these problems, we propose a multiscale graph convolutional residual network (MSGCRN) for HSI classification. First, to explore the local spatial–spectral structure, superpixel segmentation is performed on the spectral principal component of the HSI at different scales. Thus, the obtained multiscale superpixel areas can capture rich spatial texture divisions. Second, multiple superpixel-level subgraphs are constructed with adaptive weighted node aggregation, which not only effectively reduces the graph size but also preserves local neighbor correlation at varying subgraph scales. Finally, a graph convolution residual network is designed to extract multiscale hierarchical features, which are further integrated into the final discriminative features for HSI classification via a diffusion operation. Moreover, a mini-batch branch is added to the large-scale superpixel branch of the MSGCRN to further reduce the computational cost. Extensive experiments on three public HSI datasets demonstrate the advantages of our MSGCRN model over several cutting-edge approaches.
{"title":"Multiscale graph convolution residual network for hyperspectral image classification","authors":"Ao Li, Yuegong Sun, Cong Feng, Yuan Cheng, Liang Xi","doi":"10.1117/1.jrs.18.014504","DOIUrl":"https://doi.org/10.1117/1.jrs.18.014504","url":null,"abstract":"In recent years, graph convolutional networks (GCNs) have attracted increased attention in hyperspectral image (HSI) classification through the utilization of data and their connection graph. However, most existing GCN-based methods have two main drawbacks. First, the constructed graph with pixel-level nodes loses many useful spatial information while high computational cost is required due to large graph size. Second, the joint spatial-spectral structure hidden in HSI are not fully explored for better neighbor correlation preservation, which limits the GCN to achieve promising performance on discriminative feature extraction. To address these problems, we propose a multiscale graph convolutional residual network (MSGCRN) for HSI classification. First, to explore the local spatial–spectral structure, superpixel segmentation is performed on the spectral principal component of HSI at different scales. Thus, the obtained multiscale superpixel areas can capture rich spatial texture division. Second, multiple superpixel-level subgraphs are constructed with adaptive weighted node aggregation, which not only effectively reduces the graph size, but also preserves local neighbor correlation in varying subgraph scales. Finally, a graph convolution residual network is designed for multiscale hierarchical features extraction, which are further integrated into the final discriminative features for HSI classification via a diffusion operation. Moreover, a mini-batch branch is adopted to the large-scale superpixel branch of MSGCRN to further reduce computational cost. 
Extensive experiments on three public HSI datasets demonstrate the advantages of our MSGCRN model compared to several cutting-edge approaches.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":"7 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139495301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
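MSGCRN is built from graph convolutions over superpixel-level nodes. The sketch below is not the MSGCRN architecture itself — just a minimal numpy implementation of one symmetrically normalised graph-convolution layer (Kipf–Welling style) applied to a hypothetical four-node superpixel graph:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution layer: relu(D^-1/2 (A + I) D^-1/2 @ H @ W),
    i.e. symmetrically normalised propagation followed by a ReLU."""
    a_tilde = adj + np.eye(adj.shape[0])               # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_tilde.sum(axis=1))    # degree^-1/2
    a_hat = a_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_hat @ feats @ weight, 0.0)     # ReLU activation

# toy superpixel graph: 4 nodes connected in a chain (hypothetical example)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(1)
feats = rng.random((4, 5))    # a 5-dim spectral feature per superpixel node
weight = rng.random((5, 3))   # learnable projection to a 3-dim hidden space
out = gcn_layer(adj, feats, weight)
```

Each output row mixes a node's own features with those of its graph neighbors, which is why working at the superpixel level (a few thousand nodes) rather than the pixel level makes the propagation tractable.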
Multi-scale contrastive learning method for PolSAR image classification
IF 1.7 CAS Tier 4 (Earth Sciences) Q4 ENVIRONMENTAL SCIENCES Pub Date : 2024-01-01 DOI: 10.1117/1.jrs.18.014502
Wenqiang Hua, Chen Wang, Nan Sun, Lin Liu
Although deep learning-based methods have made remarkable achievements in polarimetric synthetic aperture radar (PolSAR) image classification, these methods require a large number of labeled samples. However, for PolSAR image classification, it is difficult to obtain a large number of labeled samples, as labeling requires extensive human labor and material resources. Therefore, a new PolSAR image classification method based on multi-scale contrastive learning is proposed, which can achieve good classification results with only a small number of labeled samples. During the pre-training process, we propose a multi-scale contrastive learning network model that exploits the characteristics of the data itself to train the network through contrastive learning. In addition, to capture richer feature information, a multi-scale network structure is introduced. In the training process, considering the diversity and complexity of PolSAR images, we design a hybrid loss function combining supervised and unsupervised information to achieve better classification performance with limited labeled samples. The experimental results on three real PolSAR datasets demonstrate that the proposed method outperforms other comparison methods, even with limited labeled samples.
{"title":"Multi-scale contrastive learning method for PolSAR image classification","authors":"Wenqiang Hua, Chen Wang, Nan Sun, Lin Liu","doi":"10.1117/1.jrs.18.014502","DOIUrl":"https://doi.org/10.1117/1.jrs.18.014502","url":null,"abstract":"Although deep learning-based methods have made remarkable achievements in polarimetric synthetic aperture radar (PolSAR) image classification, these methods require a large number of labeled samples. However, for PolSAR image classification, it is difficult to obtain a large number of labeled samples, which requires extensive human labor and material resources. Therefore, a new PolSAR image classification method based on multi-scale contrastive learning is proposed, which can achieve good classification results with only a small number of labeled samples. During the pre-training process, we propose a multi-scale contrastive learning network model that uses the characteristics of the data itself to train the network by contrastive training. In addition, to capture richer feature information, a multi-scale network structure is introduced. In the training process, considering the diversity and complexity of PolSAR images, we design a hybrid loss function combining the supervised and unsupervised information to achieve better classification performance with limited labeled samples. 
The experimental results on three real PolSAR datasets have demonstrated that the proposed method outperforms other comparison methods, even with limited labeled samples.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":"129 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139092110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
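The pre-training stage described above relies on contrastive learning. As a sketch of the generic mechanism only — an InfoNCE/NT-Xent-style loss, not the paper's exact hybrid loss function — each sample is pulled toward its positive view and pushed away from every other sample in the batch:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE / NT-Xent-style contrastive loss: row i of z1 should match
    row i of z2 (its positive view) against every other row in the batch."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives sit on the diagonal

rng = np.random.default_rng(2)
z = rng.random((8, 16))  # hypothetical batch of 8 embeddings
# with identical views, every positive has the strictly largest similarity
# in its row, so the loss falls below the chance level log(batch_size)
aligned = info_nce_loss(z, z)
assert aligned < np.log(8)
```

Because no labels appear anywhere in this loss, it can be computed from augmented views of unlabeled PolSAR pixels alone; a supervised term is what the paper's hybrid loss would add on top for the few labeled samples.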