
Latest Publications: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

Precise Retrieval of Sentinel-1 Data by Minimizing the Redundancy With Greedy Algorithm
IF 4.7 CAS Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-23 DOI: 10.1109/JSTARS.2024.3485771
Kaiwen Yang;Lei Zhang;Jicang Wu;Jinsong Qian
With the widespread adoption of synthetic aperture radar (SAR) observations in Earth sciences, the volume of annual data updates has soared to petabyte scales. Consequently, the accurate retrieval and efficient storage of SAR data have become pressing concerns. Existing data search methods exhibit significant redundancy, leading to wasteful consumption of bandwidth and storage resources. To address this issue, we present an optimized retrieval method grounded in a greedy algorithm, which substantially reduces redundant data by approximately 20–65% while ensuring comprehensive data coverage over the areas of interest. By minimizing redundant data, the proposed method markedly enhances data acquisition efficiency and conserves storage space. Validation experiments with Sentinel-1 data, employing various Keyhole Markup Language (KML) scope files as inputs, affirm the effectiveness and reliability of the method. The application of the proposed method is expected to pave the way for efficient data management and fully automatic InSAR processing.
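The abstract does not give the algorithm's details, but the core idea — greedily picking the frame that adds the most uncovered area of interest (AOI) until the AOI is fully covered — can be sketched as follows. This is a simplified illustration, not the paper's method: footprints are reduced to 1-D along-track intervals, and the `Frame`/`greedy_select` names are hypothetical.

```python
# Hedged sketch of greedy, redundancy-minimizing frame selection.
# Simplification: AOI and frame footprints are 1-D along-track intervals.
from dataclasses import dataclass

@dataclass(frozen=True)
class Frame:
    frame_id: str
    start: float  # along-track start of footprint
    end: float    # along-track end of footprint

def covered_length(intervals, lo, hi):
    """Total length of [lo, hi] covered by the union of the intervals."""
    clipped = sorted((max(i.start, lo), min(i.end, hi)) for i in intervals)
    total, cursor = 0.0, lo
    for s, e in clipped:
        if s >= e or e <= cursor:
            continue  # empty after clipping, or already covered
        total += e - max(s, cursor)
        cursor = e
    return total

def greedy_select(frames, lo, hi):
    """Repeatedly add the frame that maximizes total AOI coverage."""
    chosen = []
    while covered_length(chosen, lo, hi) < hi - lo:
        base = covered_length(chosen, lo, hi)
        best = max(frames, key=lambda f: covered_length(chosen + [f], lo, hi))
        gain = covered_length(chosen + [best], lo, hi) - base
        if gain <= 0:  # remaining frames cannot extend coverage
            break
        chosen.append(best)
    return chosen
```

With four candidate frames spanning [0, 6], [4, 10], [0, 3], and [3, 7] over an AOI of [0, 10], the greedy pass keeps only the first two and discards the two redundant ones — the redundancy reduction the abstract reports in the 2-D footprint case.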
Citations: 0
Deformable Transformer and Spectral U-Net for Large-Scale Hyperspectral Image Semantic Segmentation
IF 4.7 CAS Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-23 DOI: 10.1109/JSTARS.2024.3485239
Tianjian Zhang;Zhaohui Xue;Hongjun Su
Remote sensing semantic segmentation tasks aim to automatically extract land cover types by accurately classifying each pixel. However, large-scale hyperspectral remote sensing images possess rich spectral information, complex and diverse spatial distributions, significant scale variations, and a wide variety of land cover types with detailed features, which pose significant challenges for segmentation tasks. To overcome these challenges, this study introduces DTSU-Net, a U-shaped semantic segmentation network that combines global spectral attention and a deformable Transformer for segmenting large-scale hyperspectral remote sensing images. First, convolution and global spectral attention are utilized to emphasize features with the richest spectral information, effectively extracting spectral characteristics. Second, deformable self-attention is employed to capture global-local information, addressing the complex scale and distribution of objects. Finally, deformable cross-attention is used to aggregate deep and shallow features, enabling comprehensive semantic information mining. Experiments conducted on a large-scale hyperspectral remote sensing dataset (WHU-OHS) demonstrate that: first, in different cities including Changchun, Shanghai, Guangzhou, and Karamay, DTSU-Net achieved the highest performance in terms of mIoU compared to the baseline methods, reaching 56.19%, 37.89%, 52.90%, and 63.54%, with an average improvement of 7.57% to 34.13%, respectively; second, module ablation experiments confirm the effectiveness of our proposed modules, and the deformable Transformer significantly reduces training costs compared to conventional Transformers; third, our approach achieves the highest mIoU of 57.22% across the entire dataset, with a balanced trade-off between accuracy and parameter efficiency, demonstrating an improvement of 1.65% to 56.58% compared to the baseline methods.
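The "global spectral attention" step can be read as channel attention over spectral bands in the squeeze-and-excitation style: pool each band to a scalar, pass it through a small bottleneck MLP, and reweight the bands. A minimal NumPy sketch under that assumption — the function name and the random weights are illustrative, not the paper's trained module.

```python
# Hedged sketch: spectral (channel) attention over a hyperspectral cube.
# Assumption: squeeze-and-excitation-style gating per band.
import numpy as np

def spectral_attention(x, w1, w2):
    """x: (bands, H, W) cube; w1, w2: bottleneck MLP weights.
    Returns x with each band reweighted by a learned gate in (0, 1)."""
    squeeze = x.mean(axis=(1, 2))            # global average pool per band
    hidden = np.maximum(0, w1 @ squeeze)     # ReLU bottleneck
    gate = 1 / (1 + np.exp(-(w2 @ hidden)))  # sigmoid gate per band
    return x * gate[:, None, None]
```

Bands whose pooled statistics drive the gate toward 1 dominate the downstream features — the mechanism by which the network can "emphasize features with the richest spectral information."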
Citations: 0
MSAVI-Enhanced CASA Model for Estimating the Carbon Sink in Coastal Wetland Area: A Case Study of Shandong Province
IF 4.7 CAS Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-23 DOI: 10.1109/JSTARS.2024.3485642
Huaqiao Xing;Yuqing Zhang;Linye Zhu;Na Xu;Xin Lan
Coastal wetland ecosystems are vital for carbon sequestration, making accurate carbon sink estimation essential for their protection and management. Traditional carbon sink estimation methods have overlooked the influence of moist soil on sparse vegetation, resulting in inaccurate estimates of net primary productivity (NPP), especially in coastal areas with mixed wetlands and vegetation. To address this challenge, this study proposes an improved Carnegie–Ames–Stanford approach model for NPP estimation, which utilizes the modified soil-adjusted vegetation index (MSAVI) to eliminate the background noise of moist soils and calculate the fraction of photosynthetically active radiation. Using MOD17A3 as reference data for the comparative experiment, the accuracy of the NPP results is improved by 89.6 gC·m−2. The proposed model was then used for carbon sink estimation and analysis of the Shandong coastal area. The results indicate the following: First, the average NPP_MSAVI across the Shandong coastal area was improved by 99.12 gC·m−2, 36.17%, and 60.53 gC·m−2 in bias, relative bias, and root-mean-square error, respectively. Second, the spatial distribution of net ecosystem productivity (NEP) in the Shandong coastal area is higher in the east and lower in the west, with mean values of approximately 210 gC·m−2 in the east and 60 gC·m−2 in the west. The seasonal differences in NEP among different land types are significant. Third, NEP exhibits a strong correlation with temperature, precipitation, and solar radiation, with mean r values of 0.78, 0.8, and 0.84, respectively.
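MSAVI itself has a standard closed form (Qi et al., 1994), which this sketch applies per pixel with NumPy; only the formula is standard — how the paper folds it into the CASA pipeline is not shown here. Band arrays are assumed to be surface reflectance in [0, 1].

```python
# MSAVI per pixel: designed so the soil-adjustment factor self-calibrates,
# suppressing the moist-soil background signal under sparse vegetation.
import numpy as np

def msavi(nir, red):
    """MSAVI = (2*NIR + 1 - sqrt((2*NIR + 1)^2 - 8*(NIR - RED))) / 2."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2
```

For bare or water-logged soil (NIR ≈ RED) the index collapses to 0, which is exactly the background-noise suppression the abstract relies on.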
Citations: 0
A Fast Target Detection Model for Remote Sensing Images Leveraging Roofline Analysis on Edge Computing Devices
IF 4.7 CAS Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-21 DOI: 10.1109/JSTARS.2024.3483749
Boya Zhao;Zihan Qin;Yuanfeng Wu;Yuhang Song;Haoyang Yu;Lianru Gao
Deploying image target detection algorithms on embedded devices is critical. Previous studies assumed that fewer model parameters and computations improved the inference speed. However, many models with few parameters and computations have slow inference speeds. Therefore, developing a remote sensing image target detection model that can perform real-time inference on embedded devices is required. We propose a fast target detection model for remote sensing images leveraging roofline analysis on edge computing devices (FTD-RLE). It comprises three parts: (1) We analyze the hardware characteristics of embedded devices using the roofline model and incorporate global features to design a model structure based on the operational intensity (OI) and arithmetic intensity (AI) of embedded devices. (2) The mirror ring convolution (MRC) is designed for extracting global features. The global information-aware module (GIAM) extracts local features from key areas using the global feature guidance model. The global-local feature pyramid module (GLFPM) is proposed to combine global and local features. (3) Additionally, hardware deployment and inference acceleration technologies are implemented to enable the model's deployment on edge devices. TensorRT and quantization methods are used to ensure fast inference speed. The proposed algorithm achieves an average detection accuracy of 92.3% on the VHR-10 dataset and 95.2% on the RSOD dataset. It has 1.26 M model parameters, and the inference time for processing one image on Jetson Orin NX is 8.43 ms, which is 1.90 ms and 1.98 ms faster than the mainstream lightweight algorithms ShuffleNetV2 and GhostNet, respectively.
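The roofline model the abstract leans on reduces to one line: attainable throughput is the minimum of the device's peak compute rate and memory bandwidth times operational intensity. A minimal sketch — the device numbers in the test are illustrative placeholders, not measured Jetson Orin NX specifications.

```python
# Hedged sketch of roofline analysis: a kernel is memory-bound below the
# ridge point and compute-bound above it.
def attainable_gflops(oi, peak_gflops, bandwidth_gbs):
    """oi: operational intensity in FLOPs per byte moved from memory."""
    return min(peak_gflops, bandwidth_gbs * oi)

def ridge_point(peak_gflops, bandwidth_gbs):
    """Operational intensity at which the kernel becomes compute-bound."""
    return peak_gflops / bandwidth_gbs
```

This explains the abstract's observation that small parameter and FLOP counts do not guarantee speed: a low-OI layer sits on the bandwidth slope of the roofline, so its latency is set by memory traffic, not arithmetic.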
Citations: 0
Scale-Frequency Dual-Modulation Method for Remote Sensing Image Continuous Super-Resolution
IF 4.7 CAS Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-21 DOI: 10.1109/JSTARS.2024.3483991
Shize Gao;Guoqing Wang;Baorong Xie;Xin Wei;Jue Wang;Wenchao Liu
In recent years, the development of continuous-scale super-resolution (SR) methods in the field of remote sensing (RS) has garnered significant attention. These methods deliver arbitrary-scale image SR through a single unified network. However, the majority of them employ the same feature extractor for all SR scales, which limits network performance. Furthermore, the use of a multilayer perceptron for image reconstruction loses a substantial amount of high-frequency information, which is of particular significance for RS images, and in turn produces blurred SR results. To address these issues, the scale-frequency dual-modulation network (SFMNet) is proposed as a means of achieving RS image continuous SR. First, scale modulation feature fusion can modulate different levels of feature fusion according to different scale factors, thereby fully integrating the scale information into the feature extraction process of the network. Subsequently, frequency modulation reconstruction can modulate the frequency-domain information at the root of the image reconstruction process, thereby enhancing the ability of the network to learn high-frequency information. The experimental results demonstrate that the proposed SFMNet outperforms existing RS image continuous SR methods in terms of quantitative indices and visual quality.
Citations: 0
IEEE Geoscience and Remote Sensing Society Information
IF 4.7 CAS Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-21 DOI: 10.1109/JSTARS.2024.3429951
Citations: 0
A Third-Modality Collaborative Learning Approach for Visible-Infrared Vessel Reidentification
IF 4.7 CAS Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-18 DOI: 10.1109/JSTARS.2024.3479423
Qi Zhang;Yiming Yan;Long Gao;Congan Xu;Nan Su;Shou Feng
Visible-infrared reidentification (VI-ReID) of vessels is an important component task in the application of UAV remote sensing data, aiming to retrieve images with the same identity as a given vessel from image libraries containing vessels of different modalities. One of its main challenges is the huge modality difference between visible (VIS) and infrared (IR) images. Some state-of-the-art methods design complex networks or generative methods to mitigate the modality differences, ignoring the highly nonlinear relationship between the two modalities. To solve this problem, we propose a nonlinear third-modality generator (TMG) to generate third-modality images that collaborate with the original two modalities in joint learning. In addition, to make the network focus on salient image regions and obtain rich local information, a multidimensional attention guidance (MAG) module is proposed to guide attention in both channel and spatial dimensions. By integrating TMG, MAG, and the three designed losses (generative consistency loss, cross-modality loss, and modality internal loss) into an end-to-end learning framework, we propose a network utilizing the third modality for collaborative learning, called the third-modality collaborative network (TMCN), which has strong discriminative ability and significantly reduces the modality difference between VIS and IR. In addition, due to the lack of vessel data for the VI-ReID task, we have collected an airborne vessel cross-modality reidentification dataset (AVC-ReID) to promote the practical application of the VI-ReID task. Extensive experiments on the AVC-ReID dataset show that the proposed TMCN outperforms several other state-of-the-art methods.
Citations: 0
HTCNet: Hybrid Transformer-CNN for SAR Image Denoising
IF 4.7 CAS Zone 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-18 DOI: 10.1109/JSTARS.2024.3483786
Min Huang;Shuaili Luo;Shuaihui Wang;Jinghang Guo;Jingyang Wang
Synthetic aperture radar (SAR) is extensively utilized in diverse fields, including military defense and resource exploration, due to its all-day, all-weather imaging capability. However, the extraction of information from SAR images is severely affected by speckle noise, making denoising crucial. This article proposes HTCNet, a hybrid denoising network that combines a Transformer and a convolutional neural network (CNN). Three core designs ensure its suitability for SAR image denoising: 1) The network integrates a transformer-based encoder with a CNN-based decoder, capturing both the local and global dependencies inherent in SAR images and thereby enhancing noise removal. 2) Patch embedding blocks enhance the network's perception of features at different scales. 3) Depthwise separable convolutions are fused into the Transformer block to further improve the network's ability to capture spatial information while reducing computational complexity. Experimental results show that the proposed algorithm achieves excellent denoising performance on both simulated and real SAR images. Compared to other denoising algorithms, this method efficiently removes speckle noise while preserving the texture information within the images.
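The complexity saving from depthwise separable convolutions in point 3) is easy to quantify: a standard k×k convolution costs k·k·C_in·C_out parameters, while the depthwise-separable factorization (depthwise k×k plus pointwise 1×1) costs k·k·C_in + C_in·C_out. A short counting sketch (biases ignored; the layer sizes in the test are illustrative, not HTCNet's actual configuration):

```python
# Parameter counts: standard conv vs. depthwise-separable factorization.
def standard_conv_params(k, c_in, c_out):
    """One k x k filter per (input channel, output channel) pair."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1 x 1 conv mixing channels
    return depthwise + pointwise
```

For a 3×3 layer with 64 input and 128 output channels, the factorization shrinks 73 728 parameters to 8 768 — roughly an 8× reduction, which is where the claimed drop in computational complexity comes from.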
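The depthwise separable convolutions mentioned in point 3 of the abstract above factor a standard convolution into a per-channel spatial filter followed by a 1×1 pointwise channel mix, which is where the reduction in computational complexity comes from. A minimal parameter-count comparison (illustrative only, not the paper's code):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k


def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel) + 1x1 pointwise conv."""
    return c_in * k * k + c_in * c_out


if __name__ == "__main__":
    std = conv_params(64, 64, 3)                 # 64*64*9   = 36864 weights
    dws = depthwise_separable_params(64, 64, 3)  # 576+4096  =  4672 weights
    print(std, dws, round(std / dws, 1))         # roughly an 8x reduction
```

For a k×k kernel the saving factor is approximately 1/c_out + 1/k², which for typical channel counts approaches k² — hence the near-9× reduction for 3×3 kernels.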
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 19380–19394, 2024.
Citations: 0
SingleRecon: Reconstructing Building 3-D Models of LoD1 From a Single Off-Nadir Remote Sensing Image
IF 4.7 CAS Tier 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-18 DOI: 10.1109/JSTARS.2024.3483843
Ruizhe Shao;JiangJiang Wu;Jun Li;Shuang Peng;Hao Chen;Chun Du
3-D building models are one of the most intuitive and widely used forms for understanding urban buildings. Generating 3-D building models from a single off-nadir satellite image is an economical and rapid method, particularly valuable in large-scale 3-D reconstruction scenarios with limited time. In this article, we propose a novel pipeline for automatically reconstructing level of detail 1 (LoD1) 3-D building models from a single off-nadir satellite remote sensing image. Our pipeline is built upon a multitask neural network, the off-nadir building reconstruction network (ONBuildingNet), which extracts building roof polygons and offsets from the image. Using this information, the pipeline computes the building footprint polygons and heights, constructs LoD1 building models, and then extracts textures from the off-nadir image. ONBuildingNet introduces our proposed cross-field auxiliary task and multiscale mask head to extract building roof polygons with accurate shapes. Extensive experiments demonstrate that our pipeline can automatically and rapidly construct LoD1 3-D urban building models. In addition, the proposed ONBuildingNet outperforms current state-of-the-art methods in extracting building roof polygons with more accurate shapes, thereby enhancing the accuracy of the final 3-D models produced by our pipeline. Experimental results show that our method for reconstructing 3-D models of urban building scenes achieves strong visualization quality, with an average height error of 3.3 m.
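The footprint-and-height computation described above follows from simple off-nadir viewing geometry: the roof appears displaced from the footprint, so translating the roof polygon by the predicted offset recovers the footprint, and the building height follows from the offset magnitude and the viewing angle (ground displacement d = h·tan θ). The function below is an illustrative sketch under a flat-terrain assumption, not the paper's implementation; the parameter names `gsd_m` (ground sample distance) and `off_nadir_deg` are assumptions for the example.

```python
import math


def footprint_and_height(roof_polygon, offset_px, gsd_m, off_nadir_deg):
    """Derive a LoD1 footprint polygon and height from a roof polygon.

    roof_polygon: list of (x, y) pixel vertices of the roof outline.
    offset_px: (dx, dy) roof-to-footprint offset in pixels.
    gsd_m: ground sample distance in metres per pixel.
    off_nadir_deg: off-nadir viewing angle in degrees.
    """
    dx, dy = offset_px
    # Footprint = roof polygon translated by the predicted offset.
    footprint = [(x + dx, y + dy) for x, y in roof_polygon]
    # Ground displacement d = h * tan(theta)  =>  h = d / tan(theta).
    d_m = math.hypot(dx, dy) * gsd_m
    height_m = d_m / math.tan(math.radians(off_nadir_deg))
    return footprint, height_m


if __name__ == "__main__":
    roof = [(10, 10), (20, 10), (20, 20), (10, 20)]
    fp, h = footprint_and_height(roof, (3, 4), gsd_m=0.5, off_nadir_deg=30.0)
    print(fp[0], round(h, 2))  # offset of 5 px at 0.5 m GSD, 30 deg view
```

The prism-like LoD1 model is then simply the footprint polygon extruded upward by the computed height.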
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 19588–19600, 2024.
Citations: 0
COUD: Continual Urbanization Detector for Time Series Building Change Detection
IF 4.7 CAS Tier 2 (Earth Science) Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-17 DOI: 10.1109/JSTARS.2024.3482559
Yitao Zhao;Heng-Chao Li;Sen Lei;Nanqing Liu;Jie Pan;Turgay Celik
Building change detection on remote sensing images is an important approach to monitoring urban expansion and the sustainable development of natural resources. Conventional building change detection tasks typically consider only the regions that changed between two time phases; the relevance and trend of spatiotemporal changes across multiple time phases are neglected in most cases. In this article, we propose a two-stage continual urbanization detector (COUD) for the time series urban building change detection task. The COUD method employs self-supervised pretraining for feature refinement and performs optimization through a temporal distillation approach. On this basis, it conducts multitemporal feature extraction and localization of changing regions for urban building complexes. Considering the lack of available datasets for the time series change detection task, we produce and release a time series dataset named "TSCD". The Chengdu region of China, which is partially covered by the proposed TSCD dataset, is selected as the study area. The proposed COUD method is applied to the study area to explore changing patterns from 2016 to 2022, and a comprehensive analysis is conducted in conjunction with actual planning policies published by the management department. Extensive experimental results confirm the reliability of our proposed method.
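A time series formulation differs from bitemporal change detection in that each pixel carries a whole trajectory of states rather than a single before/after pair. A minimal sketch of the downstream bookkeeping — locating changed pixels across a stack of per-epoch building masks and recording the first epoch at which each pixel changed — assuming binary masks as input (illustrative only, not the COUD network itself):

```python
def first_change_epoch(mask_stack):
    """For each pixel, return the first epoch index at which its building
    label differs from epoch 0, or -1 if it never changes.

    mask_stack: list of equal-sized 2-D binary masks (lists of lists of 0/1),
    one per acquisition epoch, in chronological order.
    """
    base = mask_stack[0]
    rows, cols = len(base), len(base[0])
    change = [[-1] * cols for _ in range(rows)]
    for t, mask in enumerate(mask_stack[1:], start=1):
        for r in range(rows):
            for c in range(cols):
                # Record only the earliest deviation from the baseline epoch.
                if change[r][c] == -1 and mask[r][c] != base[r][c]:
                    change[r][c] = t
    return change


if __name__ == "__main__":
    stack = [
        [[0, 0], [0, 1]],  # epoch 0 (e.g., 2016)
        [[0, 1], [0, 1]],  # epoch 1: new building at pixel (0, 1)
        [[1, 1], [0, 1]],  # epoch 2: new building at pixel (0, 0)
    ]
    print(first_change_epoch(stack))  # -> [[2, 1], [-1, -1]]
```

Such a per-pixel "first change" map is one simple way to summarize an urbanization trajectory before comparing it with planning records, as the study above does for Chengdu.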
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 19601–19615, 2024.
Citations: 0