
Journal of Applied Remote Sensing: Latest Publications

Multiscale graph convolution residual network for hyperspectral image classification
IF 1.7 | CAS Tier 4 (Earth Science) | Q2 Earth and Planetary Sciences | Pub Date: 2024-01-01 | DOI: 10.1117/1.jrs.18.014504
Ao Li, Yuegong Sun, Cong Feng, Yuan Cheng, Liang Xi
In recent years, graph convolutional networks (GCNs) have attracted increasing attention in hyperspectral image (HSI) classification because they can exploit both the data and their connection graph. However, most existing GCN-based methods have two main drawbacks. First, a graph built on pixel-level nodes discards much useful spatial information and incurs high computational cost because of its large size. Second, the joint spatial–spectral structure hidden in HSI is not fully explored for neighbor correlation preservation, which limits the ability of GCNs to extract discriminative features. To address these problems, we propose a multiscale graph convolutional residual network (MSGCRN) for HSI classification. First, to explore the local spatial–spectral structure, superpixel segmentation is performed on the spectral principal component of the HSI at different scales. The resulting multiscale superpixel regions capture rich spatial texture partitions. Second, multiple superpixel-level subgraphs are constructed with adaptive weighted node aggregation, which not only effectively reduces the graph size but also preserves local neighbor correlation at varying subgraph scales. Finally, a graph convolution residual network is designed to extract multiscale hierarchical features, which are then integrated into the final discriminative features for HSI classification via a diffusion operation. Moreover, a mini-batch branch is attached to the large-scale superpixel branch of MSGCRN to further reduce computational cost. Extensive experiments on three public HSI datasets demonstrate the advantages of our MSGCRN model over several cutting-edge approaches.
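The abstract describes superpixel-level graph convolution with residual connections but gives no implementation details. As a rough illustration only (not the authors' code), the following PyTorch sketch shows one graph-convolution layer with a residual shortcut operating on hypothetical superpixel-node features; all names, shapes, and the normalization choice are assumptions.

```python
# Minimal sketch (not the authors' code): one graph-convolution layer with a
# residual connection over superpixel-level node features. Node features X (N x F)
# and a symmetric adjacency A (N x N) are assumed to come from a prior
# superpixel segmentation step.
import torch
import torch.nn as nn

class GraphConvResidual(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.shortcut = nn.Linear(in_dim, out_dim) if in_dim != out_dim else nn.Identity()

    def forward(self, x, adj):
        # Symmetrically normalize the adjacency (with self-loops), as in a standard GCN.
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)
        d_inv_sqrt = a_hat.sum(dim=1).clamp(min=1e-8).pow(-0.5)
        a_norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
        out = torch.relu(a_norm @ self.linear(x))
        return out + self.shortcut(x)            # residual connection

# Toy usage: 50 superpixel nodes with 30 spectral-PCA features each.
x = torch.randn(50, 30)
adj = (torch.rand(50, 50) > 0.9).float()
adj = ((adj + adj.t()) > 0).float()              # make the adjacency symmetric
layer = GraphConvResidual(30, 64)
print(layer(x, adj).shape)                       # torch.Size([50, 64])
```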
Citations: 0
Multi-scale contrastive learning method for PolSAR image classification
IF 1.7 | CAS Tier 4 (Earth Science) | Q2 Earth and Planetary Sciences | Pub Date: 2024-01-01 | DOI: 10.1117/1.jrs.18.014502
Wenqiang Hua, Chen Wang, Nan Sun, Lin Liu
Although deep learning-based methods have made remarkable achievements in polarimetric synthetic aperture radar (PolSAR) image classification, they require a large number of labeled samples. For PolSAR image classification, however, obtaining a large number of labeled samples is difficult and demands extensive human labor and material resources. Therefore, a new PolSAR image classification method based on multi-scale contrastive learning is proposed, which achieves good classification results with only a small number of labeled samples. During pre-training, we propose a multi-scale contrastive learning network model that exploits the characteristics of the data itself to train the network contrastively. In addition, a multi-scale network structure is introduced to capture richer feature information. In the training process, considering the diversity and complexity of PolSAR images, we design a hybrid loss function that combines supervised and unsupervised information to achieve better classification performance with limited labeled samples. Experimental results on three real PolSAR datasets demonstrate that the proposed method outperforms other comparison methods, even with limited labeled samples.
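For illustration of the hybrid supervised/contrastive objective described above, here is a minimal PyTorch sketch, assuming a standard NT-Xent contrastive term and a simple weighted sum with cross-entropy; the weighting factor and shapes are hypothetical and not taken from the paper.

```python
# Minimal sketch (not the authors' formulation): hybrid loss combining
# supervised cross-entropy on the few labeled samples with an NT-Xent-style
# contrastive loss on two augmented views of unlabeled samples.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)     # (2N, D)
    sim = z @ z.t() / temperature                           # cosine similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self-pairs
    # the positive pair for sample i is i + n (and vice versa)
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

def hybrid_loss(logits_labeled, labels, z1, z2, alpha=0.5):
    # alpha is an assumed weighting factor, not a value from the paper
    return F.cross_entropy(logits_labeled, labels) + alpha * nt_xent(z1, z2)

# Toy usage
logits = torch.randn(8, 5)                       # 8 labeled samples, 5 classes
labels = torch.randint(0, 5, (8,))
z1, z2 = torch.randn(16, 128), torch.randn(16, 128)  # two views of 16 unlabeled samples
print(hybrid_loss(logits, labels, z1, z2))
```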
Citations: 0
Monitoring of land subsidence by combining small baseline subset interferometric synthetic aperture radar and generic atmospheric correction online service in Qingdao City, China
IF 1.7 | CAS Tier 4 (Earth Science) | Q2 Earth and Planetary Sciences | Pub Date: 2024-01-01 | DOI: 10.1117/1.jrs.18.014506
Xuepeng Li, Qiuxiang Tao, Yang Chen, Anye Hou, Ruixiang Liu, Yixin Xiao
Owing to accelerated urbanization, land subsidence has damaged urban infrastructure and impeded sustainable economic and social development in Qingdao City, China. Atmospheric correction that combines interferometric synthetic aperture radar (InSAR) with the generic atmospheric correction online service (GACOS) has not yet been investigated for land subsidence in Qingdao. Small baseline subset InSAR (SBAS InSAR), GACOS, and 28 Sentinel-1A images were combined to produce a land subsidence time series from January 2019 to December 2020 for the urban areas of Qingdao, and the spatiotemporal evolution of land subsidence before and after GACOS atmospheric correction was compared, analyzed, and verified using leveling data. Our work demonstrates that the overall surface condition of the Qingdao urban area is stable, and subsidence areas are mainly concentrated in the coastal area of Jiaozhou Bay, northwestern Jimo District, and northern Chengyang District. GACOS atmospheric correction reduced the root-mean-square error of the differential interferometric phase, and the corrected land subsidence time series agreed better with the leveling-monitored results. GACOS atmospheric correction is therefore effective for improving the accuracy of SBAS InSAR-derived land subsidence monitoring over large areas and long time series in coastal cities.
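As a hedged illustration of how a GACOS-style zenith-delay correction is typically applied to an unwrapped interferogram (the paper does not give its processing code), the NumPy sketch below converts the differential zenith total delay to slant-range phase and subtracts it; the wavelength, incidence angle, and sign convention are assumptions that must match the actual InSAR processor.

```python
# Minimal sketch (not the authors' processing chain): correcting an unwrapped
# interferometric phase with GACOS zenith total delay (ZTD) maps.
import numpy as np

WAVELENGTH = 0.0556  # Sentinel-1 C-band wavelength in meters (assumed)

def gacos_correct(unwrapped_phase, ztd_master, ztd_slave, incidence_deg=39.0):
    """Remove the differential tropospheric delay from an unwrapped interferogram.

    unwrapped_phase  : 2D array of unwrapped phase (radians)
    ztd_master/slave : co-registered GACOS zenith total delay maps (meters)
    incidence_deg    : local incidence angle used to map zenith to slant delay
    """
    slant_delay = (ztd_slave - ztd_master) / np.cos(np.deg2rad(incidence_deg))
    atmo_phase = (4.0 * np.pi / WAVELENGTH) * slant_delay   # meters -> radians (two-way path)
    return unwrapped_phase - atmo_phase

# Toy usage with synthetic 100x100 arrays
phase = np.random.randn(100, 100)
ztd1 = np.full((100, 100), 2.300)    # ~2.3 m zenith delay on the first date
ztd2 = np.full((100, 100), 2.315)
corrected = gacos_correct(phase, ztd1, ztd2)
print(corrected.shape)
```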
Citations: 0
2023 List of Reviewers
IF 1.7 | CAS Tier 4 (Earth Science) | Q2 Earth and Planetary Sciences | Pub Date: 2024-01-01 | DOI: 10.1117/1.jrs.18.010102
JARS thanks the reviewers who served the journal in 2023.
Citations: 0
Spatiotemporal fusion convolutional neural network: tropical cyclone intensity estimation from multisource remote sensing images
IF 1.7 | CAS Tier 4 (Earth Science) | Q2 Earth and Planetary Sciences | Pub Date: 2024-01-01 | DOI: 10.1117/1.jrs.18.018501
Randi Fu, Haiyan Hu, Nan Wu, Zhening Liu, Wei Jin
Utilizing multisource remote sensing images to accurately estimate tropical cyclone (TC) intensity is crucial and challenging. Traditional approaches rely on a single image for intensity estimation and lack the capability to perceive dynamic spatiotemporal information. Meanwhile, many existing deep learning methods sample from a time series of fixed length and depend on computation-intensive 3D feature extraction modules, limiting the model's flexibility and scalability. By organically linking the genesis and dissipation mechanisms of a TC with computer vision techniques, we introduce a spatiotemporal fusion convolutional neural network that integrates three distinct improvements. First, an a priori-aware nonparametric fusion module is introduced to effectively fuse key features from multisource remote sensing data. Second, we design a scale-aware contraction–expansion module that captures detailed TC features by connecting information from different scales through weighting and up-sampling. Finally, we propose a 1D–2D conditional sampling training method that balances single-step regression (for short sequences) and latent-variable-based temporal modeling (for long sequences) to achieve flexible spatiotemporal feature perception, thereby avoiding the data-scale constraint imposed by fixed sequence lengths. In qualitative and quantitative experimental comparisons, the proposed spatiotemporal fusion convolutional neural network achieved a root-mean-square error of 8.89 kt, a 29.7% improvement over the advanced Dvorak technique, and its efficacy in actual TC case analyses indicates its practical viability and potential for broader applications.
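To make the reported accuracy figures concrete, the short sketch below computes a root-mean-square error in knots and back-solves the reference RMSE implied by an 8.89 kt result being a 29.7% improvement; the sample intensity values are purely illustrative.

```python
# Minimal sketch: RMSE (in knots) of estimated TC intensities and the relative
# improvement over a reference method. Only the 8.89 kt and 29.7% figures come
# from the abstract; everything else is illustrative.
import numpy as np

def rmse(estimates, ground_truth):
    estimates, ground_truth = np.asarray(estimates), np.asarray(ground_truth)
    return float(np.sqrt(np.mean((estimates - ground_truth) ** 2)))

best_track = [45.0, 60.0, 85.0, 110.0]   # hypothetical best-track intensities (kt)
model_est  = [41.0, 66.0, 80.0, 118.0]   # hypothetical model estimates (kt)
print(f"RMSE: {rmse(model_est, best_track):.2f} kt")

# Consistency check on the reported numbers: an RMSE of 8.89 kt corresponds to
# a 29.7% reduction relative to a reference RMSE of about 12.6 kt.
reference_rmse = 8.89 / (1.0 - 0.297)
print(f"Implied reference RMSE: {reference_rmse:.2f} kt")
```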
Citations: 0
EPAWFusion: multimodal fusion for 3D object detection based on enhanced points and adaptive weights
IF 1.7 | CAS Tier 4 (Earth Science) | Q2 Earth and Planetary Sciences | Pub Date: 2024-01-01 | DOI: 10.1117/1.jrs.18.017501
Xiang Sun, Shaojing Song, Fan Wu, Tingting Lu, Bohao Li, Zhiqing Miao
Fusing LiDAR point clouds and camera images for 3D object detection in autonomous driving has emerged as a captivating research avenue. The core challenge of multimodal fusion is how to seamlessly fuse the 3D LiDAR point cloud with the 2D camera image. Although current approaches show promising results, they often rely solely on fusion at the data level, feature level, or object level, and there is still room for improvement in the utilization of multimodal information. We present an advanced and effective multimodal fusion framework called EPAWFusion for fusing the 3D point cloud and the 2D camera image at both the data level and the feature level. The EPAWFusion model consists of three key modules: a point-enhancement module based on semantic segmentation for data-level fusion, an adaptive weight allocation module for feature-level fusion, and a detector based on 3D sparse convolution. The semantic information of the 2D image is extracted using semantic segmentation, and the calibration matrix is used to establish the point-pixel correspondence. The semantic and distance information is then attached to the point cloud to achieve data-level fusion. The geometric features of the enhanced point cloud are extracted by voxel encoding, and the texture features of the image are obtained using a pretrained 2D CNN. Feature-level fusion is achieved via the adaptive weight allocation module. The fused features are fed into a 3D sparse convolution-based detector to obtain accurate 3D objects. Experimental results demonstrate that EPAWFusion outperforms the baseline network MVXNet on the KITTI dataset for 3D detection of cars, pedestrians, and cyclists by 5.81%, 6.97%, and 3.88%, respectively. In addition, EPAWFusion performs well for single-vehicle-side 3D object detection on the DAIR-V2X dataset, and the inference frame rate of the proposed model reaches 11.1 FPS. The two-level fusion of EPAWFusion significantly enhances the performance of multimodal 3D object detection.
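The data-level "point enhancement" step described above (projecting LiDAR points through the calibration matrix and attaching per-pixel semantic scores) can be illustrated with the following NumPy sketch; the projection matrix, image size, and semantic map are hypothetical, and this is not the authors' implementation.

```python
# Minimal sketch (not the authors' code): attaching image semantic scores to
# LiDAR points via a 3x4 camera projection matrix.
import numpy as np

def enhance_points(points_xyz, semantic_map, proj_3x4):
    """Append a semantic channel to each LiDAR point that projects into the image.

    points_xyz   : (N, 3) LiDAR points in the camera-referenced frame
    semantic_map : (H, W) per-pixel semantic scores from a 2D segmentation net
    proj_3x4     : (3, 4) projection matrix mapping homogeneous 3D points to pixels
    """
    h, w = semantic_map.shape
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])   # (N, 4)
    uvz = pts_h @ proj_3x4.T                                             # (N, 3)
    z = uvz[:, 2]
    u = np.round(uvz[:, 0] / z).astype(int)
    v = np.round(uvz[:, 1] / z).astype(int)
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    sem = np.zeros(points_xyz.shape[0])
    sem[valid] = semantic_map[v[valid], u[valid]]
    return np.hstack([points_xyz, sem[:, None]])                         # (N, 4)

# Toy usage with an assumed KITTI-sized image and a made-up projection matrix
pts = np.random.uniform(-10, 10, size=(1000, 3)) + np.array([0.0, 0.0, 20.0])
sem_map = np.random.rand(375, 1242)
P = np.array([[700.0, 0.0, 620.0, 0.0],
              [0.0, 700.0, 187.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
print(enhance_points(pts, sem_map, P).shape)   # (1000, 4)
```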
Citations: 0
Synthetic aperture radar image change detection using saliency detection and attention capsule network
IF 1.7 | CAS Tier 4 (Earth Science) | Q2 Earth and Planetary Sciences | Pub Date: 2024-01-01 | DOI: 10.1117/1.jrs.18.016505
Shaona Wang, Di Wang, Jia Shi, Zhenghua Zhang, Xiang Li, Yanmiao Guo
Synthetic aperture radar (SAR) image change detection, one of the research hotspots in remote sensing image processing, has been widely applied in a variety of fields. To increase the accuracy of SAR image change detection, an algorithm based on saliency detection and an attention capsule network is proposed. First, the difference image (DI) is processed using a saliency detection method, and the DI's most significant regions are extracted. Exploiting these saliency results, training samples are selected only from the DI's most salient regions while background regions are omitted, which significantly reduces the number of training samples. Second, a capsule network based on an attention mechanism is constructed: the spatial attention model extracts the salient characteristics, and the capsule network enables precise classification. Finally, the change map is obtained by classifying the images with the capsule network. To compare the proposed method with related methods, experiments are carried out on four real SAR datasets. The results show that the proposed method effectively improves the accuracy of change detection.
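The abstract does not specify how the difference image (DI) is formed; as a common baseline, the sketch below uses the log-ratio operator and a simple quantile threshold as a stand-in for the saliency step when selecting candidate training regions. It is an assumption-laden illustration, not the proposed method.

```python
# Minimal sketch (assumed operators, not the paper's): log-ratio difference
# image from two co-registered SAR intensity images, followed by keeping only
# the most responsive pixels as candidate "salient" training regions.
import numpy as np

def log_ratio_di(img1, img2, eps=1e-6):
    """Log-ratio difference image; robust to SAR multiplicative speckle."""
    return np.abs(np.log((img2 + eps) / (img1 + eps)))

def salient_mask(di, keep_fraction=0.2):
    """Keep the top `keep_fraction` most responsive pixels as 'salient'."""
    threshold = np.quantile(di, 1.0 - keep_fraction)
    return di >= threshold

# Toy usage with synthetic speckled intensities and a simulated change region
rng = np.random.default_rng(0)
t1 = rng.gamma(shape=4.0, scale=0.25, size=(256, 256))
t2 = t1.copy()
t2[100:150, 100:150] *= 3.0
di = log_ratio_di(t1, t2)
mask = salient_mask(di)
print(mask.mean())   # ~0.2 of pixels retained as training candidates
```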
Citations: 0
Plume motion characterization in unmanned aerial vehicle aerial video and imagery
IF 1.7 | CAS Tier 4 (Earth Science) | Q2 Earth and Planetary Sciences | Pub Date: 2024-01-01 | DOI: 10.1117/1.jrs.18.016501
Mehrube Mehrubeoglu, Kirk Cammarata, Hua Zhang, Lifford McLauchlan
Sediment plumes are generated by both natural and human activities in benthic environments, increasing the turbidity of the water and reducing the amount of sunlight reaching the benthic vegetation. Seagrasses, which are photosynthetic bioindicators of their environment, are threatened by chronic reductions in sunlight, impacting entire aquatic food chains. Our research uses unmanned aerial vehicle (UAV) aerial video and imagery to investigate the characteristics of sediment plumes generated by a model of anthropogenic disturbance. The extent, speed, and motion of the plumes were assessed, as these parameters pertain to the potential impacts of plume turbidity on seagrass communities. In a case study using UAV video, the turbidity plume was observed to spread more than 200 ft over 20 min of the UAV campaign, and the directional speed of the plume was estimated to be between 10.4 and 10.6 ft/min. This was corroborated by the observation that plume turbidity and sediment load were greatest near the location of the disturbance and diminished with distance. Further temporal studies are necessary to determine any long-term impacts of sediment plumes generated by human activity on seagrass beds.
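A minimal sketch of the speed estimate described above: fitting a line to plume-front positions tracked over time in georeferenced UAV frames gives a directional speed, consistent with roughly 200 ft of spread over 20 min (about 10 ft/min). The tracked positions below are hypothetical.

```python
# Minimal sketch: plume-front speed from positions tracked in UAV video frames.
# The timestamps and distances are illustrative, not the study's measurements.
import numpy as np

def front_speed(times_min, positions_ft):
    """Least-squares slope of front position vs. time (ft/min)."""
    t = np.asarray(times_min, dtype=float)
    x = np.asarray(positions_ft, dtype=float)
    slope, _ = np.polyfit(t, x, deg=1)
    return slope

times = [0, 5, 10, 15, 20]          # minutes into the UAV campaign
front = [0, 52, 105, 158, 210]      # hypothetical tracked front positions (ft)
print(f"Estimated plume speed: {front_speed(times, front):.1f} ft/min")
```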
Citations: 0
LRSNet: a high-efficiency lightweight model for object detection in remote sensing
IF 1.7 | CAS Tier 4 (Earth Science) | Q2 Earth and Planetary Sciences | Pub Date: 2024-01-01 | DOI: 10.1117/1.jrs.18.016502
Shiliang Zhu, Min Miao, Yutong Wang
Unmanned aerial vehicles (UAVs) can flexibly conduct aerial remote-sensing imaging. By employing deep learning object-detection algorithms, they efficiently perceive objects, finding widespread application in various practical engineering tasks. Consequently, UAV-based remote sensing object detection technology holds considerable research value. However, the background of UAV remote sensing images is often complex, and varying shooting angles and heights make it difficult to unify target scales and features. Moreover, numerous densely distributed small targets pose a further challenge, and UAVs face significant limitations in hardware resources. Against this background, we propose a lightweight remote sensing object detection network (LRSNet) model based on YOLOv5s. In the backbone of LRSNet, the lightweight network MobileNetV3 is used to substantially reduce the model's computational complexity and parameter count. In the model's neck, a multiscale feature pyramid network named CM-FPN is introduced to enhance the detection capability for small objects. CM-FPN comprises two key components: C3EGhost, based on GhostNet and efficient channel attention modules, and the multiscale feature fusion channel attention mechanism (MFFC). C3EGhost, serving as CM-FPN's primary feature extraction module, has lower computational complexity and fewer parameters and effectively reduces background interference. MFFC, as the feature fusion node of CM-FPN, adaptively weights the fusion of shallow and deep features, acquiring more effective detail and semantic information for object detection. Evaluated on the NWPU VHR-10, DOTA V1.0, and VisDrone-2019 datasets, LRSNet achieved mean average precisions of 94.0%, 71.9%, and 35.6%, respectively, with only 5.8 GFLOPs and 4.1 M parameters. This outcome affirms the efficiency of LRSNet in UAV-based remote-sensing object detection tasks.
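For reference, an efficient channel attention (ECA) block of the kind named among CM-FPN's building blocks can be sketched in PyTorch as below; the kernel size and usage are illustrative assumptions rather than LRSNet's exact configuration.

```python
# Minimal sketch (not the authors' code): an efficient channel attention (ECA)
# block that re-weights feature channels with a lightweight 1D convolution.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, kernel_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                          # x: (B, C, H, W)
        y = self.pool(x)                           # (B, C, 1, 1) global context
        y = y.squeeze(-1).transpose(1, 2)          # (B, 1, C): channels as a sequence
        y = self.conv(y)                           # local cross-channel interaction
        y = self.sigmoid(y).transpose(1, 2).unsqueeze(-1)   # back to (B, C, 1, 1)
        return x * y                               # re-weight channels

feat = torch.randn(2, 64, 32, 32)
print(ECA()(feat).shape)                           # torch.Size([2, 64, 32, 32])
```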
Citations: 0
Continual domain adaptation on aerial images under gradually degrading weather
IF 1.7 | CAS Tier 4 (Earth Science) | Q2 Earth and Planetary Sciences | Pub Date: 2024-01-01 | DOI: 10.1117/1.jrs.18.016504
Chowdhury Sadman Jahan, Andreas Savakis
Domain adaptation (DA) aims to reduce the effects of the distribution gap between the source domain where a model is trained and the target domain where the model is deployed. When a deep learning model is deployed on an aerial platform, it may face gradually degrading weather conditions during its operation, leading to gradually widening gaps between the source training data and the encountered target data. Because there are no existing datasets with gradually degrading weather, we generate four datasets by introducing progressively worsening clouds and snowflakes on aerial images. During deployment, unlabeled target domain samples are acquired in small batches, and adaptation is performed continually with each batch of incoming data, instead of assuming that the entire target dataset is available. We evaluate two continual DA models against a baseline standard DA model under gradually degrading conditions. All of these models are source-free, i.e., they operate without access to the source training data during adaptation. We utilize both convolutional and transformer architectures in the models for comparison. In our experiments, we find that continual DA methods perform better but sometimes encounter stability issues during adaptation. We propose gradient normalization as a simple but effective solution for managing instability during adaptation.
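The gradient-normalization remedy proposed above can be sketched as follows, assuming it amounts to rescaling the global gradient norm to a fixed value after each adaptation step; the entropy-minimization objective and hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (an assumption about the exact form): normalize the global
# gradient norm after each backward pass during source-free, per-batch
# adaptation, to keep updates from destabilizing the model.
import torch
import torch.nn as nn

def normalize_gradients(model: nn.Module, target_norm: float = 1.0, eps: float = 1e-12):
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    total_norm = torch.sqrt(sum((g.detach() ** 2).sum() for g in grads))
    scale = target_norm / (total_norm + eps)
    for g in grads:
        g.mul_(scale)

# Toy adaptation step on an unlabeled target batch (entropy minimization assumed)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
batch = torch.randn(8, 16)
probs = torch.softmax(model(batch), dim=1)
loss = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()   # prediction entropy
loss.backward()
normalize_gradients(model)     # rescale gradients before the optimizer step
optimizer.step()
optimizer.zero_grad()
print(float(loss))
```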
Citations: 0