Accurate large-scale vegetation segmentation is essential for maintaining vegetation inventories, which are vital for informed ecological planning, landscape management, and the long-term sustainability and liveability of the environment. Advancements in deep learning (DL), coupled with the increasing availability of airborne laser scanning (ALS) point clouds, hold significant potential for detailed, large-scale vegetation segmentation. Yet ALS-based vegetation segmentation has received limited attention, leading to ambiguity in model selection. To address this research gap, we present a comprehensive benchmark of point-based DL models for vegetation segmentation. Seven representative DL models, KPConv, RandLANet, SCFNet, PointNeXt, SPoTr, PointMetaBase, and GreenSegNet, are evaluated on three datasets: Eclair, Dales, and WHU-Urban3D. Under a ten-fold cross-validation strategy, the models show strong but inconsistent performance. KPConv records the highest mean intersection over union (mIoU) on the Eclair dataset, at 96.24%, while GreenSegNet leads on the Dales dataset, reaching 93.91%. GreenSegNet also outperforms the other models on the WHU-Urban3D dataset, achieving an mIoU of 79.27%. These findings highlight both the promise and the limitations of existing models, including the vegetation-specific GreenSegNet, which exhibits inconsistent behavior on ALS data due to point sparsity, the nadir-view perspective, and canopy occlusions. Building on these insights, we propose GreenSegNet-A, a DL architecture explicitly tailored for ALS vegetation segmentation. Incorporating a novel ALS-adaptive module, GreenSegNet-A achieves mIoU scores of 96.56% (Eclair), 94.29% (Dales), and 80.87% (WHU-Urban3D). Statistical tests confirm its efficacy, and ablation studies validate the design choices. Although the model has a slightly higher parameter count than GreenSegNet, it remains lighter than the other benchmarked models. Overall, GreenSegNet-A establishes a strong performance baseline for ALS vegetation segmentation within the scope of our evaluation. The source code is available at this URL.
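For reference, the mIoU figures above average the per-class intersection over union over all semantic classes. The sketch below is a minimal illustration of how per-point predictions can be scored this way; it is not the paper's released evaluation code, and the function name, confusion-matrix approach, and two-class example are assumptions made for illustration.

import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """Compute mIoU from per-point predicted and ground-truth labels."""
    # Confusion matrix: rows = ground truth class, columns = predicted class.
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (gt, pred), 1)

    ious = []
    for c in range(num_classes):
        tp = conf[c, c]                 # points of class c labelled c
        fp = conf[:, c].sum() - tp      # points wrongly labelled c
        fn = conf[c, :].sum() - tp      # points of class c labelled otherwise
        denom = tp + fp + fn
        if denom > 0:                   # skip classes absent from this sample
            ious.append(tp / denom)
    return float(np.mean(ious))

# Toy example with two classes (0 = non-vegetation, 1 = vegetation).
gt   = np.array([0, 0, 1, 1, 1])
pred = np.array([0, 1, 1, 1, 0])
print(f"mIoU: {mean_iou(pred, gt, 2):.4f}")  # averages per-class IoU

In the benchmark itself, such a metric would be accumulated over all test tiles of each cross-validation fold before averaging.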
