An open-source data-driven automatic road extraction framework for diverse farmland application scenarios

Computers and Electronics in Agriculture (IF 8.9; JCR Q1, Agriculture, Multidisciplinary; CAS Tier 1, Agricultural Sciences) · Published 2025-03-26 · DOI: 10.1016/j.compag.2025.110330
Jing Shen, Yawen He, Jian Peng, Tang Liu, Chenghu Zhou
{"title":"An open-source data-driven automatic road extraction framework for diverse farmland application scenarios","authors":"Jing Shen ,&nbsp;Yawen He ,&nbsp;Jian Peng ,&nbsp;Tang Liu ,&nbsp;Chenghu Zhou","doi":"10.1016/j.compag.2025.110330","DOIUrl":null,"url":null,"abstract":"<div><div>The narrow contours of farmland roads, lack of clear boundary features with surrounding objects, and the complexity and variability of features limit the applicability of existing supervised extraction algorithms. Meanwhile, visual segmentation models represented by SAM (Segment Anything Model) can achieve zero-shot generalization with appropriate prompts but struggle to capture linear object effectively. This study introduces OSAM (OpenStreetMap SAM), which fine-tunes SAM using historical open-source datasets to enhance its ability to detect linear features. Then the OSAM framework dynamically generates prompts from the open geographic database OpenStreetMap to activate SAM, enabling autonomous detection of farmland roads without the need for additional manual annotations or assisted interactions. Experiments demonstrate that OSAM performs exceptionally well in scenarios with sparse farmland road distributions and delivers robust results even with limited training data. Specifically, OSAM achieves a F1 of 71.91 % and an IoU of 58.53 % when trained on the full dataset, significantly outperforming DLinkNet (IoU: 56.42 %) and SegFormer (IoU: 41.65 %). Even with only 1 % of the original training samples, OSAM maintains robust performance (F1: 62.26 %, IoU: 47.02 %), whereas supervised learning methods such as SegFormer, SIINet, and UNet suffer significant performance degradation under extreme data constraints. Furthermore, evaluations on remote sensing images with varying data distributions, spatial resolutions, and agricultural environments confirm that OSAM achieves high extraction accuracy and strong generalization ability. 
This framework significantly reduces reliance on large, well-balanced labeled datasets while maintaining high accuracy, making farmland road extraction more efficient and cost-effective in diverse scenarios.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"235 ","pages":"Article 110330"},"PeriodicalIF":8.9000,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers and Electronics in Agriculture","FirstCategoryId":"97","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0168169925004363","RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AGRICULTURE, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

The narrow contours of farmland roads, the lack of clear boundary features separating them from surrounding objects, and the complexity and variability of their appearance limit the applicability of existing supervised extraction algorithms. Meanwhile, visual segmentation models represented by SAM (Segment Anything Model) can achieve zero-shot generalization with appropriate prompts but struggle to capture linear objects effectively. This study introduces OSAM (OpenStreetMap SAM), which fine-tunes SAM on historical open-source datasets to enhance its ability to detect linear features. The OSAM framework then dynamically generates prompts from the open geographic database OpenStreetMap to activate SAM, enabling autonomous detection of farmland roads without additional manual annotations or assisted interactions. Experiments demonstrate that OSAM performs exceptionally well in scenarios with sparse farmland road distributions and delivers robust results even with limited training data. Specifically, OSAM achieves an F1 of 71.91% and an IoU of 58.53% when trained on the full dataset, significantly outperforming DLinkNet (IoU: 56.42%) and SegFormer (IoU: 41.65%). Even with only 1% of the original training samples, OSAM maintains robust performance (F1: 62.26%, IoU: 47.02%), whereas supervised learning methods such as SegFormer, SIINet, and UNet suffer significant performance degradation under extreme data constraints. Furthermore, evaluations on remote sensing images with varying data distributions, spatial resolutions, and agricultural environments confirm that OSAM achieves high extraction accuracy and strong generalization ability. This framework significantly reduces reliance on large, well-balanced labeled datasets while maintaining high accuracy, making farmland road extraction more efficient and cost-effective in diverse scenarios.
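The paper's OSAM implementation is not reproduced here, but the core idea of turning OpenStreetMap road geometry into SAM prompts can be sketched. The snippet below assumes a GDAL-style affine geotransform for the image tile and SAM's convention of positive point prompts as (x, y) pixel coordinates with label 1; the function names, the sampling step, and the example coordinates are all illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: convert an OSM road polyline into SAM-style point prompts.
# Coordinate layout follows the GDAL geotransform (x0, dx, 0, y0, 0, dy).

def lonlat_to_pixel(lon, lat, geotransform):
    """Map a lon/lat pair to integer pixel (col, row) via an affine geotransform."""
    x0, dx, _, y0, _, dy = geotransform
    col = (lon - x0) / dx
    row = (lat - y0) / dy  # dy is negative for north-up imagery
    return int(round(col)), int(round(row))

def osm_polyline_to_prompts(polyline, geotransform, every_n=1):
    """Sample vertices of an OSM road polyline as positive point prompts
    (label 1) in SAM's expected (x, y) pixel format."""
    points, labels = [], []
    for lon, lat in polyline[::every_n]:
        col, row = lonlat_to_pixel(lon, lat, geotransform)
        points.append([col, row])
        labels.append(1)  # 1 = foreground prompt for SAM
    return points, labels

# Toy example: a short road segment on a 0.0001-degree-per-pixel tile
gt = (116.30, 0.0001, 0.0, 40.00, 0.0, -0.0001)  # hypothetical geotransform
road = [(116.3005, 39.9995), (116.3010, 39.9990)]
pts, labs = osm_polyline_to_prompts(road, gt)
print(pts, labs)  # [[5, 5], [10, 10]] [1, 1]
```

In a real pipeline these points would be passed to a SAM predictor (e.g. as `point_coords`/`point_labels`), and denser sampling along the polyline would better constrain the narrow road mask.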

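The F1 and IoU figures quoted in the abstract are standard pixel-overlap metrics for binary segmentation masks. A minimal sketch of how such scores are computed, on toy data rather than the paper's evaluation code:

```python
# Standard IoU and F1 for binary masks, represented here as sets of
# foreground pixel indices; illustrative only, not the paper's code.

def mask_metrics(pred, gt):
    """Return (IoU, F1) between predicted and ground-truth foreground pixels."""
    pred, gt = set(pred), set(gt)
    tp = len(pred & gt)   # pixels both masks mark as road
    fp = len(pred - gt)   # predicted road, actually background
    fn = len(gt - pred)   # ground-truth road pixels that were missed
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return iou, f1

# Toy example: two masks of four pixels each, overlapping on two
iou, f1 = mask_metrics({1, 2, 3, 4}, {3, 4, 5, 6})
print(round(iou, 3), round(f1, 3))  # 0.333 0.5
```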

Source journal: Computers and Electronics in Agriculture (Engineering/Technology – Computer Science: Interdisciplinary Applications)
CiteScore: 15.30
Self-citation rate: 14.50%
Articles per year: 800
Review time: 62 days
Journal description: Computers and Electronics in Agriculture provides international coverage of advancements in computer hardware, software, electronic instrumentation, and control systems applied to agricultural challenges. Encompassing agronomy, horticulture, forestry, aquaculture, and animal farming, the journal publishes original papers, reviews, and application notes. It explores the use of computers and electronics in plant or animal agricultural production, covering topics such as agricultural soils, water, pests, controlled environments, and waste. The scope extends to on-farm post-harvest operations and relevant technologies, including artificial intelligence, sensors, machine vision, robotics, networking, and simulation modeling. Its companion journal, Smart Agricultural Technology, continues the focus on smart applications in production agriculture.