SPREAD: A large-scale, high-fidelity synthetic dataset for multiple forest vision tasks

IF 7.3 · CAS Tier 2 (Environmental Science & Ecology) · JCR Q1 (Ecology) · Ecological Informatics · Pub Date: 2025-07-01 · Epub Date: 2025-02-25 · DOI: 10.1016/j.ecoinf.2025.103085
Zhengpeng Feng, Yihang She, Srinivasan Keshav
Citations: 0

Abstract

We present the Synthetic Photo-realistic Arboreal Dataset (SPREAD), a state-of-the-art synthetic dataset specifically designed for forest-related machine learning tasks. Developed using Unreal Engine 5, SPREAD goes beyond existing synthetic forest datasets in realism, diversity, and comprehensiveness. It includes RGB and depth images, point clouds, semantic and instance segmentation labels, along with key parameters such as tree ID, location, diameter at breast height (DBH), height, and canopy diameter. In exemplary experiments, we found that SPREAD significantly reduces the need for real-world datasets in trunk segmentation tasks and enhances model segmentation performance. Specifically, after pretraining on SPREAD, MobileNetV3 and DeepLabV3 models require only 25% of the real-world fine-tuning dataset to match or even surpass the performance of ImageNet-pretrained models fine-tuned on the entire real-world dataset. Furthermore, our hybrid training experiments demonstrate that combining SPREAD and real data at a 1:1 or 2:1 ratio greatly improves task performance. For the canopy instance segmentation task, SPREAD pretraining still provides varying degrees of performance improvement for the models. All datasets, data collection frameworks, and code are available at https://github.com/FrankFeng-23/SPREAD.
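The abstract's hybrid-training result rests on mixing synthetic and real samples at a fixed ratio (1:1 or 2:1). The paper's actual data pipeline is in its repository; as an illustration only, a minimal sketch of building a 2:1 synthetic-to-real training list (function name and sampling strategy are assumptions, not the authors' implementation) might look like:

```python
import random

def mix_datasets(synthetic, real, ratio=2, seed=0):
    """Build a hybrid training list with `ratio` synthetic samples per
    real sample (ratio=2 gives the paper's 2:1 mix, ratio=1 gives 1:1).

    Keeps every real sample and draws synthetic samples with
    replacement, so the mix works even when one pool is small.
    """
    rng = random.Random(seed)
    n_synthetic = ratio * len(real)
    picks = [synthetic[rng.randrange(len(synthetic))] for _ in range(n_synthetic)]
    combined = list(real) + picks
    rng.shuffle(combined)  # interleave so batches see both domains
    return combined

# Example: 4 real images mixed with synthetic ones at 2:1
synthetic = [("syn", i) for i in range(10)]
real = [("real", i) for i in range(4)]
hybrid = mix_datasets(synthetic, real, ratio=2)
```

In a PyTorch pipeline the same idea could be expressed with `ConcatDataset` plus a weighted sampler; the fixed seed above just keeps the sketch reproducible.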
Source journal

Ecological Informatics (Environmental Science – Ecology)
CiteScore: 8.30
Self-citation rate: 11.80%
Articles per year: 346
Review time: 46 days
Journal description: The journal Ecological Informatics is devoted to the publication of high-quality, peer-reviewed articles on all aspects of computational ecology, data science, and biogeography. The scope of the journal takes into account the data-intensive nature of ecology, the growing capacity of information technology to access, harness, and leverage complex data, as well as the critical need to inform sustainable management in view of global environmental and climate change. The journal is interdisciplinary, at the crossover between ecology and informatics. It focuses on novel concepts and techniques for image- and genome-based monitoring and interpretation, sensor- and multimedia-based data acquisition, internet-based data archiving and sharing, data assimilation, and the modelling and prediction of ecological data.