Yangmingrui Gao, Linyuan Li, Marie Weiss, Wei Guo, Ming Shi, Hao Lu, Ruibo Jiang, Yanfeng Ding, Tejasri Nampally, P. Rajalakshmi, Frédéric Baret, Shouyang Liu
DOI: 10.1016/j.isprsjprs.2024.10.007
Journal: ISPRS Journal of Photogrammetry and Remote Sensing, Volume 218, Pages 133–150
Published: 2024-10-28 (Journal Article)
Bridging real and simulated data for cross-spatial-resolution vegetation segmentation with application to rice crops
Accurate image segmentation is essential for image-based estimation of vegetation canopy traits, as it minimizes background interference. However, existing segmentation models often lack the generalization ability to effectively tackle both ground-based and aerial images across a wide range of spatial resolutions. To address this limitation, a cross-spatial-resolution image segmentation model for rice crops was trained using the integration of in-situ and in silico multi-resolution images. We collected more than 3,000 RGB images (real set) covering 17 different resolutions, reflecting diverse canopy structures, illumination conditions and backgrounds in rice fields, with vegetation pixels annotated manually. Using the previously developed Digital Plant Phenotyping Platform, we created a simulated dataset (sim set) of 10,000 RGB images with resolutions ranging from 0.5 to 3.5 mm/pixel, accompanied by corresponding mask labels. By employing a domain adaptation technique, the simulated images were further transformed into visually realistic images while preserving the original labels, creating a simulated-to-realistic dataset (sim2real set). Building upon a SegFormer deep learning model, we demonstrated that training with multi-resolution samples led to more generalized segmentation results than single-resolution training on the real dataset. Our exploration of various integration strategies revealed that a training set of 9,600 sim2real images combined with only 60 real images achieved the same segmentation accuracy as 2,400 real images (IoU = 0.819, F1 = 0.901). Moreover, combining 2,400 real images and 1,200 sim2real images resulted in the best-performing model, effective against six challenging situations, such as specular reflections and shadows.
Compared with models trained with single-resolution samples and an established model (i.e., VegANN), our model effectively improved the estimation of both green fraction and green area index across spatial resolutions. The strategy of bridging real and simulated data for cross-resolution deep learning models is expected to be applicable to other crops. The best trained model is available at https://github.com/PheniX-Lab/crossGSD-seg.
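For readers unfamiliar with the reported metrics, the IoU and F1 scores, as well as the green fraction (the share of pixels classified as vegetation), follow standard definitions for binary segmentation masks. The sketch below is a generic illustration of those definitions, not the authors' evaluation code; the function names are placeholders.

```python
import numpy as np

def segmentation_scores(pred: np.ndarray, truth: np.ndarray):
    """IoU and F1 (Dice) for binary vegetation masks (1 = vegetation)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # vegetation correctly detected
    fp = np.logical_and(pred, ~truth).sum()  # background labeled as vegetation
    fn = np.logical_and(~pred, truth).sum()  # vegetation missed
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return float(iou), float(f1)

def green_fraction(mask: np.ndarray) -> float:
    """Green fraction: proportion of pixels classified as vegetation."""
    return float(mask.astype(bool).mean())
```

For example, a 2×2 prediction `[[1, 1], [0, 0]]` scored against ground truth `[[1, 0], [0, 0]]` gives one true positive, one false positive and no false negatives, i.e. IoU = 0.5 and F1 ≈ 0.667, while the ground-truth green fraction is 0.25.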
Journal Introduction:
The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) serves as the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It acts as a platform for scientists and professionals worldwide who are involved in various disciplines that utilize photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal aims to facilitate communication and dissemination of advancements in these disciplines, while also acting as a comprehensive source of reference and archive.
P&RS publishes high-quality, peer-reviewed research papers that are original and have not been published elsewhere. These papers can cover scientific/research, technological development, or application/practical aspects. Additionally, the journal welcomes papers based on presentations from ISPRS meetings, provided they are significant contributions to the aforementioned fields.
In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.