{"title":"农业全景细分的跨领域挑战","authors":"Michael Halstead, Patrick Zimmer, Chris McCool","doi":"10.1177/02783649241227448","DOIUrl":null,"url":null,"abstract":"Automation in agriculture is a growing area of research with fundamental societal importance as farmers are expected to produce more and better crop with fewer resources. A key enabling factor is robotic vision techniques allowing us to sense and then interact with the environment. A limiting factor for these robotic vision systems is their cross-domain performance, that is, their ability to operate in a large range of environments. In this paper, we propose the use of auxiliary tasks to enhance cross-domain performance without the need for extra data. We perform experiments using four datasets (two in a glasshouse and two in arable farmland) for four cross-domain evaluations. These experiments demonstrate the effectiveness of our auxiliary tasks to improve network generalisability. In glasshouse experiments, our approach improves the panoptic quality of things from 10.4 to 18.5 and in arable farmland from 16.0 to 27.5; where a score of 100 is the best. To further evaluate the generalisability of our approach, we perform an ablation study using the large Crop and Weed dataset (CAW) where we improve cross-domain performance (panoptic quality of things) from 12.8 to 30.6 for the CAW dataset to our novel WeedAI dataset, and 21.2 to 36.0 from CAW to the other arable farmland dataset. Although our proposed approaches considerably improve cross-domain performance we still do not generally outperform in-domain trained systems. 
This highlights the potential room for improvement in this area and the importance of cross-domain research for robotic vision systems.","PeriodicalId":501362,"journal":{"name":"The International Journal of Robotics Research","volume":"120 17","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A cross-domain challenge with panoptic segmentation in agriculture\",\"authors\":\"Michael Halstead, Patrick Zimmer, Chris McCool\",\"doi\":\"10.1177/02783649241227448\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Automation in agriculture is a growing area of research with fundamental societal importance as farmers are expected to produce more and better crop with fewer resources. A key enabling factor is robotic vision techniques allowing us to sense and then interact with the environment. A limiting factor for these robotic vision systems is their cross-domain performance, that is, their ability to operate in a large range of environments. In this paper, we propose the use of auxiliary tasks to enhance cross-domain performance without the need for extra data. We perform experiments using four datasets (two in a glasshouse and two in arable farmland) for four cross-domain evaluations. These experiments demonstrate the effectiveness of our auxiliary tasks to improve network generalisability. In glasshouse experiments, our approach improves the panoptic quality of things from 10.4 to 18.5 and in arable farmland from 16.0 to 27.5; where a score of 100 is the best. To further evaluate the generalisability of our approach, we perform an ablation study using the large Crop and Weed dataset (CAW) where we improve cross-domain performance (panoptic quality of things) from 12.8 to 30.6 for the CAW dataset to our novel WeedAI dataset, and 21.2 to 36.0 from CAW to the other arable farmland dataset. 
Although our proposed approaches considerably improve cross-domain performance we still do not generally outperform in-domain trained systems. This highlights the potential room for improvement in this area and the importance of cross-domain research for robotic vision systems.\",\"PeriodicalId\":501362,\"journal\":{\"name\":\"The International Journal of Robotics Research\",\"volume\":\"120 17\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The International Journal of Robotics Research\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/02783649241227448\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The International Journal of Robotics Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/02783649241227448","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A cross-domain challenge with panoptic segmentation in agriculture
Automation in agriculture is a growing area of research with fundamental societal importance, as farmers are expected to produce more and better crops with fewer resources. A key enabling factor is robotic vision techniques that allow us to sense and then interact with the environment. A limiting factor for these robotic vision systems is their cross-domain performance, that is, their ability to operate in a wide range of environments. In this paper, we propose the use of auxiliary tasks to enhance cross-domain performance without the need for extra data. We perform experiments using four datasets (two in a glasshouse and two in arable farmland) for four cross-domain evaluations. These experiments demonstrate the effectiveness of our auxiliary tasks in improving network generalisability. In glasshouse experiments, our approach improves the panoptic quality of things from 10.4 to 18.5, and in arable farmland from 16.0 to 27.5, where a score of 100 is the best. To further evaluate the generalisability of our approach, we perform an ablation study using the large Crop and Weed dataset (CAW), where we improve cross-domain performance (panoptic quality of things) from 12.8 to 30.6 when transferring from CAW to our novel WeedAI dataset, and from 21.2 to 36.0 when transferring from CAW to the other arable farmland dataset. Although our proposed approaches considerably improve cross-domain performance, we still do not generally outperform in-domain trained systems. This highlights the potential room for improvement in this area and the importance of cross-domain research for robotic vision systems.
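The panoptic quality (PQ) scores reported above follow the standard definition: predicted and ground-truth segments are matched when their intersection-over-union exceeds 0.5, and PQ combines the mean IoU of matched pairs with a penalty for unmatched segments. A minimal sketch of this computation (the function name and the 0–100 scaling used here are illustrative, not taken from the paper's code):

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """Compute panoptic quality on a 0-100 scale.

    matched_ious: IoU values (each > 0.5) of matched prediction/ground-truth
                  segment pairs (the true positives, TP).
    num_fp:       unmatched predicted segments (false positives).
    num_fn:       unmatched ground-truth segments (false negatives).

    PQ = sum(IoU over TP) / (|TP| + 0.5*|FP| + 0.5*|FN|), scaled by 100.
    """
    tp = len(matched_ious)
    denom = tp + 0.5 * num_fp + 0.5 * num_fn
    if denom == 0:
        return 0.0  # no segments at all: define PQ as 0
    return 100.0 * sum(matched_ious) / denom


# Perfect segmentation: two exact matches, no spurious or missed segments.
print(panoptic_quality([1.0, 1.0], 0, 0))  # 100.0
# One good match (IoU 0.8), one false positive, one false negative.
print(panoptic_quality([0.8], 1, 1))       # 40.0
```

The "panoptic quality of things" figures in the abstract are this metric restricted to countable object classes ("things", e.g. individual crops and weeds), as opposed to amorphous "stuff" regions such as soil.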