Deep-Learning from Mistakes: Automating Cloud Class Refinement for Sky Image Segmentation
Gemma Dianne, A. Wiliem, B. Lovell
2019 Digital Image Computing: Techniques and Applications (DICTA), pp. 1-8, December 2019. DOI: 10.1109/DICTA47822.2019.8946028
There is considerable research effort directed toward ground-based cloud detection due to its many applications in air traffic control, cloud-track wind data monitoring, and solar-power forecasting, to name a few. Key challenges identified consistently in the literature are glare, varied illumination, poorly defined boundaries, and thin, wispy clouds. At this time there is one significant research database for cloud segmentation: the SWIMSEG database [1], which consists of 1013 images and the corresponding ground truths. While investigating the limitations around detecting thin cloud, we found significant ambiguity even within this high-quality, hand-labelled research dataset. This is to be expected, as the task of tracing cloud boundaries is subjective. We propose capitalising on these inconsistencies by utilising robust deep-learning techniques, which have recently been shown to be effective on this data. By implementing a two-stage training strategy, validated on the smaller HYTA dataset, we leverage the mistakes made in the first stage of training to refine class features in the second. This approach is based on the assumption that the majority of mistakes made in the first stage correspond to thin-cloud pixels. Our experimental results indicate that this assumption holds: the two-stage process produces quality results while also proving robust when extended to unseen data.
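The abstract does not give code, but the core relabelling step between the two training stages can be sketched in a few lines. The idea, as described, is that pixels the stage-one model misclassifies are assumed to be mostly thin cloud, so they can be promoted to a third class before stage-two training. The class ids and the function name below are hypothetical, not from the paper; this is a minimal illustration with NumPy arrays standing in for label masks.

```python
import numpy as np

# hypothetical class ids: the original labels are binary (sky/cloud);
# stage-one mistakes become a new "thin cloud" class for stage two
SKY, CLOUD, THIN_CLOUD = 0, 1, 2

def refine_labels(ground_truth, stage1_pred):
    """Relabel pixels that the stage-one model got wrong as THIN_CLOUD,
    under the paper's assumption that most stage-one mistakes fall on
    thin, wispy cloud near ambiguous boundaries."""
    refined = ground_truth.copy()
    refined[stage1_pred != ground_truth] = THIN_CLOUD
    return refined

# toy example: a 2x3 binary ground-truth mask and a stage-one prediction
gt = np.array([[SKY, CLOUD, CLOUD],
               [SKY, SKY,   CLOUD]])
pred = np.array([[SKY, SKY,   CLOUD],
                 [SKY, CLOUD, CLOUD]])

refined = refine_labels(gt, pred)
# the two disagreement pixels are promoted to THIN_CLOUD;
# refined is [[0, 2, 1], [0, 2, 1]]
```

A stage-two segmentation network would then be trained on `refined` as a three-class target, letting the model learn features specific to the ambiguous thin-cloud regions rather than forcing them into the binary sky/cloud split.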