Crowd-sourcing Applied to Photograph-Based Automatic Habitat Classification
M. Torres, G. Qiu
Published in: MAED '14, November 7, 2014
DOI: 10.1145/2661821.2661824
Citations: 3
Abstract
Habitat classification is a crucial activity for monitoring environmental biodiversity. To date, manual methods, which are laborious, time-consuming and expensive, remain the most successful approach. Most automatic methods use remotely sensed imagery, but such images lack the necessary level of detail. Previous studies have treated automatic habitat classification as an image-annotation problem and have developed a framework that uses ground-taken photographs, feature extraction and a random-forest-based classifier to automatically annotate unseen photographs with their habitats. This paper builds on that framework with two new contributions that explore the benefits of applying crowd-sourcing methodologies to automatically collect, annotate and classify habitats. First, we use Geograph, a crowd-sourcing photograph website, to collect a larger geo-referenced ground-taken photograph database, with over 3,000 photographs and 11,000 habitats. We tested the original framework on this much larger database and show that it maintains its success rate. Second, we use a crowd-sourcing mechanism to obtain higher-level semantic features, designed to address the limitations that visual features have for Fine-Grained Visual Categorization (FGVC) problems, such as habitat classification. Results show that the inclusion of these features improves the performance of the previous framework, particularly in terms of precision.
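The framework the abstract describes — per-photograph feature vectors fed to a random-forest classifier that annotates unseen photographs with habitat labels — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature vectors, number of habitat classes, and hyperparameters here are synthetic stand-ins for the visual and crowd-sourced semantic features used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: one feature vector per photograph and a
# hypothetical habitat label per photograph (4 classes assumed here).
rng = np.random.default_rng(0)
n_photos, n_features, n_habitats = 300, 16, 4
X = rng.normal(size=(n_photos, n_features))   # per-photo feature vectors
y = rng.integers(0, n_habitats, size=n_photos)

# Hold out a test set of "unseen" photographs.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Random-forest classifier, as in the framework the paper builds on.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Annotate the unseen photographs with predicted habitat labels.
pred = clf.predict(X_test)
```

In practice the feature vectors would come from a feature-extraction stage over the ground-taken photographs (and, in this paper's extension, from crowd-sourced higher-level semantic attributes) rather than random draws.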