{"title":"遥感图像语义标注概念模糊超图","authors":"K. Amiri, Mohamed Farah, I. Farah","doi":"10.1109/ATSIP.2017.8075516","DOIUrl":null,"url":null,"abstract":"Annotation of images is largely studied in the literature and used in many application fields such as in image interpretation, indexation and retrieval. Manually annotating images gives valuable information on the semantic content of images, but is no longer acceptable when dealing with real corpora of images, especially in the era of big data. Content-based approaches had known great success to deal with large datasets, using low-level features such as color, texture, and shape, which are easy to compute automatically. Nonetheless, they suffer from the well known semantic gap problem, since they produce semantically very limited representations of images. In this paper, we propose a semantic image annotation approach that simultaneously handles contextual, spatial and spectral information of the image. We consider a predefined remotely sensed ontology and develop an annotation process that produces semantically rich hypergraphs representing objects in scenes, as well as their spatial and spectral attributes. We apply our approach to build a hypergraph corresponding to the Jasper Ridge AVIRIS image, showing the promising use of such representation in remote sensing.","PeriodicalId":259951,"journal":{"name":"2017 International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Fuzzy hypergraph of concepts for semantic annotation of remotely sensed images\",\"authors\":\"K. Amiri, Mohamed Farah, I. Farah\",\"doi\":\"10.1109/ATSIP.2017.8075516\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Annotation of images is largely studied in the literature and used in many application fields such as in image interpretation, indexation and retrieval. Manually annotating images gives valuable information on the semantic content of images, but is no longer acceptable when dealing with real corpora of images, especially in the era of big data. Content-based approaches had known great success to deal with large datasets, using low-level features such as color, texture, and shape, which are easy to compute automatically. Nonetheless, they suffer from the well known semantic gap problem, since they produce semantically very limited representations of images. In this paper, we propose a semantic image annotation approach that simultaneously handles contextual, spatial and spectral information of the image. We consider a predefined remotely sensed ontology and develop an annotation process that produces semantically rich hypergraphs representing objects in scenes, as well as their spatial and spectral attributes. 
We apply our approach to build a hypergraph corresponding to the Jasper Ridge AVIRIS image, showing the promising use of such representation in remote sensing.\",\"PeriodicalId\":259951,\"journal\":{\"name\":\"2017 International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)\",\"volume\":\"75 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ATSIP.2017.8075516\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ATSIP.2017.8075516","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Fuzzy hypergraph of concepts for semantic annotation of remotely sensed images
Image annotation is widely studied in the literature and used in many application fields, such as image interpretation, indexing and retrieval. Manually annotating images gives valuable information on the semantic content of images, but it is no longer feasible when dealing with real image corpora, especially in the era of big data. Content-based approaches have achieved great success in dealing with large datasets, using low-level features such as color, texture, and shape, which are easy to compute automatically. Nonetheless, they suffer from the well-known semantic gap problem, since they produce semantically very limited representations of images. In this paper, we propose a semantic image annotation approach that simultaneously handles contextual, spatial and spectral information of the image. We consider a predefined remote sensing ontology and develop an annotation process that produces semantically rich hypergraphs representing objects in scenes, as well as their spatial and spectral attributes. We apply our approach to build a hypergraph corresponding to the Jasper Ridge AVIRIS image, showing the promising use of such a representation in remote sensing.
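The full construction procedure is given in the paper itself; purely as an illustration of the underlying data structure, the sketch below shows one way a fuzzy hypergraph of concepts could be represented in Python, with image regions as vertices carrying spatial and spectral attributes and hyperedges grouping regions under ontology concepts with fuzzy membership degrees. All class, field and concept names here are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch of a fuzzy hypergraph of concepts for image annotation.
# Vertices are image regions (with spatial/spectral descriptors); each hyperedge
# is labeled by an ontology concept and holds fuzzy membership degrees in [0, 1].
from dataclasses import dataclass, field
from typing import Dict, Tuple


@dataclass
class Region:
    """An image region (vertex) with simple spatial and spectral descriptors."""
    region_id: str
    centroid: Tuple[float, float]          # (row, col) in pixel coordinates
    mean_spectrum: Tuple[float, ...]       # mean reflectance per spectral band


@dataclass
class ConceptHyperedge:
    """A hyperedge labeled by an ontology concept, with fuzzy memberships."""
    concept: str                                              # e.g. "Vegetation"
    members: Dict[str, float] = field(default_factory=dict)   # region_id -> degree

    def add(self, region_id: str, degree: float) -> None:
        # Clamp the membership degree to [0, 1].
        self.members[region_id] = max(0.0, min(1.0, degree))


class FuzzyHypergraph:
    """Collects regions (vertices) and concept-labeled hyperedges."""

    def __init__(self) -> None:
        self.regions: Dict[str, Region] = {}
        self.edges: Dict[str, ConceptHyperedge] = {}

    def add_region(self, region: Region) -> None:
        self.regions[region.region_id] = region

    def annotate(self, region_id: str, concept: str, degree: float) -> None:
        # Create the concept hyperedge on first use, then record the membership.
        edge = self.edges.setdefault(concept, ConceptHyperedge(concept))
        edge.add(region_id, degree)


if __name__ == "__main__":
    hg = FuzzyHypergraph()
    hg.add_region(Region("r1", centroid=(120.0, 45.0), mean_spectrum=(0.12, 0.34, 0.56)))
    hg.annotate("r1", "Vegetation", 0.8)   # region r1 belongs to "Vegetation" with degree 0.8
    hg.annotate("r1", "Soil", 0.2)
    print(hg.edges["Vegetation"].members)  # {'r1': 0.8}
```

In such a representation, a single region can belong to several concept hyperedges with different degrees, which is what lets the annotation remain fuzzy rather than forcing a single crisp label per region.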