{"title":"Image Retrieval with a Visual Thesaurus","authors":"Yanzhi Chen, A. Dick, A. Hengel","doi":"10.1109/DICTA.2010.11","abstract":"Current state-of-the-art image retrieval methods represent images as an unordered collection of local patches, each of which is classified as a \"visual word\" from a fixed vocabulary. This paper presents a simple but innovative way to uncover the spatial relationships between visual words, so that words representing the same latent topic can be discovered and the retrieval results improved. The method is borrowed from text retrieval, and is analogous to a text thesaurus in that it describes a broad set of equivalence relationships between words. We evaluate our method on the popular Oxford Buildings dataset, which makes it possible to compare our method with previous work on image retrieval; the results show that it is comparable to more complex state-of-the-art methods.","journal":"2010 International Conference on Digital Image Computing: Techniques and Applications","publicationDate":"2010-12-01","publicationType":"Journal Article","citationCount":3,"url":"https://doi.org/10.1109/DICTA.2010.11","platform":"Semanticscholar"}
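The bag-of-visual-words representation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: `quantize`, `bag_of_words`, and the `thesaurus` dictionary are hypothetical names, the vocabulary here is a hand-picked toy rather than one learned by clustering, and the synonym mapping is supplied by hand, whereas the paper derives equivalence relationships from the spatial co-occurrence of visual words.

```python
import numpy as np

def quantize(descriptors, vocabulary):
    """Assign each local descriptor to its nearest visual word (vocabulary centroid)."""
    # pairwise squared distances, shape (n_descriptors, n_words)
    dists = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

def bag_of_words(descriptors, vocabulary, thesaurus=None):
    """L1-normalised histogram of visual-word occurrences: the unordered 'bag'.

    If a thesaurus is given, each word is first mapped to the representative
    of its equivalence class, so synonymous words share one histogram bin.
    """
    words = quantize(descriptors, vocabulary)
    if thesaurus is not None:
        words = np.array([thesaurus[w] for w in words])
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / max(hist.sum(), 1.0)

# toy example: vocabulary of 3 visual words in a 2-D descriptor space
vocab = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
desc = np.array([[0.1, 0.0], [0.9, 1.1], [5.2, 4.9], [0.0, 0.2]])
print(bag_of_words(desc, vocab))                      # [0.5, 0.25, 0.25]

# hypothetical thesaurus treating words 0 and 1 as the same latent topic
print(bag_of_words(desc, vocab, {0: 0, 1: 0, 2: 2}))  # [0.75, 0.0, 0.25]
```

Collapsing synonymous words before histogramming is what lets a thesaurus improve retrieval: two images of the same scene whose patches quantize to different but equivalent words still produce matching histograms.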