Automatic Image Annotation Based on Visual Cognitive Theory
Y. Kamoi, Y. Furukawa, T. Sato, Y. Kiwada, T. Takagi
NAFIPS 2007 - 2007 Annual Meeting of the North American Fuzzy Information Processing Society, 2007-06-24
DOI: 10.1109/NAFIPS.2007.383844 (https://doi.org/10.1109/NAFIPS.2007.383844)
Citations: 2
Abstract
This paper presents a new method of automatic image annotation, based on visual cognitive theory, that improves the accuracy of image recognition by considering two semantic levels of keywords that give feedback to each other. Our system first segments an image and recognizes objects using the K-Nearest Neighbor (KNN) algorithm. It then recognizes contexts by combining the recognized objects with networked knowledge, and finally re-recognizes objects according to these contexts. We conducted experiments on natural images and verified the system's effectiveness: recognition rates improved compared with plain KNN. These results show that a system that takes the semantic levels of keywords into account has great potential for enhancing image recognition.
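The three-step loop the abstract describes (KNN object labels, context inference, context-aware re-labeling) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the feature vectors, training regions, and the context-to-object knowledge table are all hypothetical stand-ins for the paper's segmented image features and networked knowledge.

```python
# Sketch of the two-level annotation loop: object-level KNN labels feed
# a context inference step, and the inferred context then constrains a
# second round of object recognition. All data below is illustrative.
from collections import Counter
import math

# Labeled training regions: (feature vector, object keyword). Assumed data.
TRAIN = [
    ((0.9, 0.1), "sky"), ((0.8, 0.2), "sky"),
    ((0.2, 0.9), "sea"), ((0.3, 0.8), "sea"),
    ((0.5, 0.5), "sand"), ((0.6, 0.4), "sand"),
]

# Hypothetical networked knowledge: each context and the objects it supports.
CONTEXT_KNOWLEDGE = {
    "beach": {"sky", "sea", "sand"},
    "desert": {"sky", "sand"},
}

def knn_label(x, k=3, pool=TRAIN):
    """Label one segmented region by majority vote over its k nearest neighbors."""
    nearest = sorted(pool, key=lambda t: math.dist(x, t[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

def infer_context(labels):
    """Pick the context whose supported-object set best covers the labels."""
    return max(CONTEXT_KNOWLEDGE,
               key=lambda c: len(CONTEXT_KNOWLEDGE[c] & set(labels)))

def annotate(regions, k=3):
    """Full loop: KNN labels -> context -> context-constrained re-recognition."""
    labels = [knn_label(x, k) for x in regions]          # step 1: objects
    context = infer_context(labels)                      # step 2: context
    allowed = CONTEXT_KNOWLEDGE[context]
    # Step 3: re-recognize, restricting KNN voting to context-consistent objects.
    pool = [t for t in TRAIN if t[1] in allowed]
    relabeled = [knn_label(x, k, pool) for x in regions]
    return context, relabeled
```

In this toy setup, `annotate([(0.85, 0.15), (0.25, 0.85)])` labels the regions "sky" and "sea", infers the "beach" context, and re-labels under that constraint; the feedback step matters when an initial label (e.g. a spurious "snow") conflicts with the context and gets corrected in the second pass.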