Exploration of Social and Web Image Search Results Using Tensor Decomposition
Liuqing Yang, E. Papalexakis
2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1915-1920, July 2017
DOI: 10.1109/CVPRW.2017.239
Citations: 0
Abstract
How do socially popular images differ from authoritative images indexed by web search engines? Empirically, social images on platforms such as Twitter often look more diverse and ultimately more "personal" than images returned by web image search, some of which are so-called "stock" images. Are there image features that we can automatically learn which differentiate the two types of image search results, or features that the two have in common? This paper outlines a vision towards answering these questions. We propose a tensor-based approach that learns key features of social and web image search results, and provides a comprehensive framework for analyzing and understanding the similarities and differences between the two types of content. We demonstrate our preliminary results on a small-scale study, and conclude with future research directions for this exciting and novel application.
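The abstract does not spell out which tensor decomposition is used, but a common choice for this kind of multi-aspect analysis is the CP/PARAFAC decomposition, which factors a multi-way array (e.g., a hypothetical query × image-feature × source tensor, where "source" distinguishes social from web results) into rank-one components whose factor vectors can be read as latent features. The following is a minimal NumPy sketch of CP via alternating least squares (ALS); the tensor layout and all names are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def khatri_rao(A, B):
    # Column-wise Kronecker product: (I*J) x R, with rows ordered i*J + j.
    R = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, R)

def unfold(T, mode):
    # Mode-n matricization: move the given mode to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, rank, n_iter=200, seed=0):
    """Rank-R CP decomposition of a 3-way tensor by alternating least squares.

    Returns factor matrices [A, B, C] so that
    T[i, j, k] ≈ sum_r A[i, r] * B[j, r] * C[k, r].
    """
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for mode in range(3):
            others = [factors[m] for m in range(3) if m != mode]
            # Least-squares update for this mode, holding the others fixed:
            # X_(n) = F_n * khatri_rao(others)^T
            kr = khatri_rao(others[0], others[1])
            gram = (others[0].T @ others[0]) * (others[1].T @ others[1])
            factors[mode] = unfold(T, mode) @ kr @ np.linalg.pinv(gram)
    return factors
```

On an exactly low-rank tensor, the recovered factors reconstruct the input closely; on real (noisy, approximately low-rank) data such as image-feature tensors, the factor vectors serve as the learned latent features for each mode.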