{"title":"Fixation-indices based correlation between text and image visual features of webpages","authors":"Sandeep Vidyapu, V. Saradhi, S. Bhattacharya","doi":"10.1145/3204493.3204566","DOIUrl":null,"url":null,"abstract":"Web elements associate with a set of visual features based on their data modality. For example, text associated with font-size and font-family whereas images associate with intensity and color. The unavailability of methods to relate these heterogeneous visual features limiting the attention-based analyses on webpages. In this paper, we propose a novel approach to establish the correlation between text and image visual features that influence users' attention. We pair the visual features of text and images based on their associated fixation-indices obtained from eye-tracking. From paired data, a common subspace is learned using Canonical Correlation Analysis (CCA) to maximize the correlation between them. The performance of the proposed approach is analyzed through a controlled eye-tracking experiment conducted on 51 real-world webpages. A very high correlation of 99.48% is achieved between text and images with text related font families and image related color features influencing the correlation.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"45 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3204493.3204566","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
Web elements are associated with sets of visual features determined by their data modality: for example, text is characterized by font-size and font-family, whereas images are characterized by intensity and color. The lack of methods to relate these heterogeneous visual features limits attention-based analyses of webpages. In this paper, we propose a novel approach to establish the correlation between the text and image visual features that influence users' attention. We pair the visual features of text and images based on their associated fixation indices obtained from eye tracking. From the paired data, a common subspace is learned using Canonical Correlation Analysis (CCA) to maximize the correlation between the two feature sets. The performance of the proposed approach is analyzed through a controlled eye-tracking experiment conducted on 51 real-world webpages. A very high correlation of 99.48% is achieved between text and images, with text-related font-family features and image-related color features driving the correlation.
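The core step the abstract describes — learning a common subspace that maximizes correlation between two paired feature views — can be sketched as below. This is a minimal, generic CCA implementation in NumPy, not the authors' code; the feature dimensions, the synthetic data, and the `cca_correlations` function name are all illustrative assumptions. In the paper's setting, each row of `X` would hold the visual features of a fixated text element and the corresponding row of `Y` the features of the image fixated at the paired fixation index.

```python
import numpy as np

def cca_correlations(X, Y, reg=1e-8):
    """Canonical correlations between two paired feature views.

    X: (n, p) text visual features; Y: (n, q) image visual features.
    Rows are paired by a shared key (here: matched fixation indices).
    Returns the canonical correlations in descending order.
    """
    # Center each view.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    # (Cross-)covariance matrices, lightly regularized for stability.
    Sxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(S):
        # Symmetric inverse square root via eigendecomposition.
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(np.clip(w, reg, None))) @ V.T

    # Whiten both views; the singular values of the whitened
    # cross-covariance are exactly the canonical correlations.
    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)

# Illustrative usage with synthetic paired data sharing one latent factor.
rng = np.random.default_rng(0)
z = rng.normal(size=(200, 1))                       # shared latent "attention" signal
X = np.hstack([z + 0.1 * rng.normal(size=(200, 1)),  # correlated text feature
               rng.normal(size=(200, 2))])           # unrelated text features
Y = np.hstack([z + 0.1 * rng.normal(size=(200, 1)),  # correlated image feature
               rng.normal(size=(200, 3))])           # unrelated image features
rho = cca_correlations(X, Y)
```

The first canonical correlation recovers the shared latent structure (close to 1 on this synthetic data), analogous to the high text-image correlation the paper reports on fixation-paired features.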