This paper presents a way to track eye gaze using a webcam and to map the gaze data onto the display screen while compensating for head pose and orientation. First, we showed a blank screen with red dots to 10 individuals and recorded their eye-gaze patterns and the head orientation associated with each screen location through automated annotation. Then, we trained a neural network to learn the relationship between eye gaze, head pose, and screen location. The proposed method maps eye gaze to the screen with 68.3% accuracy. Next, using the trained model to estimate on-screen gaze, we evaluated the content of a website, which gives us an automated way to evaluate its UI. The evaluation metric might be combined with several other metrics to define a standard for web design and layout. It also gives insight into users' likes and dislikes and into the important areas of a website. Moreover, eye tracking with only a webcam makes the technology easy to adopt in various fields, opening the prospect of numerous applications.
M. S. Hossain, A. Ali, M. Amin. "Eye-Gaze to Screen Location Mapping for UI Evaluation of Webpages." In Proceedings of the 3rd International Conference on Graphics and Signal Processing, June 2019. DOI: https://doi.org/10.1145/3338472.3338483
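The gaze-to-screen mapping described in the abstract could be realized with a small feed-forward network. The sketch below is purely illustrative: the feature layout (pupil-centre offsets plus head-pose angles), the layer sizes, and the 3x3 grid of screen regions are assumptions, and random weights stand in for the trained parameters.

```python
import numpy as np

# Hypothetical sketch of an eye-gaze + head-pose -> screen-region classifier.
# Feature layout, layer sizes and the 3x3 screen grid are illustrative
# assumptions, not the paper's exact setup.

rng = np.random.default_rng(0)

N_FEATURES = 7   # e.g. 2 pupil-centre offsets per eye + yaw, pitch, roll
N_HIDDEN = 16
N_REGIONS = 9    # screen divided into a 3x3 grid of target regions

# Randomly initialised weights stand in for the trained parameters.
W1 = rng.normal(0.0, 0.1, (N_FEATURES, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_HIDDEN, N_REGIONS))
b2 = np.zeros(N_REGIONS)

def predict_region(features: np.ndarray) -> int:
    """Forward pass: gaze + head-pose features -> most likely screen region."""
    h = np.tanh(features @ W1 + b1)   # hidden layer
    logits = h @ W2 + b2              # one score per screen region
    return int(np.argmax(logits))

sample = rng.normal(size=N_FEATURES)  # one synthetic gaze/pose sample
region = predict_region(sample)
print(region)
```

In the paper's setting, the network would be trained on the automatically annotated (gaze, head pose, red-dot location) samples; head-pose features let the model compensate for orientation rather than relying on gaze direction alone.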
Barcode technology is widely applied in industrial automatic identification for its low cost, high reliability, and fast real-time performance. Due to the complexity of low light, rotation, and blur in industrial settings, several barcode localization approaches that excel in speed or accuracy fail to accurately locate, or even detect, the barcode. This paper proposes a real-time barcode localization approach that effectively copes with these problems in low-quality images. First, we use the gradient information of the pixels to obtain both an orientation map and a magnitude map. Then, we apply Shannon entropy to compute a saliency map and segment the high-scoring salient patches. Next, we use a smoothing filter to remove noise and connect the salient patches into candidate barcode BLOBs (Binary Large OBjects). Finally, the correct barcode is selected from the candidate BLOBs using a covariance matrix. We collected 500 experimental images from an industrial site covering reflection, rotation, and low-illumination conditions. Experimental results on this dataset show that our method significantly exceeds three other state-of-the-art methods in accuracy.
Xiang Pan, Dong Li, Weijia Wu, Hong Zhou. "Efficient Barcode Localization Method for Low-Quality Images." In Proceedings of the 3rd International Conference on Graphics and Signal Processing, June 2019. DOI: https://doi.org/10.1145/3338472.3338474
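The first two stages of the pipeline (gradient orientation/magnitude maps, then an entropy-based saliency score) can be sketched as below. This is a minimal illustration under assumed parameters (patch size, histogram bins); the paper's smoothing, BLOB linking, and covariance-based selection stages are not reproduced. The key intuition: barcode patches concentrate their gradients along one direction, so their orientation histogram has low Shannon entropy.

```python
import numpy as np

def orientation_and_magnitude(img: np.ndarray):
    """Per-pixel gradient orientation (radians) and magnitude maps."""
    gy, gx = np.gradient(img.astype(float))
    return np.arctan2(gy, gx), np.hypot(gx, gy)

def patch_entropy(orient: np.ndarray, mag: np.ndarray, bins: int = 8) -> float:
    """Shannon entropy of the magnitude-weighted orientation histogram.
    Barcode-like patches align gradients in one direction -> low entropy."""
    hist, _ = np.histogram(orient, bins=bins, range=(-np.pi, np.pi), weights=mag)
    total = hist.sum()
    p = hist / total if total > 0 else np.full(bins, 1.0 / bins)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Synthetic check: vertical stripes (barcode-like) vs. uniform noise.
stripes = np.tile(np.array([0, 0, 255, 255] * 8, dtype=float), (32, 1))
noise = np.random.default_rng(1).uniform(0, 255, (32, 32))

e_bar = patch_entropy(*orientation_and_magnitude(stripes))
e_noise = patch_entropy(*orientation_and_magnitude(noise))
print(e_bar < e_noise)  # stripes have lower orientation entropy
```

Thresholding such per-patch entropy scores yields the salient patches that the later smoothing and BLOB-connection stages assemble into barcode candidates.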