Efficient Interaction with Automotive Heads-Up Displays using Appearance-based Gaze Tracking
L. Murthy, Gyanig Kumar, Modiksha Madan, Sachin Deshmukh, P. Biswas
Adjunct Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 17 September 2022. DOI: 10.1145/3544999.3554818
Citations: 2
Abstract
Automotive Head-Up Displays (HUDs) offer a promising alternative to the existing Head-Down Displays (HDDs) inside the car. Since HUDs lie closer to the driver's line of sight, they reduce drivers' eyes-off-the-road time while interacting with them. Yet existing HUDs do not provide interactivity, restricting their potential to mere information visualization. In this work, we proposed a novel webcam-based gaze tracking system for interacting with icons on the HUD. We conducted a user study on a driving simulator and compared the proposed system with a gesture-based system. We collected quantitative and qualitative metrics from 8 participants while they performed a dual-task trial. We observed that, using the proposed eye gaze system, users were able to select icons on the HUD as fast as with the gesture modality. Further, users perceived a significantly lower cognitive load and expressed a significantly higher preference for the proposed eye gaze control system than for the gesture-based system.
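The abstract does not specify how gaze estimates are turned into icon selections on the HUD. As a minimal illustration only, the sketch below shows one common approach, dwell-time activation, where an icon is triggered after the estimated gaze point rests on it for a fixed duration. The class names (Icon, DwellSelector), the dwell threshold, the icon layout, and the simulated gaze samples are all assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch (not the authors' implementation): dwell-time selection of
# HUD icons from a stream of 2D gaze estimates in HUD pixel coordinates.

from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Icon:
    name: str
    x: float   # left edge of the icon region
    y: float   # top edge
    w: float   # width
    h: float   # height

    def contains(self, gx: float, gy: float) -> bool:
        return self.x <= gx <= self.x + self.w and self.y <= gy <= self.y + self.h


class DwellSelector:
    """Selects an icon once gaze has stayed on it for `dwell_s` seconds."""

    def __init__(self, icons: List[Icon], dwell_s: float = 1.0):
        self.icons = icons
        self.dwell_s = dwell_s
        self._current: Optional[Icon] = None
        self._enter_t: float = 0.0

    def update(self, gaze: Tuple[float, float], t: float) -> Optional[Icon]:
        gx, gy = gaze
        hit = next((ic for ic in self.icons if ic.contains(gx, gy)), None)
        if hit is not self._current:
            # Gaze moved to a different icon (or off all icons): restart the timer.
            self._current, self._enter_t = hit, t
            return None
        if hit is not None and (t - self._enter_t) >= self.dwell_s:
            self._enter_t = t  # reset so the icon does not retrigger every frame
            return hit
        return None


if __name__ == "__main__":
    icons = [Icon("music", 100, 50, 80, 80), Icon("navigation", 220, 50, 80, 80)]
    selector = DwellSelector(icons, dwell_s=1.0)
    # Simulated (x, y, timestamp) samples; a real system would feed in
    # webcam-based appearance gaze estimates here.
    for gx, gy, t in [(130, 90, 0.0), (135, 85, 0.5), (132, 88, 1.1)]:
        chosen = selector.update((gx, gy), t)
        if chosen:
            print(f"Selected icon: {chosen.name}")
```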