{"title":"协同广域无模型增强现实注视方向可视化技术","authors":"Yuan Li, Feiyu Lu, W. Lages, D. Bowman","doi":"10.1145/3357251.3357583","DOIUrl":null,"url":null,"abstract":"In collaborative tasks, it is often important for users to understand their collaborator’s gaze direction or gaze target. Using an augmented reality (AR) display, a ray representing the collaborator’s gaze can be used to convey such information. In wide-area AR, however, a simplistic virtual ray may be ambiguous at large distances, due to the lack of occlusion cues when a model of the environment is unavailable. We describe two novel visualization techniques designed to improve gaze ray effectiveness by facilitating visual matching between rays and targets (Double Ray technique), and by providing spatial cues to help users understand ray orientation (Parallel Bars technique). In a controlled experiment performed in a simulated AR environment, we evaluated these gaze ray techniques on target identification tasks with varying levels of difficulty. The experiment found that, assuming reliable tracking and an accurate collaborator, the Double Ray technique is highly effective at reducing visual ambiguity, but that users found it difficult to use the spatial information provided by the Parallel Bars technique. We discuss the implications of these findings for the design of collaborative mobile AR systems for use in large outdoor areas.","PeriodicalId":370782,"journal":{"name":"Symposium on Spatial User Interaction","volume":"132 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"22","resultStr":"{\"title\":\"Gaze Direction Visualization Techniques for Collaborative Wide-Area Model-Free Augmented Reality\",\"authors\":\"Yuan Li, Feiyu Lu, W. Lages, D. 
Bowman\",\"doi\":\"10.1145/3357251.3357583\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In collaborative tasks, it is often important for users to understand their collaborator’s gaze direction or gaze target. Using an augmented reality (AR) display, a ray representing the collaborator’s gaze can be used to convey such information. In wide-area AR, however, a simplistic virtual ray may be ambiguous at large distances, due to the lack of occlusion cues when a model of the environment is unavailable. We describe two novel visualization techniques designed to improve gaze ray effectiveness by facilitating visual matching between rays and targets (Double Ray technique), and by providing spatial cues to help users understand ray orientation (Parallel Bars technique). In a controlled experiment performed in a simulated AR environment, we evaluated these gaze ray techniques on target identification tasks with varying levels of difficulty. The experiment found that, assuming reliable tracking and an accurate collaborator, the Double Ray technique is highly effective at reducing visual ambiguity, but that users found it difficult to use the spatial information provided by the Parallel Bars technique. 
We discuss the implications of these findings for the design of collaborative mobile AR systems for use in large outdoor areas.\",\"PeriodicalId\":370782,\"journal\":{\"name\":\"Symposium on Spatial User Interaction\",\"volume\":\"132 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"22\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Symposium on Spatial User Interaction\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3357251.3357583\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Symposium on Spatial User Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3357251.3357583","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Gaze Direction Visualization Techniques for Collaborative Wide-Area Model-Free Augmented Reality
In collaborative tasks, it is often important for users to understand their collaborator’s gaze direction or gaze target. Using an augmented reality (AR) display, a ray representing the collaborator’s gaze can be used to convey such information. In wide-area AR, however, a simplistic virtual ray may be ambiguous at large distances, due to the lack of occlusion cues when a model of the environment is unavailable. We describe two novel visualization techniques designed to improve gaze ray effectiveness by facilitating visual matching between rays and targets (Double Ray technique), and by providing spatial cues to help users understand ray orientation (Parallel Bars technique). In a controlled experiment performed in a simulated AR environment, we evaluated these gaze ray techniques on target identification tasks with varying levels of difficulty. The experiment found that, assuming reliable tracking and an accurate collaborator, the Double Ray technique is highly effective at reducing visual ambiguity, but that users found it difficult to use the spatial information provided by the Parallel Bars technique. We discuss the implications of these findings for the design of collaborative mobile AR systems for use in large outdoor areas.
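The ambiguity described above has a simple geometric basis: a fixed angular error in the gaze ray sweeps out a larger positional offset the farther the target is. A minimal sketch (our own illustration, not code from the paper; the function names and vectors are assumptions) makes the scaling concrete:

```python
import math

def angular_error_deg(origin, direction, target):
    """Angle (degrees) between a gaze ray and the direction from its origin to a target."""
    to_target = [t - o for t, o in zip(target, origin)]
    dot = sum(d * t for d, t in zip(direction, to_target))
    norm_d = math.sqrt(sum(d * d for d in direction))
    norm_t = math.sqrt(sum(t * t for t in to_target))
    cos_a = max(-1.0, min(1.0, dot / (norm_d * norm_t)))
    return math.degrees(math.acos(cos_a))

def ray_offset(error_deg, distance):
    """Lateral displacement of the ray at a given distance for a given angular error."""
    return math.tan(math.radians(error_deg)) * distance
```

With a 1-degree angular error, the ray is displaced by roughly 0.17 m at 10 m but about 1.75 m at 100 m, which in a dense scene can overlap several candidate targets; this is why, absent occlusion cues from an environment model, a bare ray alone may not disambiguate distant targets.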