[POSTER] Vergence-Based AR X-ray Vision

Y. Kitajima, Sei Ikeda, Kosuke Sato
{"title":"[POSTER] Vergence-Based AR X-ray Vision","authors":"Y. Kitajima, Sei Ikeda, Kosuke Sato","doi":"10.1109/ISMAR.2015.58","DOIUrl":null,"url":null,"abstract":"The ideal AR x-ray vision should enable users to clearly observe and grasp not only occludees, but also occluders. We propose a novel selective visualization method of both occludee and oc-cluder layers with dynamic opacity depending on the user's gaze depth. Using the gaze depth as a trigger to select the layers has a essential advantage over using other gestures or spoken commands in the sense of avoiding collision between user's intentional commands and unintentional actions. Our experiment by a visual paired-comparison task shows that our method has achieved a 20% higher success rate, and significantly reduced 30% of the average task completion time than a non-selective method using a constant and half transparency.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"94 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE International Symposium on Mixed and Augmented Reality","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISMAR.2015.58","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

The ideal AR x-ray vision should enable users to clearly observe and grasp not only occludees, but also occluders. We propose a novel selective visualization method for both occludee and occluder layers, with dynamic opacity that depends on the user's gaze depth. Using gaze depth as the trigger for selecting layers has an essential advantage over gestures or spoken commands: it avoids conflicts between the user's intentional commands and unintentional actions. An experiment based on a visual paired-comparison task shows that our method achieves a 20% higher success rate and reduces average task completion time by 30% compared with a non-selective method that uses constant half transparency.
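To make the idea concrete, the sketch below shows one way a vergence-driven opacity control of this kind could be implemented. It estimates gaze depth from the angle between the two eyes' gaze rays and blends the occluder and occludee layers around the occluder's depth. The interpupillary distance, the small-angle depth approximation, the smoothstep blend, and all parameter values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Minimal sketch (not the authors' implementation): estimate gaze depth from
# binocular vergence and map it to per-layer opacity for an AR x-ray view.
# IPD, blend range, and the smoothstep mapping are assumed for illustration.

IPD_M = 0.063  # assumed interpupillary distance in metres


def gaze_depth_from_vergence(left_dir, right_dir, ipd=IPD_M):
    """Approximate fixation distance from the two normalized gaze directions.

    Uses the vergence angle between the gaze rays; depth ~ ipd / angle for
    small angles (a common approximation, not necessarily the paper's model).
    """
    left_dir = left_dir / np.linalg.norm(left_dir)
    right_dir = right_dir / np.linalg.norm(right_dir)
    cos_a = np.clip(np.dot(left_dir, right_dir), -1.0, 1.0)
    angle = np.arccos(cos_a)           # vergence angle in radians
    return ipd / max(angle, 1e-4)      # guard against near-parallel gaze


def layer_opacities(gaze_depth, occluder_depth, blend_range=0.3):
    """Map gaze depth to (occluder_alpha, occludee_alpha).

    Fixating near the occluder surface keeps the occluder opaque; shifting
    the fixation behind it fades the occludee layer in. blend_range (metres)
    controls the softness of the transition and is an assumed parameter.
    """
    # Normalized distance of the fixation point behind the occluder, in [0, 1].
    t = np.clip((gaze_depth - occluder_depth) / blend_range, 0.0, 1.0)
    t = t * t * (3.0 - 2.0 * t)        # smoothstep for a gentle transition
    return 1.0 - t, t                  # (occluder_alpha, occludee_alpha)


if __name__ == "__main__":
    # Example: eyes converging on a point roughly 1.6 m away, occluder at 1.0 m,
    # so the occludee layer should be fully visible.
    left = np.array([0.02, 0.0, 1.0])
    right = np.array([-0.02, 0.0, 1.0])
    depth = gaze_depth_from_vergence(left, right)
    print(layer_opacities(depth, occluder_depth=1.0))
```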