The effectiveness of an AR-based context-aware assembly support system in object assembly

Bui Minh Khuong, K. Kiyokawa, Andrew Miller, Joseph J. La Viola, T. Mashita, H. Takemura
{"title":"The effectiveness of an AR-based context-aware assembly support system in object assembly","authors":"Bui Minh Khuong, K. Kiyokawa, Andrew Miller, Joseph J. La Viola, T. Mashita, H. Takemura","doi":"10.1109/VR.2014.6802051","DOIUrl":null,"url":null,"abstract":"This study evaluates the effectiveness of an AR-based context-aware assembly support system with AR visualization modes proposed in object assembly. Although many AR-based assembly support systems have been proposed, few keep track of the assembly status in real-time and automatically recognize error and completion states at each step. Naturally, the effectiveness of such context-aware systems remains unexplored. Our test-bed system displays guidance information and error detection information corresponding to the recognized assembly status in the context of building block (LEGO) assembly. A user wearing a head mounted display (HMD) can intuitively build a building block structure on a table by visually confirming correct and incorrect blocks and locating where to attach new blocks. We proposed two AR visualization modes, one of them that displays guidance information directly overlaid on the physical model, and another one in which guidance information is rendered on a virtual model adjacent to the real model. An evaluation was conducted to comparatively evaluate these AR visualization modes as well as determine the effectiveness of context-aware error detection. Our experimental results indicate the visualization mode that shows target status next to real objects of concern outperforms the traditional direct overlay under moderate registration accuracy and marker-based tracking.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"83","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE Virtual Reality (VR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VR.2014.6802051","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 83

Abstract

This study evaluates the effectiveness of an AR-based context-aware assembly support system and its proposed AR visualization modes in object assembly. Although many AR-based assembly support systems have been proposed, few keep track of the assembly status in real time and automatically recognize error and completion states at each step, so the effectiveness of such context-aware systems remains largely unexplored. Our test-bed system displays guidance information and error detection information corresponding to the recognized assembly status in the context of building block (LEGO) assembly. A user wearing a head-mounted display (HMD) can intuitively build a block structure on a table by visually confirming correct and incorrect blocks and locating where to attach new blocks. We propose two AR visualization modes: one displays guidance information directly overlaid on the physical model, and the other renders guidance information on a virtual model adjacent to the real model. An evaluation was conducted to compare these AR visualization modes and to determine the effectiveness of context-aware error detection. Our experimental results indicate that the visualization mode showing the target status next to the real objects of concern outperforms the traditional direct overlay under moderate registration accuracy with marker-based tracking.
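The abstract does not give implementation details for the status-recognition step, but the core idea of context-aware error detection can be illustrated with a minimal sketch: compare the observed set of placed blocks against the target model for the current step, classify each block as correct, incorrect, or missing, and report completion. The `Block` and `check_assembly` names and the grid-cell position field below are hypothetical illustrations, not the authors' data structures, and the sketch ignores the tracking and registration that the real system performs.

```python
# Minimal sketch (not the authors' implementation) of per-step assembly status
# checking: compare observed block placements against the target model, flag
# incorrect blocks, list blocks still to be attached, and report completion.
from dataclasses import dataclass
from enum import Enum, auto


class BlockState(Enum):
    CORRECT = auto()    # block present and matches the target model
    INCORRECT = auto()  # block placed where the target model has none
    MISSING = auto()    # target block not yet attached


@dataclass(frozen=True)
class Block:
    position: tuple[int, int, int]  # grid cell on the baseplate (hypothetical)
    color: str


def check_assembly(observed: set[Block], target: set[Block]) -> dict[str, object]:
    """Classify the current assembly status for one guidance step."""
    correct = observed & target
    incorrect = observed - target   # placed blocks that should not be there
    missing = target - observed     # target blocks still to be attached
    return {
        "correct": correct,
        "incorrect": incorrect,
        "next_to_attach": missing,
        "step_complete": not incorrect and not missing,
    }


if __name__ == "__main__":
    target = {Block((0, 0, 0), "red"), Block((1, 0, 0), "blue")}
    observed = {Block((0, 0, 0), "red"), Block((1, 1, 0), "blue")}  # one misplaced
    status = check_assembly(observed, target)
    print("step complete:", status["step_complete"])
    print("incorrect blocks:", status["incorrect"])
```

In the actual test-bed, a comparison of this kind would be driven by the system's marker-based tracking of the physical blocks, and the resulting status would determine the guidance rendered in whichever of the two visualization modes is active.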