
2010 IEEE International Symposium on Mixed and Augmented Reality: Latest Publications

MTMR: A conceptual interior design framework integrating Mixed Reality with the Multi-Touch tabletop interface
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643606
Dong Wei, S. Zhou, Du Xie
This paper introduces a conceptual interior design framework, Multi-Touch Mixed Reality (MTMR), which integrates mixed reality with the multi-touch tabletop interface to provide an intuitive and efficient interface for collaborative design and, at the same time, an augmented 3D view for users. Under this framework, multiple designers can carry out design work simultaneously on the top view displayed on the tabletop, while live video of the ongoing design work is captured, augmented by overlaying virtual 3D furniture models onto their 2D virtual counterparts, and shown on a vertical screen in front of the tabletop. Meanwhile, the remote client's camera view of the physical room is augmented with the interior design layout in real time: as the designers place, move, and modify the virtual furniture models on the tabletop, the client sees the corresponding life-size 3D virtual furniture models residing, moving, and changing in the physical room through the camera view on his/her screen. By adopting MTMR, which we argue may also apply to other kinds of collaborative work, the designers can expect a good working experience in terms of naturalness and intuitiveness, while the client can be involved in the design process and view the design result without moving heavy furniture around. By presenting MTMR, we hope to bring reliable and precise freehand interaction, via multi-touch input on tabletop interfaces, to mixed reality systems.
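The abstract's central mapping, from a furniture item's 2D pose on the tabletop top view to a life-size 3D pose in the client's room, can be illustrated as a simple similarity transform. The sketch below is a hypothetical rendering of that idea, assuming a uniform tabletop-to-room scale factor and a Z-up room frame; none of the names or values come from the paper.

```python
import numpy as np

def tabletop_to_room(x_table, y_table, theta, scale=50.0,
                     room_origin=(0.0, 0.0, 0.0)):
    """Place a furniture model in the room from its 2D tabletop pose.

    x_table, y_table: position on the tabletop top view (table units)
    theta:            rotation about the room's up (Z) axis, in radians
    scale:            assumed uniform tabletop-to-room scale factor
    Returns a 4x4 homogeneous transform in room coordinates.
    """
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0],
                          [s,  c, 0.0],
                          [0.0, 0.0, 1.0]])  # yaw about +Z
    T[:3, 3] = np.asarray(room_origin) + scale * np.array([x_table, y_table, 0.0])
    return T

# A designer drags a sofa to (1.2, 0.8) on the table and rotates it 45 degrees;
# the client's view would place the life-size model accordingly.
print(tabletop_to_room(1.2, 0.8, np.pi / 4))
```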
Citations: 16
Various tangible devices suitable for mixed reality interactions
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643608
Taichi Yoshida, M. Tsukadaira, Asako Kimura, F. Shibata, H. Tamura
In this paper, we present various novel tangible devices suitable for interaction in a mixed reality (MR) environment. They are designed to make the best use of MR's defining feature: users can touch and handle both virtual and physical objects. Furthermore, because we consider usability and intuitiveness important characteristics of an interface, we designed our devices to imitate traditional tools, helping users understand how to use them.
Citations: 2
Camera motion tracking in a dynamic scene
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643609
Jung-Jae Yu, Jae-Hean Kim
To insert a virtual object into a real image, the object's position must appear seamless as the camera moves. This requires camera tracking that estimates all internal and external camera parameters in each frame with enough stability that the visible drift between real and virtual elements is negligible. In film post-production, matchmoving software based on structure from motion (SfM) is typically used for camera tracking. However, most such software fails when tracking the camera in a dynamic scene in which a moving foreground object, such as a real actor, occupies a large part of the background. Therefore, this study proposes a camera tracking system that uses an auxiliary camera to estimate the motion of the main shooting camera and the 3D positions of background features in a dynamic scene. A novel reconstruction and connection method was developed for feature tracks that are occluded by a foreground object. Experiments on a 2K sequence demonstrated the feasibility of the proposed method.
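One plausible reading of the auxiliary-camera setup, assuming the auxiliary camera is rigidly mounted to the main shooting camera, is that the main camera's pose follows from the auxiliary pose by composing it with fixed rig extrinsics. The sketch below illustrates only that composition; the rig calibration and all names are assumptions, not the paper's implementation.

```python
import numpy as np

def main_camera_pose(T_world_aux, T_aux_main):
    """Compose 4x4 camera-to-world poses: the main camera's pose is the
    auxiliary camera's pose times the fixed aux-to-main rig extrinsics
    (obtained once by rig calibration)."""
    return T_world_aux @ T_aux_main

# Example: auxiliary camera at the world origin; main camera mounted
# 10 cm to its right with the same orientation.
T_world_aux = np.eye(4)
T_aux_main = np.eye(4)
T_aux_main[0, 3] = 0.10
print(main_camera_pose(T_world_aux, T_aux_main))
```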
Citations: 3
Smart Glasses: An open environment for AR apps
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643622
Martin Kurze, Axel Roselius
We present an architecture [fig. 1] and runtime environment for mobile Augmented Reality applications. The architecture is based on a plugin concept on the device, a set of basic functionalities available to all apps, and a cloud-oriented processing approach. As a first running sample app, we show a face recognition service running on a mobile phone, on conventional wearable displays, and on upcoming see-through goggles. We invite interested third parties to try out the environment, the face recognition app, and the platform.
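As a hedged illustration of the plugin concept the abstract mentions, the sketch below shows a minimal device-side registry through which an AR app (such as the face recognition sample) could be dispatched camera frames; the handler could itself proxy to a cloud service. All names (`register`, `process_frame`) are invented for illustration; the abstract does not describe the actual platform API.

```python
from typing import Callable, Dict

# Device-side plugin registry (illustrative names, not the platform's API).
_PLUGINS: Dict[str, Callable[[bytes], str]] = {}

def register(name: str, handler: Callable[[bytes], str]) -> None:
    """Register an AR app plugin under a unique name."""
    _PLUGINS[name] = handler

def process_frame(name: str, frame: bytes) -> str:
    """Dispatch a camera frame to a plugin; the handler may offload the
    heavy processing to a cloud service."""
    return _PLUGINS[name](frame)

# Stand-in for the face recognition sample app (a real handler would call
# a recognition backend instead of returning a constant).
register("face_recognition", lambda frame: "alice")
print(process_frame("face_recognition", b"\x00\x01"))
```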
Citations: 5
Digital Diorama system for museum exhibition
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643582
O. Hayashi, Kazuhiro Kasada, Takuji Narumi, T. Tanikawa, M. Hirose
In this paper, we propose the Digital Diorama system to convey background information vividly. The system superimposes a computer-generated diorama scene, reconstructed from related image and video materials, onto real exhibits. To switch between and superimpose real exhibits and past photos seamlessly, we implemented a matching subsystem that estimates the camera position from which each photo was taken. Applying this subsystem to 26 past photos of the steam locomotive exhibit, we succeeded in estimating their camera positions. We then implemented and installed a prototype system at the estimated position in the Railway Museum to superimpose the virtual scene onto the real exhibit.
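A common way to estimate the camera position of a historical photo, given 2D-3D correspondences between photo features and the exhibit's geometry, is a perspective-n-point (PnP) solve. The sketch below shows that step with OpenCV; it is an illustration under that assumption, not necessarily the paper's matching pipeline. The correspondences here are synthesized from a known pose so the example runs standalone.

```python
import numpy as np
import cv2

# Assumed camera intrinsics for the historical photo.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume no lens distortion

# Six non-coplanar 3D points on the exhibit (illustrative, in meters).
obj = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                [0, 0, 1], [1, 1, 0], [1, 0, 1]], dtype=np.float64)

# Synthesize the 2D observations from a known ground-truth pose; in the
# real system these come from matching photo features against the model.
rvec_gt = np.array([[0.1], [-0.2], [0.05]])
tvec_gt = np.array([[0.3], [0.1], [5.0]])
img, _ = cv2.projectPoints(obj, rvec_gt, tvec_gt, K, dist)

ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist)
print(ok, rvec.ravel(), tvec.ravel())  # recovers the ground-truth pose
```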
Citations: 2
Positioning, tracking and mapping for outdoor augmentation
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643567
J. Karlekar, S. Zhou, W. Lu, Loh Zhi Chang, Y. Nakayama, Daniel Hii
This paper presents a novel approach to user positioning, robust tracking, and online 3D mapping for outdoor augmented reality applications. Since the coarse user pose obtained from GPS and orientation sensors is not sufficient for augmented reality, a sub-meter-accurate user pose is estimated by a one-step silhouette matching approach. Silhouette matching between the rendered 3D model and camera data is carried out with shape context descriptors, which are invariant to translation, scale, and rotation errors, giving rise to a non-iterative registration approach. Once the user is correctly positioned, further tracking is carried out with camera data alone. Drift associated with vision-based approaches is minimized by combining different feature modalities. Robust visual tracking is maintained by fusing frame-to-frame and model-to-frame feature matches: frame-to-frame tracking is accomplished with corner matching, while edges are used for model-to-frame registration. Results from the individual feature trackers are fused using a pose estimate obtained from an extended Kalman filter (EKF) and a weighted M-estimator. In scenarios where dense 3D models of the environment are not available, online incremental 3D mapping and tracking is proposed to track the user in unprepared environments; incremental mapping builds the 3D point cloud of the outdoor environment for tracking.
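The fusion of frame-to-frame and model-to-frame matches via an EKF can be illustrated with the standard Kalman measurement update, which folds each pose measurement into the running state estimate. The sketch below uses a toy 3-DoF translation state; the state layout, measurement models, and noise values are assumptions, not the paper's.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update: fold measurement z into (x, P)."""
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)           # corrected state
    P = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x, P

# Fuse two noisy observations of a 3-DoF translation; the second
# (e.g. model-to-frame edge registration) is trusted more than the
# first (frame-to-frame corner matching).
x, P = np.zeros(3), np.eye(3)
H = np.eye(3)
x, P = kf_update(x, P, np.array([0.12, -0.03, 0.90]), H, 0.10 * np.eye(3))
x, P = kf_update(x, P, np.array([0.10, -0.01, 0.95]), H, 0.02 * np.eye(3))
print(x)
```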
Citations: 38
A practical multi-viewer tabletop autostereoscopic display
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643563
Gu Ye, A. State, H. Fuchs
This paper introduces a multi-user autostereoscopic tabletop display and its associated real-time rendering methods. Tabletop displays that support both multiple viewers and autostereoscopy have been extremely difficult to construct. Our new system is inspired by the “Random Hole Display” design [11], which modified the pattern of openings in a barrier mounted in front of a flat-panel display from thin slits to a dense pattern of tiny, pseudo-randomly placed holes. This allows viewers anywhere in front of the display to see a different subset of the display's native pixels through the random-hole screen. However, a fraction of the visible pixels will be observable by more than one viewer. The main challenge is thus handling these “conflicting” pixels, which ideally must show a different color to each viewer. We introduce several solutions to this problem and describe in detail the current method of choice, a combination of color blending and approximate error diffusion, performed in real time in our GPU-based implementation. The easily reproducible design uses a patterned film barrier affixed to the display by means of a transparent polycarbonate spacer layer. We use a commercial optical tracker to locate viewers and synthesize the appropriate image (or stereoscopic image pair) for each viewer. The system degrades gracefully as the number of simultaneous views increases, and improves gracefully as the number of views decreases.
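The conflicting-pixel strategy the abstract outlines can be sketched as follows: a pixel seen by several viewers displays a blend (here simply the mean) of the colors each viewer should see, and each viewer's residual error is diffused to nearby pixels. This 1-D toy version with invented diffusion weights only illustrates the idea; the paper's GPU implementation works per hole on the random-hole barrier.

```python
import numpy as np

# Desired grayscale colors along one scanline, per viewer.
desired = {
    "A": np.array([0.9, 0.8, 0.2, 0.1]),
    "B": np.array([0.1, 0.2, 0.8, 0.9]),
}
conflicts = {1, 2}       # pixels whose hole is visible to both viewers
display = np.zeros(4)

for i in range(4):
    if i in conflicts:
        # Blend: show the mean of what each viewer should see here...
        display[i] = np.mean([c[i] for c in desired.values()])
        # ...and diffuse each viewer's residual error to the next pixel.
        for c in desired.values():
            if i + 1 < 4:
                c[i + 1] += 0.5 * (c[i] - display[i])
    else:
        display[i] = desired["A"][i]  # toy assumption: only viewer A sees it

print(display)
```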
Citations: 34
Task support system by displaying instructional video onto AR workspace
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643554
Michihiko Goto, Yuko Uematsu, H. Saito, S. Senda, A. Iketani
This paper presents an instructional support system based on augmented reality (AR). The system helps a user work intuitively by overlaying visual information, in the same way as a navigation system. In usual AR systems, the content to be overlaid onto real space is created with 3D computer graphics and, in most cases, newly authored for each application. However, many existing 2D videos already show how to take apart or assemble electric appliances and PCs, how to cook, and so on. Our system therefore employs such existing 2D videos as instructional videos. By transforming the instructional video for display according to the user's view, and overlaying it onto the user's view space, the proposed system intuitively provides the user with visual guidance. To avoid visual confusion between the displayed instructional video and the user's view, we add visual effects to the instructional video, such as transparency and contour enhancement. By dividing the instructional video into sections according to the operations required to complete a task, we ensure that the user can interactively move to the next step of the video after completing each operation; the user can thus carry out the task at his or her own pace. In a usability test, users evaluated the instructional video in our system on two tasks: a building-blocks task and an origami task. We found that a user's visibility improves when the instructional video is transformed for display according to his or her view. Furthermore, in evaluating the visual effects, we can classify them according to the task and derive guidelines for using our system as an instructional support system for various other tasks.
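The overlay step described above, transforming the instructional video according to the user's view and rendering it semi-transparently, can be sketched as a homography warp followed by alpha blending. In the sketch below the homography is a fixed illustrative matrix; in the actual system it would come from tracking the workspace.

```python
import numpy as np
import cv2

user_view = np.full((480, 640, 3), 128, np.uint8)    # stand-in camera image
video_frame = np.full((240, 320, 3), 255, np.uint8)  # stand-in video frame

# Assumed workspace mapping; a real homography would come from tracking.
H = np.array([[1.2, 0.1, 100.0],
              [0.0, 1.2, 60.0],
              [0.0, 0.0, 1.0]])
warped = cv2.warpPerspective(video_frame, H, (640, 480))

# Semi-transparent overlay (the transparency effect described above).
alpha = 0.4
mask = warped.any(axis=2)  # pixels actually covered by the warped video
blended = user_view.copy()
blended[mask] = (alpha * warped[mask]
                 + (1 - alpha) * user_view[mask]).astype(np.uint8)
```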
Citations: 40
A precise controllable projection system for projected virtual characters and its calibration
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643577
Jochen Ehnes
In this paper we describe a system for projecting virtual characters that are to live with us in the same environment. To project the characters' visual representations onto room surfaces, we use a controllable projector.
Citations: 1
Differential Instant Radiosity for mixed reality
Pub Date : 2010-11-22 DOI: 10.1109/ISMAR.2010.5643556
Martin Knecht, C. Traxler, O. Mattausch, W. Purgathofer, M. Wimmer
In this paper we present a novel, plausible realistic rendering method for mixed reality systems that is useful for many real-life application scenarios, such as architecture, product visualization, and edutainment. To let virtual objects blend seamlessly into the real environment, the real lighting conditions and the mutual illumination effects between real and virtual objects must be considered, while maintaining interactive frame rates (20–30 fps). The most important such effects are indirect illumination and the shadows cast between real and virtual objects. Our approach combines Instant Radiosity and Differential Rendering. In contrast to some previous solutions, we only need to render the scene once to obtain the mutual effects of the virtual and real scenes. The dynamic real illumination is derived from the image stream of a fish-eye lens camera. We describe a new method to assign virtual point lights to multiple primary light sources, which can be real or virtual. We use imperfect shadow maps to calculate illumination from the virtual point lights and significantly improve their accuracy by taking the surface normal of the shadow caster into account. Temporal coherence is exploited to reduce flickering artifacts. Our results show that the presented method greatly improves the illusion in mixed reality applications and significantly diminishes the artificial look of virtual objects superimposed onto real scenes.
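The Differential Rendering half of the approach rests on a standard compositing step: the camera image is corrected by the difference between a global-illumination rendering of the real-plus-virtual scene and one of the real scene alone, so that mutual shadows and color bleeding appear on real surfaces. A minimal sketch, with stand-in images in place of the two renderings:

```python
import numpy as np

camera = np.random.rand(4, 4, 3)  # real camera frame (toy size, float [0,1])
L_full = np.random.rand(4, 4, 3)  # GI rendering of real + virtual scene
L_real = np.random.rand(4, 4, 3)  # GI rendering of the real scene alone

# Background pixels: camera image plus the illumination difference the
# virtual objects cause (shadows darken, color bleeding brightens).
composite = np.clip(camera + (L_full - L_real), 0.0, 1.0)

# Pixels covered by a virtual object take the full rendering directly.
virtual_mask = np.zeros((4, 4), bool)
virtual_mask[1:3, 1:3] = True
composite[virtual_mask] = L_full[virtual_mask]
```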
Citations: 91