2015 IEEE International Symposium on Mixed and Augmented Reality: Latest Publications

[POSTER] A Step Closer To Reality: Closed Loop Dynamic Registration Correction in SAR
Pub Date : 2015-09-29 DOI: 10.1109/ISMAR.2015.34
Hemal Naik, Federico Tombari, Christoph Resch, P. Keitler, Nassir Navab
In Spatial Augmented Reality (SAR) applications, real-world objects are augmented with virtual content by means of a calibrated camera-projector system. A computer-generated (CAD) model of the real object is used to plan the positions where the virtual content is to be projected. It is often the case that the real object deviates from its CAD model, resulting in misregistered augmentations. We propose a new method to dynamically correct the planned augmentation by accounting for the unknown deviations in the object geometry. We use a closed loop approach in which the projected features are detected in the camera image and used as feedback. As a result, the registration misalignment is identified and the augmentations are corrected in the areas affected by the deviation. Our work is especially focused on SAR applications in the industrial domain, where this problem is omnipresent. We show that our method is effective and beneficial for multiple industrial applications.
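A minimal sketch of the closed-loop correction idea, assuming a calibrated projector-camera pair: a feature is projected, its position is detected in the camera image, and the observed misalignment is fed back to shift the projection until it lands on the intended target. The helper callables (`project`, `capture_and_detect`, `cam_err_to_proj`) are hypothetical placeholders for device and vision code, not the authors' API.

```python
import numpy as np

def closed_loop_correct(project, capture_and_detect, cam_err_to_proj,
                        target_cam_px, init_proj_px,
                        gain=0.7, tol=1.0, max_iter=10):
    """Iteratively nudge a projected feature until the camera sees it at the target.

    project(px)            -- draw the feature at projector pixel px
    capture_and_detect()   -- grab a camera frame, return the detected feature (camera px)
    cam_err_to_proj(d_cam) -- map a small camera-space offset into projector pixels,
                              e.g. via the calibrated projector-camera mapping
    """
    current = np.asarray(init_proj_px, dtype=float)
    for _ in range(max_iter):
        project(current)
        observed = np.asarray(capture_and_detect(), dtype=float)
        err_cam = np.asarray(target_cam_px, dtype=float) - observed  # misalignment seen in the image
        if np.linalg.norm(err_cam) < tol:            # converged: augmentation is registered
            break
        current = current + gain * cam_err_to_proj(err_cam)          # damped feedback update
    return current
```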
Citations: 5
[POSTER] Deformation Estimation of Elastic Bodies Using Multiple Silhouette Images for Endoscopic Image Augmentation
Pub Date : 2015-09-29 DOI: 10.1109/ISMAR.2015.49
Akira Saito, M. Nakao, Yuuki Uranishi, T. Matsuda
This study proposes a method to estimate elastic deformation using silhouettes obtained from multiple endoscopic images. Our method can estimate the intraoperative deformation of organs using a volumetric mesh model reconstructed from preoperative CT data. We use the silhouette information of the elastic bodies not to model their shape but to estimate local displacements. The model shape is updated to satisfy the silhouette constraint while preserving the shape as much as possible. The experimental results showed that the proposed method could estimate the deformation with root mean square (RMS) errors of 5.0–10 mm.
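A toy, heavily simplified sketch of the silhouette-constraint idea: solve for per-vertex displacements that move silhouette vertices toward observed 2D silhouette points while a Laplacian term preserves the mesh shape. It assumes an orthographic projection and known silhouette-vertex correspondences, which the paper's actual formulation does not require.

```python
import numpy as np

def estimate_displacements(V, L, sil_idx, sil_xy, lam=10.0):
    """Toy silhouette-constrained deformation (orthographic projection assumed).

    V       (n,3) rest vertex positions from the preoperative mesh
    L       (n,n) graph Laplacian of the mesh (shape-preservation term)
    sil_idx (m,)  indices of vertices assumed to lie on the silhouette
    sil_xy  (m,2) observed 2D silhouette targets for those vertices
    """
    n = V.shape[0]
    # Unknown displacements d are flattened as (3n,): [d0x, d0y, d0z, d1x, ...].
    A_smooth = np.kron(L, np.eye(3))                 # minimizes ||L d||^2 per coordinate
    b_smooth = np.zeros(3 * n)
    rows, targets = [], []
    for i, (x, y) in zip(sil_idx, sil_xy):           # silhouette term: (v_i + d_i).xy ~ (x, y)
        for axis, t in ((0, x), (1, y)):
            r = np.zeros(3 * n)
            r[3 * i + axis] = 1.0
            rows.append(r)
            targets.append(t - V[i, axis])           # required displacement along that axis
    A_fit = np.sqrt(lam) * np.array(rows)
    b_fit = np.sqrt(lam) * np.array(targets)
    d, *_ = np.linalg.lstsq(np.vstack([A_smooth, A_fit]),
                            np.concatenate([b_smooth, b_fit]), rcond=None)
    return V + d.reshape(n, 3)                       # deformed vertex positions
```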
Citations: 20
Auditory and Visio-Temporal Distance Coding for 3-Dimensional Perception in Medical Augmented Reality
Pub Date : 2015-09-29 DOI: 10.1109/ISMAR.2015.16
F. Bork, B. Fuerst, Anja-Katharina Schneider, Francisco Pinto, C. Graumann, Nassir Navab
Image-guided medical interventions increasingly rely on Augmented Reality (AR) visualization to enable surgical navigation. Current systems use 2-D monitors to present the view from external cameras, which does not provide an ideal perception of the 3-D position of the region of interest. Despite this problem, most research targets the direct overlay of diagnostic imaging data, and only a few studies attempt to improve the perception of occluded structures in external camera views. The focus of this paper lies on improving the 3-D perception of an augmented external camera view by combining auditory and visual stimuli in a dynamic multi-sensory AR environment for medical applications. Our approach is based on Temporal Distance Coding (TDC) and an active surgical tool that interacts with occluded virtual objects of interest in the scene in order to gain an improved perception of their 3-D location. Users performed a simulated needle biopsy by targeting virtual lesions rendered inside a patient phantom. Experimental results demonstrate that our TDC-based visualization technique significantly improves localization accuracy, while the addition of auditory feedback results in increased intuitiveness and faster completion of the task.
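The abstract does not spell out the encoding itself; as a hedged illustration, one common way to realize a temporal distance code is to pulse a visual highlight and an audio cue faster as the tool approaches the target. The distance bounds and linear mapping below are invented for the example, not parameters from the paper.

```python
import numpy as np

def pulse_period(distance_mm, d_near=5.0, d_far=150.0,
                 period_near=0.1, period_far=1.0):
    """Map tool-to-target distance to a pulse period in seconds: the closer the
    tool, the faster the visual/auditory pulses. All constants are illustrative
    assumptions, not values from the paper."""
    d = np.clip(distance_mm, d_near, d_far)
    t = (d - d_near) / (d_far - d_near)   # 0 at the target, 1 far away
    return period_near + t * (period_far - period_near)

# Example: a tool 20 mm from the virtual lesion pulses roughly every 0.19 s.
print(round(pulse_period(20.0), 3))
```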
Citations: 27
[POSTER] Improved SPAAM Robustness through Stereo Calibration
Pub Date : 2015-09-29 DOI: 10.1109/ISMAR.2015.64
Kenneth R. Moser, J. Swan
We are investigating methods for improving the robustness and consistency of the Single Point Active Alignment Method (SPAAM) calibration procedure for optical see-through (OST) head-mounted displays (HMDs). Our investigation focuses on two variants of SPAAM. The first utilizes a standard monocular alignment strategy to calibrate the left and right eye separately, while the second leverages stereoscopic cues available from binocular HMDs to calibrate both eyes simultaneously. We compare results from repeated calibrations between the methods using eye location estimates and interpupillary distance (IPD) measures. Our findings indicate that the stereo SPAAM method produces more accurate and consistent results during calibration compared to the monocular variant.
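SPAAM-style calibration ultimately reduces to estimating a 3x4 projection matrix from user-collected 3D-2D screen alignments. The sketch below shows only the standard per-eye DLT solve that underlies the monocular variant; the stereo coupling investigated in the paper is not reproduced here.

```python
import numpy as np

def dlt_projection(world_pts, screen_pts):
    """Estimate a 3x4 projection matrix from at least 6 correspondences between
    3D points (world_pts, shape (n,3)) and 2D screen alignments (screen_pts,
    shape (n,2)) using the direct linear transform."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, screen_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)   # right null-space vector, defined up to scale
```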
Citations: 4
[POSTER] Tracking and Mapping with a Swarm of Heterogeneous Clients
Pub Date : 2015-09-29 DOI: 10.1109/ISMAR.2015.40
Philipp Fleck, Clemens Arth, Christian Pirchheim, D. Schmalstieg
In this work, we propose a multi-user system for tracking and mapping, which accommodates mobile clients with different capabilities, mediated by a server capable of providing real-time structure from motion. Clients share their observations of the scene according to their individual capabilities. This can involve only keyframe tracking, but also mapping and map densification, if more computational resources are available. Our contribution is a system architecture that lets heterogeneous clients contribute to a collaborative mapping effort, without prescribing fixed capabilities for the client devices. We investigate the implications that the clients' capabilities have on the collaborative reconstruction effort and its use for AR applications.
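As a rough sketch of such an architecture, the snippet below shows how a server might assign tracking, mapping, and densification roles based on capabilities advertised by each client. The field names and assignment policy are assumptions for illustration, not the system's actual protocol.

```python
from dataclasses import dataclass

@dataclass
class ClientCapabilities:
    # Advertised by each client on connect; field names are illustrative only.
    client_id: str
    can_track: bool = True        # keyframe tracking against the shared map
    can_map: bool = False         # contribute new keyframes / map points
    can_densify: bool = False     # run map densification on its keyframes
    compute_budget: float = 1.0   # relative CPU/GPU budget

def assign_roles(clients):
    """Every tracking-capable client tracks, mapping-capable clients also map,
    and the strongest densification-capable client densifies the shared map."""
    roles = {c.client_id: ["track"] for c in clients if c.can_track}
    for c in clients:
        if c.can_map:
            roles.setdefault(c.client_id, []).append("map")
    densifiers = [c for c in clients if c.can_densify]
    if densifiers:
        best = max(densifiers, key=lambda c: c.compute_budget)
        roles.setdefault(best.client_id, []).append("densify")
    return roles

print(assign_roles([ClientCapabilities("phone"),
                    ClientCapabilities("tablet", can_map=True),
                    ClientCapabilities("laptop", can_map=True, can_densify=True,
                                       compute_budget=4.0)]))
```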
Citations: 2
[POSTER] Pseudo Printed Fabrics through Projection Mapping
Pub Date : 2015-09-29 DOI: 10.1109/ISMAR.2015.51
Yuichiro Fujimoto, Goshiro Yamamoto, Takafumi Taketomi, C. Sandor, H. Kato
Projection-based Augmented Reality commonly projects on rigid objects, while only a few systems project on deformable objects. In this paper, we present Pseudo Printed Fabrics (PPF), which enables projection on a deforming piece of cloth. This can be applied to previewing a cloth design while manipulating its shape. We support challenging manipulations, including heavy occlusions and stretching of the cloth. In previous work, we developed a similar system based on a novel marker pattern; PPF extends it in two important aspects. First, we improved performance by two orders of magnitude, achieving interactive speed. Second, we developed a new interpolation algorithm to keep registration during challenging manipulations. We believe that PPF can be applied to domains including virtual try-on and fashion design.
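The interpolation algorithm itself is not described in the abstract; as a stand-in, the sketch below fills in marker positions that are occluded in the camera image by interpolating from visible neighbours over the cloth's flat design-space coordinates, using SciPy.

```python
import numpy as np
from scipy.interpolate import griddata

def fill_occluded_markers(grid_uv, detected_xy):
    """grid_uv: (n,2) marker coordinates on the flat cloth (design space).
    detected_xy: (n,2) detected image positions, NaN rows where occluded.
    Occluded markers are interpolated from visible neighbours in design space."""
    grid_uv = np.asarray(grid_uv, dtype=float)
    detected_xy = np.asarray(detected_xy, dtype=float)
    visible = ~np.isnan(detected_xy).any(axis=1)
    missing = ~visible
    filled = detected_xy.copy()
    if missing.any():
        for col in range(2):   # interpolate x and y image coordinates separately
            est = griddata(grid_uv[visible], detected_xy[visible, col],
                           grid_uv[missing], method="linear")
            outside = np.isnan(est)       # outside the convex hull of visible markers
            if outside.any():
                est[outside] = griddata(grid_uv[visible], detected_xy[visible, col],
                                        grid_uv[missing][outside], method="nearest")
            filled[missing, col] = est
    return filled
```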
Citations: 3
[POSTER] Remote Welding Robot Manipulation Using Multi-view Images
Pub Date : 2015-09-29 DOI: 10.1109/ISMAR.2015.38
Yuichi Hiroi, Kei Obata, Katsuhiro Suzuki, Naoto Ienaga, M. Sugimoto, H. Saito, Tadashi Takamaru
This paper proposes a remote welding robot manipulation system that uses multi-view images. After an operator specifies a two-dimensional path on the images, the system transforms it into a three-dimensional path and displays the movement of the robot by overlaying graphics on the images. The accuracy of our system is sufficient to weld objects when combined with a sensor in the robot. The system allows a non-expert operator to weld objects remotely and intuitively, without the need to create a 3D model of the processed object beforehand.
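Lifting a path drawn on two calibrated views into 3D can be done by linear triangulation; the sketch below uses OpenCV's triangulatePoints and assumes the two projection matrices and a point-to-point correspondence between the drawn paths are given. It illustrates the general technique, not necessarily the paper's exact pipeline.

```python
import numpy as np
import cv2

def path_2d_to_3d(P1, P2, path_view1, path_view2):
    """Triangulate a welding path specified in two calibrated views.

    P1, P2        : 3x4 projection matrices of the two cameras
    path_view1/2  : (n,2) corresponding path points in each image
    Returns (n,3) points in the calibration (robot) frame.
    """
    pts1 = np.asarray(path_view1, dtype=float).T    # (2, n), as OpenCV expects
    pts2 = np.asarray(path_view2, dtype=float).T
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous 4xn result
    return (X_h[:3] / X_h[3]).T                      # dehomogenize to (n,3)
```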
Citations: 3
[POSTER] Geometric Mapping for Color Compensation Using Scene Adaptive Patches
Pub Date : 2015-09-29 DOI: 10.1109/ISMAR.2015.67
Jong Hun Lee, Yong Hwi Kim, Yong Yi Lee, Kwan H. Lee
The SAR technique using a projector-camera system allows us to create various effects on a real scene without physical reconstruction. In order to project content on a textured scene without color imperfections, geometric and radiometric compensation of the projection image should be conducted as preprocessing. In this paper, we present a new geometric mapping method for color compensation in a projector-camera system. We capture the scene and segment it into adaptive patches according to the scene structure using SLIC segmentation. A piecewise polynomial function is estimated for each patch to find pixel-to-pixel correspondences between the measured and projection images. Finally, color compensation is performed by using a color mixing matrix. Experimental results show that our geometric mapping method establishes accurate correspondences, and the color compensation alleviates the color imperfections caused by the texture of a general scene.
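The sketch below illustrates the two ingredients named above under simplifying assumptions: a second-order polynomial mapping from projector to camera pixels fitted per patch, and a linear color-mixing-matrix compensation per pixel. The polynomial degree and the mixing model are assumptions, not the paper's exact formulation.

```python
import numpy as np

def fit_patch_mapping(proj_uv, cam_uv):
    """Least-squares fit of a 2nd-order 2D polynomial that maps projector pixels
    to camera pixels for one patch. proj_uv, cam_uv: (n,2) correspondences."""
    u, v = proj_uv[:, 0], proj_uv[:, 1]
    A = np.stack([np.ones_like(u), u, v, u * v, u**2, v**2], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, cam_uv, rcond=None)   # (6,2), one column per output coord
    return coeffs

def compensate_color(desired_rgb, mixing_matrix, ambient_rgb):
    """Assume the camera observes V @ p + a for projector input p; invert the
    mixing matrix V to get the input that yields the desired colour, then clip
    to the projector gamut."""
    p = np.linalg.solve(mixing_matrix,
                        np.asarray(desired_rgb, dtype=float) - np.asarray(ambient_rgb, dtype=float))
    return np.clip(p, 0.0, 1.0)
```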
Citations: 3
[POSTER] Avatar-Mediated Contact Interaction between Remote Users for Social Telepresence
Pub Date : 2015-09-29 DOI: 10.1109/ISMAR.2015.61
Jihye Oh, Yeonjoon Kim, Taeil Jin, Sukwon Lee, Youjin Lee, Sung-Hee Lee
Social touch such as a handshake increases the sense of coexistence and closeness between remote users in a social telepresence environment, but creating such coordinated contact movements with a distant person is extremely difficult given only visual feedback, without haptic feedback. This paper presents a method to enable hand-contact interaction between remote users in an avatar-mediated telepresence environment. The key idea is that, while the avatar directly follows its owner's motion under normal conditions, it adjusts its pose to maintain contact with the other user when the two users attempt contact interaction. To this end, we develop classifiers to recognize the users' intention for contact interaction. The contact classifier identifies whether the users try to initiate contact when they are not in contact, and the separation classifier identifies whether the two users in contact attempt to break contact. The classifiers are trained on a set of geometric distance features. During the contact phase, inverse kinematics is solved to determine the pose of the avatar's arm so as to initiate and maintain natural contact with the other user's hand. Our system is unique in that two remote users can perform real-time hand-contact interaction in a social telepresence environment.
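As an illustrative sketch of the classification step, the snippet below builds geometric distance features between the two users' hands and trains a binary contact-intention classifier with scikit-learn; the feature set and classifier choice are assumptions, not the paper's.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def distance_features(palm_a, palm_b, fingertips_a, fingertips_b):
    """Geometric distance features between two hands: palm-to-palm distance plus
    each fingertip's distance to the opposite palm (illustrative feature set)."""
    feats = [np.linalg.norm(palm_a - palm_b)]
    feats += [np.linalg.norm(f - palm_b) for f in fingertips_a]
    feats += [np.linalg.norm(f - palm_a) for f in fingertips_b]
    return np.array(feats)

def train_contact_classifier(X, y):
    """X: (n_samples, n_features) distance features; y: 1 = intends contact, 0 = not."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return clf.fit(X, y)
```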
Citations: 0
[POSTER] Augmented Reality for Radiation Awareness
Pub Date : 2015-09-01 DOI: 10.1109/ISMAR.2015.21
Nicola Leucht, S. Habert, P. Wucherer, S. Weidert, Nassir Navab, P. Fallavollita
C-arm fluoroscopes are frequently used during surgeries for intraoperative guidance. Unfortunately, due to X-ray emission and scattering, increased radiation exposure occurs in the operating theatre. The objective of this work is to sensitize surgeons to their radiation exposure, enable them to check their exposure over time, and help them choose the best position relative to the C-arm gantry during surgery. First, we simulate the amount of radiation that reaches the surgeon using the Geant4 software, a toolkit developed by CERN. Using a flexible setup in which two RGB-D cameras are mounted on the mobile C-arm, the scene is captured and modeled. After simulating particles with specific energies, the dose at the surgeon's position, determined by the depth cameras, can be measured. Validation was performed by comparing the simulation results to both theoretical values from the C-arm's user manual and real measurements made with a QUART didoSVM dosimeter. The average errors were 16.46% and 16.39%, respectively. The proposed flexible setup and high simulation precision, achieved without calibration against measured dosimeter values, have great potential to be used directly and integrated intraoperatively for dose measurement.
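The reported validation amounts to a relative-error comparison between simulated doses and reference values (manual values or dosimeter readings); a trivial sketch of that computation follows.

```python
import numpy as np

def mean_relative_error(simulated, reference):
    """Mean absolute relative error in percent between simulated doses and
    reference values, as used for the two comparisons reported above."""
    simulated = np.asarray(simulated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(simulated - reference) / reference)
```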
Citations: 5