
Latest Publications from the 2010 IEEE International Symposium on Mixed and Augmented Reality

The effect of out-of-focus blur on visual discomfort when using stereo displays
Pub Date: 2010-11-22 DOI: 10.1109/ISMAR.2010.5643544
T. Blum, M. Wieczorek, A. Aichert, Radhika Tibrewal, Nassir Navab
Visual discomfort is a major problem for head-mounted displays and other stereo displays. One effect known to reduce visual comfort is double vision, which can occur at high disparities. Previous studies suggest that adding artificial out-of-focus blur increases the fusional limits within which the left and right images can be fused without double vision. We investigate the effect of adding artificial out-of-focus blur on visual discomfort using two different setups. One uses a stereo monitor and an eye tracker to change the depth of focus based on the user's gaze. The other uses a video see-through head-mounted display. A study involving 18 subjects showed that viewing comfort with blur is significantly higher in both setups for virtual scenes. However, we cannot confirm beyond doubt that the higher viewing comfort is related only to an increase of the fusional limits, as many subjects reported that double vision did not occur during the experiment. Results for additional photographed images shown to the subjects were less significant. A first prototype of an AR system that extracts a depth map from stereo images and adds artificial out-of-focus blur is presented.
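The gaze-contingent rendering described above can be sketched as a depth-dependent blend: each pixel is pushed toward a blurred copy of the image in proportion to its depth distance from the fixated depth. This is an illustrative reconstruction, not the authors' implementation; the function names, the box-blur kernel, and the `strength` gain are assumptions.

```python
import numpy as np

def box_blur(img, radius):
    """Mean filter over a (2*radius+1)^2 window, edge-padded."""
    pad = np.pad(img, radius, mode="edge")
    acc = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            acc += pad[dy:dy + h, dx:dx + w]
    return acc / (2 * radius + 1) ** 2

def depth_of_field_blur(img, depth, focus_depth, strength=4.0, radius=3):
    """Blend each pixel toward a blurred copy of the image; the blend
    weight grows with the pixel's depth distance from the fixated depth."""
    blurred = box_blur(img, radius)
    weight = np.clip(np.abs(depth - focus_depth) * strength, 0.0, 1.0)
    return (1.0 - weight) * img + weight * blurred
```

In a gaze-contingent setup, `focus_depth` would be read from the depth map at the eye tracker's current fixation point each frame.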
Citations: 40
Experiences with an AR evaluation test bed: Presence, performance, and physiological measurement
Pub Date: 2010-11-22 DOI: 10.1109/ISMAR.2010.5643560
Maribeth Gandy Coleman, R. Catrambone, B. MacIntyre, Chris Alvarez, Elsa Eiríksdóttir, M. Hilimire, Brian Davidson, A. McLaughlin
This paper discusses an experiment carried out in an AR test bed called "the pit". Inspired by the well-known VR acrophobia study of Meehan et al. [18], the experimental goals were to explore whether VR presence instruments were useful in AR (and to modify them where appropriate), to compare additional measures to these well-researched techniques, and to determine whether findings from VR evaluations can be transferred to AR. An experimental protocol appropriate for AR was developed. The initial experimental findings concern varying immersion factors (frame rate) and their effect on feelings of presence, user performance, and behavior. Unlike the VR study, which found differing frame rates to affect presence measures, there were few differences among the five frame-rate modes in our study as measured by the qualitative and quantitative instruments, which included physiological responses, a custom presence questionnaire, task performance, and user behavior. The AR presence questionnaire indicated that users experienced a high feeling of presence in all frame-rate modes. Behavior, performance, and interview results indicated the participants felt anxiety in the pit environment. However, the physiological data did not reflect this anxiety, due to factors of user experience and experiment design. Efforts to develop a useful AR test bed and to identify results from a large data set have produced a body of knowledge related to AR evaluation that can inform others seeking to create AR experiments.
Citations: 38
Augmentation of check in/out model for remote collaboration with Mixed Reality
Pub Date: 2010-11-22 DOI: 10.1109/ISMAR.2010.5643588
G. Kamei, Takeshi Matsuyama, Ken-ichi Okada
This paper proposes an augmentation of the check in/out model for remote collaboration with Mixed Reality (MR). We add a 3D shared space and a private space to the real workspace using MR technology, and extend the check in/out model to remote collaboration. With our proposal, users can intuitively receive a remote partner's work via virtual objects in the shared space, and can stop sharing information about an object simply by moving it into the private space when they do not want to share it. We implement a system that realizes our proposal and evaluate it.
Citations: 2
Augmented telepresence using autopilot airship and omni-directional camera
Pub Date: 2010-11-22 DOI: 10.1109/ISMAR.2010.5643596
Fumio Okura, M. Kanbara, N. Yokoya
This study concerns a large-scale telepresence system based on remote control of a mobile robot or aerial vehicle. The proposed system provides the user not only with a view of the remote site but also with related information rendered by AR techniques. Such systems are referred to as augmented telepresence in this paper. Aerial imagery can capture a wider area at once than images captured from the ground. However, it is difficult for a user to change the position and direction of the viewpoint freely because of the difficulty of remote control and the limitations of the hardware. To overcome these problems, the proposed system uses an autopilot airship to support changing the user's viewpoint and employs an omni-directional camera so that the viewing direction can be changed easily. This paper describes the hardware configuration for aerial imagery, an approach for overlaying virtual objects, and automatic control of the airship, as well as experimental results obtained with a prototype system.
Citations: 21
A Web Service Platform dedicated to building mixed reality solutions
Pub Date: 2010-11-22 DOI: 10.1109/ISMAR.2010.5643619
P. Belimpasakis, Petri Selonen, Yu You
While many attempts have been made to create mixed reality platforms for mobile client devices, there have not been any significant efforts on the server/infrastructure side. We demonstrate our Mixed Reality Web Service Platform (MRS-WS) [2], dedicated to enabling rapid creation of mixed reality solutions, whether desktop or mobile. Focusing on common interfaces and functions across user-generated and commercial geo-content, we provide an appealing developer offering, which we are currently evaluating with a closed set of university partners. Our plan is to gradually expand developer API access to more partners before deciding whether it is ready for fully public developer access.
Citations: 2
Foldable augmented maps
Pub Date: 2010-11-22 DOI: 10.1109/ISMAR.2010.5643552
Sandy Martedi, Hideaki Uchiyama, G. Enriquez, H. Saito, Tsutomu Miyashita, Takenori Hara
This paper presents folded-surface detection and tracking for augmented maps. For detection, plane detection is applied iteratively to 2D correspondences between an input image and a reference plane, because the folded surface is composed of multiple planes. To compute the exact folding line from the detected planes, the intersection line of the planes is computed from their positional relationship. After detection, each plane is tracked individually with frame-by-frame descriptor updates. For a natural augmentation on the folded surface, we overlay virtual geographic data on each detected plane. The user can interact with the geographic data by finger pointing, because the user's fingertip is also detected during tracking. As usage scenarios, several interactions on the folded surface are introduced. Experimental results on the accuracy and performance of folded-surface detection demonstrate the effectiveness of our approach.
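The folding-line step above, computing the intersection line of two detected planes, can be sketched as follows. The plane parameterization n·x = d and the solver are illustrative assumptions, not the authors' code.

```python
import numpy as np

def folding_line(n1, d1, n2, d2):
    """Intersection line of two planes n·x = d, returned as
    (point_on_line, unit_direction)."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)          # line lies in both planes
    norm = np.linalg.norm(direction)
    if norm < 1e-9:
        raise ValueError("planes are parallel; no unique folding line")
    direction /= norm
    # Pick the point satisfying both plane equations that is closest
    # to the origin (zero component along the line direction).
    A = np.vstack([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction
```

For example, the planes z = 1 and x = 2 fold along the vertical line through (2, 0, 1) running in the y direction.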
Citations: 15
KHARMA: An open KML/HTML architecture for mobile augmented reality applications
Pub Date: 2010-11-22 DOI: 10.1109/ISMAR.2010.5643583
A. Hill, B. MacIntyre, Maribeth Gandy Coleman, Brian Davidson, Hafez Rouzati
Widespread future adoption of augmented reality technology will rely on a broadly accessible standard for authoring and distributing content with, at a minimum, the flexibility and interactivity provided by current web authoring technologies. We introduce KHARMA, an open architecture based on KML for geospatial and relative referencing, combined with HTML, JavaScript, and CSS technologies for content development and delivery. This architecture uses lightweight representations that decouple infrastructure and tracking sources from authoring and content delivery. Our main contribution is a re-conceptualization of KML that turns HTML content formerly confined to balloons into first-class elements in the scene. We introduce the KARML extension, which gives authors increased control over the presentation of HTML content and its spatial relationship to other content.
Citations: 57
Accurate real-time tracking using mutual information
Pub Date: 2010-11-22 DOI: 10.1109/ISMAR.2010.5643550
Amaury Dame, É. Marchand
In this paper we present a direct tracking approach that uses Mutual Information (MI) as the alignment metric. The proposed approach is robust, runs in real time, and gives an accurate estimate of the displacement, which makes it suitable for augmented reality applications. MI is a measure of the quantity of information shared by two signals and has been widely used in medical applications. Although MI enables robust alignment under illumination changes, multi-modality, and partial occlusions, few works have proposed MI-based approaches to object tracking in image sequences, due to optimization problems.
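The alignment metric itself can be illustrated with a minimal histogram-based estimate of mutual information between two grayscale patches. This is a plain-NumPy sketch with an arbitrary bin count; the paper's optimization scheme over this metric is not reproduced here.

```python
import numpy as np

def mutual_information(img1, img2, bins=32):
    """Estimate MI between two grayscale images from their joint
    intensity histogram: MI = sum p(x,y) * log(p(x,y) / (p(x)p(y)))."""
    hist_2d, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()     # joint distribution p(x, y)
    px = pxy.sum(axis=1)              # marginal p(x)
    py = pxy.sum(axis=0)              # marginal p(y)
    px_py = np.outer(px, py)          # product of marginals
    nz = pxy > 0                      # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / px_py[nz])))
```

A tracker would maximize this quantity over warp parameters: an image aligned with itself yields high MI, while a misaligned or statistically independent pair yields a value near zero.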
Citations: 96
AR Shooter: An augmented reality shooting game system
Pub Date: 2010-11-22 DOI: 10.1109/ISMAR.2010.5643620
D. Weng, D. Li, W. Xu, Y. Liu, Y. Wang
Our system is specially designed around augmented reality technologies and has significant commercial potential. It will be installed in a theme park next year.
Citations: 6
Large area indoor tracking for industrial augmented reality
Pub Date: 2010-11-22 DOI: 10.1109/ISMAR.2010.5643601
Fabian Scheer, S. Müller
Precise tracking with minimal setup time, minimal changes to the environment, and acceptable cost, satisfying industrial demands in large factory buildings, is still a challenging task for augmented reality (AR) applications. We present a system to determine the pose for monitor-based AR systems in large indoor environments, e.g. 200 × 200 meters and more. An infrared laser detects retroreflective targets and computes a 2D position and orientation based on a preprocessed map of the targets. From this information, the 6D pose of a video camera attached to a servo motor, which is in turn mounted on a mobile cart, is obtained by identifying the transformation between the laser scanner and the several adjustable views of the camera through a calibration method. The servo motor is restricted to a discrete number of steps to limit the calibration effort. The positional accuracy of the system is estimated by error propagation and presented.
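The 2D position-and-orientation step, aligning observed retroreflective targets against the preprocessed map, can be illustrated with a least-squares rigid fit (2D Kabsch/Procrustes). The abstract does not specify the estimator used, so this formulation is an assumption for illustration only.

```python
import numpy as np

def estimate_pose_2d(observed, mapped):
    """Least-squares 2D rigid transform (R, t) aligning observed target
    points to their map positions: mapped ≈ R @ observed + t."""
    obs = np.asarray(observed, float)
    ref = np.asarray(mapped, float)
    co, cr = obs.mean(axis=0), ref.mean(axis=0)
    H = (obs - co).T @ (ref - cr)                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = cr - R @ co
    theta = np.arctan2(R[1, 0], R[0, 0])            # scanner heading
    return R, t, theta
```

With noiseless correspondences the fit is exact; in practice the residuals of this fit would feed the error propagation mentioned in the abstract.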
Citations: 4