
International Symposium on Mixed and Augmented Reality (ISMAR) [proceedings]. IEEE and ACM International Symposium on Mixed and Augmented Reality: Latest Publications

Designing support for collaboration around physical artefacts: Using augmented reality in learning environments
Jason Weigel, Stephen Viller, M. Schulz
The aim of this thesis is to identify mechanisms for supporting collaboration around physical artefacts in co-located and remote settings. To explore this research question, a Research through Design approach has been adopted. A technology probe — an evolutionary prototype of a remote collaboration system — will be used to drive the research. The prototype will facilitate collaboration between small groups around physical artefacts in an augmented learning environment, and will inform future collaborative augmented reality technology design.
DOI: 10.1109/ISMAR.2014.6948507 (published 2014-01-01)
Citations: 6
Ego- and Exocentric interaction for mobile AR conferencing
Timo Bleeker, Gun A. Lee, M. Billinghurst
In this research we explore how a handheld display (HHD) can be used to provide input to an Augmented Reality (AR) conferencing application shown on a head mounted display (HMD). Although AR has successfully been used for many collaborative applications, there has been little research on using an HHD and HMD together to enhance remote conferencing. This research investigates two different HHD interfaces and methods for supporting file sharing in an AR conferencing application. A formal evaluation compared four conditions and found that an exocentric view combined with visual cues for requesting content produced the best performance. The results were used to create a set of basic design guidelines for future research and application development.
DOI: 10.1109/ISMAR.2013.6671823 (published 2013-12-23)
Citations: 7
A projected augmented reality system for remote collaboration
Matthew Tait, Tony Tsai, Nobuchika Sakata, M. Billinghurst, E. Vartiainen
This paper describes an AR system for remote collaboration using a captured 3D model of the local user's scene. In the system a remote user can manipulate the scene independently of the view of the local user and add AR annotations that appear projected into the real world. Results from a pilot study and the design of a further full study are presented.
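At its core, placing a remote user's annotation so that it appears projected into the local scene reduces to transforming a 3D point from the shared model frame into projector pixels. The pinhole-projection sketch below is an illustrative assumption about how such a mapping could work; the function name and parameters (`R`, `t`, `K`) are hypothetical and not details of the authors' system.

```python
import numpy as np

def project_annotation(point_world, R, t, K):
    """Project a 3D annotation (shared model/world coordinates) into
    projector pixel coordinates using a pinhole camera model.

    R, t: projector pose (world frame -> projector camera frame)
    K:    3x3 projector intrinsics matrix
    These names and the pinhole model are illustrative assumptions.
    """
    p_cam = R @ np.asarray(point_world, dtype=float) + t
    if p_cam[2] <= 0:
        return None  # point lies behind the projector; cannot be displayed
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]  # perspective divide -> pixel coordinates
```

A local system would run this per frame for each annotation, so the overlay stays registered as the captured 3D model and projector pose are updated.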
DOI: 10.1109/ISMAR.2013.6671838 (published 2013-12-23)
Citations: 8
Improved outdoor augmented reality through "Globalization"
Chris Sweeney, Tobias Höllerer, M. Turk
Despite major interest in live tracking and mapping (e.g., SLAM), the field of augmented reality has yet to truly make use of the rich data provided by large-scale reconstructions generated by structure from motion (SfM). This dissertation focuses on extensible tracking and mapping for large-scale reconstructions, enabling SfM and SLAM to operate cooperatively and mutually enhance performance. We describe a multi-user, collaborative augmented reality system that will collectively extend and enhance reconstructions of urban environments at city scale. In contrast to current outdoor augmented reality systems, this system is capable of continuous tracking through previously modeled areas as well as new, undiscovered areas. Further, we describe a new process called globalization that propagates new visual information back to the global model. Globalization allows continuous updating of the 3D models with visual data from live users, providing data to fill the coverage gaps that are common in 3D reconstructions and to provide the most current view of an environment as it changes over time. The proposed research is a crucial step toward enabling users to augment urban environments with location-specific information at any location in the world, for a truly global augmented reality.
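The globalization step propagates a live user's new visual information back into the global model, both extending coverage and refreshing stale geometry. A minimal sketch of that bookkeeping follows; the `GlobalModel` and `Landmark` classes and the id-keyed point format are illustrative assumptions, not the authors' actual SfM data structures.

```python
from dataclasses import dataclass, field

@dataclass
class Landmark:
    position: tuple   # 3D point in the global reconstruction frame
    last_seen: float  # timestamp of the most recent observation

@dataclass
class GlobalModel:
    landmarks: dict = field(default_factory=dict)  # landmark id -> Landmark

    def globalize(self, keyframe_points, timestamp):
        """Propagate a live user's keyframe back into the global model.

        Unknown landmark ids extend coverage (filling gaps in the
        reconstruction); known ids are refreshed so the model reflects
        the most current view of the environment.
        Returns (number added, number refreshed).
        """
        added, refreshed = 0, 0
        for lid, pos in keyframe_points.items():
            if lid in self.landmarks:
                self.landmarks[lid].position = pos
                self.landmarks[lid].last_seen = timestamp
                refreshed += 1
            else:
                self.landmarks[lid] = Landmark(pos, timestamp)
                added += 1
        return added, refreshed
```

A real pipeline would also re-triangulate and bundle-adjust the merged observations; this sketch only shows the propagation direction from live SLAM sessions into the shared global model.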
DOI: 10.1109/ISMAR.2013.6671820 (published 2013-12-23)
Citations: 4
Motion capturing empowered interaction with a virtual agent in an Augmented Reality environment
Ionut Damian, René Bühling, Felix Kistler, M. Billinghurst, M. Obaid, E. André
We present an Augmented Reality (AR) system where we immerse the user's whole body in the virtual scene using a motion capturing (MoCap) suit. The goal is to allow for seamless interaction with the virtual content within the AR environment. We describe an evaluation study of a prototype application featuring an interactive scenario with a virtual agent. The scenario contains two conditions: in one, the agent has access to the full tracking data of the MoCap suit and therefore is aware of the exact actions of the user, while in the second condition, the agent does not get this information. We then report and discuss the differences we were able to detect regarding the users' perception of the interaction with the agent and give future research directions.
DOI: 10.1109/ISMAR.2013.6671830 (published 2013-12-23)
Citations: 7
Filling the gaps: Hybrid vision and inertial tracking
Ky Waegel, Frederick P. Brooks
Existing head-tracking systems all suffer from various limitations, such as latency, cost, accuracy, or drift. I propose to address these limitations by using depth cameras and existing 3D reconstruction algorithms to simultaneously localize the camera position and build a map of the environment, providing stable and drift-free tracking. This method is enabled by the recent proliferation of light-weight, inexpensive depth cameras. Because these cameras have a relatively slow frame rate, I combine this technique with a low-latency inertial measurement unit to estimate movement between frames. Using the generated environment model, I further propose a collision avoidance system for use with real walking.
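The hybrid scheme described above, where slow but drift-free camera pose fixes are bridged by fast IMU integration, resembles a complementary filter. The position-only tracker below is a hypothetical simplification (fixed blend weight `alpha`, no orientation or bias estimation), offered as a sketch of the fusion idea rather than the author's implementation.

```python
import numpy as np

class HybridTracker:
    """Dead-reckon with IMU samples between slow depth-camera pose fixes.

    Illustrative sketch: position-only, constant blend weight. A full
    system would also fuse orientation and estimate IMU biases.
    """

    def __init__(self, alpha=0.9):
        self.pos = np.zeros(3)
        self.vel = np.zeros(3)
        self.alpha = alpha  # weight given to the camera fix when it arrives

    def imu_update(self, accel, dt):
        # Integrate acceleration to fill the gap until the next camera frame.
        self.vel += np.asarray(accel, dtype=float) * dt
        self.pos += self.vel * dt

    def camera_update(self, cam_pos):
        # Blend the drift-free (but low-rate) camera pose with the
        # IMU-integrated estimate, pulling accumulated drift back to zero.
        self.pos = self.alpha * np.asarray(cam_pos, dtype=float) \
                   + (1 - self.alpha) * self.pos
```

Between camera frames the IMU path accumulates drift from double integration; each `camera_update` then corrects most of that drift, which is the "filling the gaps" behavior the abstract describes.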
DOI: 10.1109/ISMAR.2013.6671821 (published 2013-12-23)
Citations: 5
Region-based tracking using sequences of relevance measures
Sandy Martedi, B. Thomas, H. Saito
We present preliminary results for our proposal: a region-based method for detecting and tracking arbitrary shapes. The method is designed to be robust to orientation and scale changes as well as occlusion. In this work, we study the effectiveness of sequences of shape descriptors for matching. We detect and track surfaces by matching sequences of descriptors, called relevance measures, against their correspondences in a database. First, we extract stable shapes as the detection target using the Maximally Stable Extremal Regions (MSER) method. The keypoints on the stable shapes are then extracted by simplifying the outline of the stable regions. Relevance measures, each computed from three keypoints, are then combined into sequences to form the descriptors. At runtime, sequences of relevance measures are extracted from the captured image and matched against those in the database. When a particular region is matched with one in the database, the orientation of the region is estimated and virtual annotations can be superimposed. We apply this approach in an interactive task support system that helps users create paper-craft objects.
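The descriptor-sequence matching idea can be sketched with a simple stand-in measure: below, each "relevance measure" is taken to be the normalized area of the triangle formed by three consecutive outline keypoints (the paper's actual measure may differ), and matching minimizes squared distance over all cyclic shifts of the stored sequence. Everything here is an illustrative assumption, not the authors' algorithm.

```python
def triangle_area(p, q, r):
    """Area of the triangle spanned by three 2D keypoints."""
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (r[0] - p[0]) * (q[1] - p[1])) / 2.0

def descriptor(outline):
    """Sequence of measures from consecutive keypoint triples (cyclic).

    Normalizing by the total area makes the sequence scale-invariant.
    """
    n = len(outline)
    areas = [triangle_area(outline[i], outline[(i + 1) % n], outline[(i + 2) % n])
             for i in range(n)]
    total = sum(areas) or 1.0  # guard against degenerate outlines
    return [a / total for a in areas]

def match(desc, database):
    """Return the name of the best database entry, comparing the query
    sequence against every cyclic shift of each stored sequence
    (so the match does not depend on the outline's starting keypoint)."""
    def dist(a, b):
        return min(sum((x - y) ** 2 for x, y in zip(a, b[s:] + b[:s]))
                   for s in range(len(b)))
    return min(database, key=lambda name: dist(desc, database[name]))
```

The same structure carries over to the paper's setting: once a region is matched, its correspondence with the stored keypoints can be used to estimate orientation for annotation overlay.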
DOI: 10.1109/ISMAR.2013.6671834 (published 2013-12-23)
Citations: 6
SIXTH middleware for sensor web enabled AR applications
A. Campbell, Dominic Carr, Levent Görgü, David Lillis, Barnard Kroon, G. O’hare
We increasingly live in a world where sensors have become truly ubiquitous. Many of these sensors are an integral part of devices such as smartphones, which contain sufficient sensors to allow their use as Augmented Reality (AR) devices. This AR experience is limited by the precision and functionality of an individual device's sensors and by its capacity to process the sensor data into a usable form. This paper discusses current work on a mobile version of the SIXTH middleware, which allows for the creation of Sensor Web enabled AR applications. This work involves the creation of a sensor web spanning both Android and non-Android devices, and has led to several small demonstrators that are discussed in this work-in-progress paper. Future work on the project aims to integrate additional devices so as to explore new abilities, such as leveraging additional properties of those devices.
DOI: 10.1109/ISMAR.2013.6671829 (published 2013-12-23)
Citations: 0
A pilot study for Augmented Reality supported procedure guidance to operate payload racks on-board the International Space Station
Daniela Markov-Vetter, O. Staadt
We present the current state of our work in developing and testing Augmented Reality supported spaceflight procedures for intra-vehicular payload activities. Our vision is to enable the ground team and the flight crew to easily author and operate AR guidelines without programming or AR expertise. To visualize the procedural instructions on an HMD, 3D-registered visual aids are overlaid onto the payload model, operated with additional voice control. Embedded informational resources (e.g., images and videos) are provided through a mobile tangible user interface. In a pilot study performed at the ESA European Astronaut Centre by application domain experts, we evaluated performance, workload and acceptance by comparing our AR system with the conventional method of displaying PDF documents of the procedure.
DOI: 10.1109/ISMAR.2013.6671832 (published 2013-12-23)
Citations: 14
Not just augmentation: How to re-make the world?
I. Poupyrev
Summary form only given. From the dawn of the industrial revolution and the establishment of mass production, the world has been fixed and immutable. The objects and devices around us were designed by engineers, manufactured in factories, sold in stores and brought to our homes and offices to serve the purpose they had been made for. Our roles were those of passive consumers. Recently, however, we have come to expect that our objects and environments are interactive, engaging and hackable anytime and anywhere. We are no longer passive consumers. With new rapid prototyping tools, open source software, readily available sensors and microprocessors, novel materials, 3D printers and printed electronics, our world can be hacked, twisted, connected, re-connected and extended with functionality it is not supposed to have. We can make living plants play digital music and human bodies transmit sound; we can build touch screens on water, extend our world by 3D printing whatever we need, create virtual objects that we can feel in free air with bare hands, and produce electrical energy from ordinary paper. Everyday physical objects, both living and artificial, and entire environments can be made interactive, responsive and digital. Never before in our collective history have we as individuals had so much power to re-make the world around us, enhancing it with new experiences and functionalities that educate, delight, entertain and make our lives better in countless ways. And most of them are yet to be invented. In this talk I will look back at the relation of digital technology and the physical world, speculate about its future, and discuss some of the recent explorations that my group and I have been conducting in merging digital computing and physical environments. Some of the topics include new technologies for tactile augmentation, free-air haptics, deformable and compliant computers, ad-hoc sensor augmentation, biologically inspired interfaces, and energy harvesting. The talk will cover both early projects that I conducted at Sony Corporation and current research efforts by the Interaction Group at Disney Research, Pittsburgh.
DOI: 10.1109/ISMAR.2013.6671754 (published 2013-12-23)
Citations: 1