
Latest publications from the 2016 IEEE Symposium on 3D User Interfaces (3DUI)

Collision avoidance in the presence of a virtual agent in small-scale virtual environments
Pub Date : 2016-03-19 DOI: 10.1109/3DUI.2016.7460045
A. Bönsch, B. Weyers, J. Wendt, Sebastian Freitag, T. Kuhlen
Computer-controlled, human-like virtual agents (VAs) are often embedded into immersive virtual environments (IVEs) in order to enliven a scene or to assist users. Certain constraints need to be fulfilled, e.g., a collision avoidance strategy allowing users to maintain their personal space. Violating this flexible protective zone causes discomfort in real-world situations and in IVEs. However, no studies on collision avoidance for small-scale IVEs have been conducted yet. Our goal is to close this gap by presenting the results of a controlled user study in a CAVE. 27 participants were immersed in a small-scale office with the task of reaching the office door. Their way was blocked by either a male or a female VA representing their co-worker. The VA showed different behavioral patterns regarding gaze and locomotion. Our results indicate that participants preferred collaborative collision avoidance: they expect the VA to step aside to gain more space to pass, while being willing to adapt their own walking paths.
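The collaborative avoidance behavior the participants preferred can be pictured with a small sketch: the VA yields when the user's straight-line path to the door would enter its personal-space zone. A minimal illustration in Python; the radius, step size, and function names are assumptions for illustration, not the study's parameters:

```python
import math

# Illustrative parameters, not values from the study.
PERSONAL_SPACE_RADIUS = 0.45  # protective zone around the VA, in metres
SIDE_STEP = 0.6               # how far the VA yields, in metres

def distance_point_to_segment(p, a, b):
    """Shortest distance from point p to segment a-b on the 2D floor plane."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def va_yield_position(va_pos, user_pos, goal_pos):
    """If the user's straight-line path to the goal enters the VA's
    personal space, return a side-stepped VA position; else stay put."""
    if distance_point_to_segment(va_pos, user_pos, goal_pos) >= PERSONAL_SPACE_RADIUS:
        return va_pos
    # Step perpendicular to the user's walking direction to free the path.
    dx, dy = goal_pos[0] - user_pos[0], goal_pos[1] - user_pos[1]
    norm = math.hypot(dx, dy) or 1.0
    return (va_pos[0] - dy / norm * SIDE_STEP, va_pos[1] + dx / norm * SIDE_STEP)
```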
Citations: 27
SharpView: Improved clarity of defocused content on optical see-through head-mounted displays
Pub Date : 2016-03-19 DOI: 10.1109/3DUI.2016.7460049
Kohei Oshima, Kenneth R. Moser, D. Rompapas, J. Swan, Sei Ikeda, Goshiro Yamamoto, Takafumi Taketomi, C. Sandor, H. Kato
Augmented Reality (AR) systems, which utilize optical see-through head-mounted displays, are becoming more commonplace, with several consumer-level options already available and the promise of additional, more advanced devices on the horizon. A common factor among current-generation optical see-through devices, though, is a fixed focal distance to virtual content. While fixed focus is not a concern for video see-through AR, since both virtual and real-world imagery are combined into a single image by the display, unequal distances between real-world objects and the virtual display screen in optical see-through AR are unavoidable. In this work, we investigate the issue of focus blur, in particular the blurring caused by simultaneously viewing virtual content and physical objects in the environment at differing focal distances. We additionally examine the application of dynamic sharpening filters as a straightforward, system-independent means for mitigating this effect and improving the clarity of defocused AR content. We assess the utility of this method, termed SharpView, by employing an adjustment experiment in which users actively apply varying amounts of sharpening to reduce the perception of blur in AR content shown at four focal disparity levels relative to real-world imagery. Our experimental results confirm that dynamic correction schemes are required to adequately address the presence of blur in optical see-through AR. Furthermore, we validate the ability of our SharpView model to improve the perceived visual clarity of focus-blurred content, with optimal performance at focal differences well suited for near-field AR applications.
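As a rough picture of the idea, focal blur can be modeled as a Gaussian point spread function whose width grows with the focal disparity, and compensated by pre-sharpening the rendered content. Below is a minimal unsharp-mask sketch assuming a linear blur model (sigma proportional to disparity); it is a stand-in for, not a reproduction of, SharpView's calibrated filter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen_for_disparity(image, focal_disparity_dpt, k=2.0):
    """Pre-sharpen a grayscale AR frame in proportion to the focal
    disparity (in dioptres) between the display and the real-world
    focus. The linear blur model sigma = k * disparity and the gain
    are illustrative assumptions."""
    img = np.asarray(image, dtype=np.float64)
    sigma = k * abs(focal_disparity_dpt)
    if sigma == 0.0:
        return img                      # no disparity, no correction needed
    blurred = gaussian_filter(img, sigma=sigma)
    return np.clip(img + (img - blurred), 0.0, 255.0)  # gain-1 unsharp mask
```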
Citations: 14
A schematic eye for virtual environments
Pub Date : 2016-03-19 DOI: 10.1109/3DUI.2016.7460055
J. A. Jones, Darlene E. Edewaard, R. Tyrrell, L. Hodges
This paper presents a schematic eye model designed for use by virtual environments researchers and practitioners. This model, based on a combination of several ophthalmic models, attempts to very closely approximate a user's optical centers and interocular separation using as little as a single measurement of pupillary distance (PD). Typically, these parameters are loosely approximated based on the PD of the user while converged to some known distance. However, this may not be sufficient for users to accurately perform spatially sensitive tasks in the near field. We investigate this possibility by comparing the impact of several common PD-based models and our schematic eye model on users' ability to accurately match real and virtual targets in depth. This was done using a specially designed display and robotic positioning apparatus that allowed sub-millimeter measurement of target positions and user responses. We found that the schematic eye model resulted in significantly improved real-to-virtual matches, with average accuracy in some cases well under 1 mm. We also present a novel, low-cost method of accurately measuring PD using an off-the-shelf trial frame and pinhole filters. We validated this method by comparing its measurements against those taken using an ophthalmic autorefractor. Significant differences were not found between the two methods.
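The core geometric correction can be sketched as follows: a PD measured while the eyes converge on a near target underestimates the separation of the eyes' rotation centres, because each pupil swings inward. Assuming a schematic-eye offset of roughly 10.5 mm between the pupil plane and the rotation centre (an approximation from ophthalmic models, not the paper's exact figure), the separation can be recovered by fixed-point iteration:

```python
import math

def interocular_from_near_pd(measured_pd_mm, convergence_dist_mm,
                             pupil_to_rotation_mm=10.5):
    """Recover the separation of the eyes' rotation centres from a PD
    measured while converging on a target at a known distance. Solves
        measured_pd = IOD - 2 * offset * sin(atan((IOD / 2) / distance))
    by fixed-point iteration; the 10.5 mm offset is a schematic-eye
    approximation, not a measured constant."""
    iod = measured_pd_mm                     # initial guess: the measured PD
    for _ in range(20):
        theta = math.atan((iod / 2.0) / convergence_dist_mm)  # per-eye inward rotation
        iod = measured_pd_mm + 2.0 * pupil_to_rotation_mm * math.sin(theta)
    return iod

# Illustrative example: a 61 mm PD measured on a 400 mm near target
# corresponds to roughly 62.6 mm between rotation centres.
print(round(interocular_from_near_pd(61.0, 400.0), 2))
```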
Citations: 18
Automated path prediction for redirected walking using navigation meshes
Pub Date : 2016-03-19 DOI: 10.1109/3DUI.2016.7460032
Mahdi Azmandian, Timofey Grechkin, M. Bolas, Evan A. Suma
Redirected walking techniques have been introduced to overcome physical space limitations for natural locomotion in virtual reality. These techniques decouple real and virtual user trajectories by subtly steering the user away from the boundaries of the physical space while maintaining the illusion that the user follows the intended virtual path. The effectiveness of redirection algorithms can improve significantly when a reliable prediction of the user's future virtual path is available. In current solutions, the future user trajectory is predicted based on non-standardized manual annotations of the environment structure, which is both tedious and inflexible. We propose a method for automatically generating environment annotation graphs and predicting the user trajectory using navigation meshes. We discuss the integration of this method with existing redirected walking algorithms such as FORCE and MPCRed. Automated annotation of the virtual environment's structure enables simplified deployment of these algorithms in any virtual environment.
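A sketch of the idea: treat each navigation-mesh polygon as a waypoint, connect polygons that share an edge, and predict the user's path as the shortest route through that graph. The data layout and the A* search below are illustrative assumptions, not the paper's implementation:

```python
import heapq
import math
from collections import defaultdict

def navmesh_to_graph(vertices, polygons):
    """Derive a waypoint graph from a navigation mesh: one node per
    polygon (its 2D centroid), one edge per shared polygon edge.
    A stand-in for the paper's automated annotation step."""
    centroids = [
        tuple(sum(vertices[i][k] for i in poly) / len(poly) for k in (0, 1))
        for poly in polygons
    ]
    edge_owners = defaultdict(list)          # mesh edge -> owning polygons
    for pi, poly in enumerate(polygons):
        for a, b in zip(poly, poly[1:] + poly[:1]):
            edge_owners[frozenset((a, b))].append(pi)
    graph = defaultdict(list)
    for owners in edge_owners.values():
        if len(owners) == 2:                 # shared edge => walkable link
            p, q = owners
            w = math.dist(centroids[p], centroids[q])
            graph[p].append((q, w))
            graph[q].append((p, w))
    return centroids, graph

def predict_path(centroids, graph, start, goal):
    """A* over the waypoint graph: a plausible predicted user trajectory."""
    open_set = [(math.dist(centroids[start], centroids[goal]), 0.0, start, [start])]
    best_cost = {}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if best_cost.get(node, float("inf")) <= g:
            continue
        best_cost[node] = g
        for nxt, w in graph[node]:
            h = math.dist(centroids[nxt], centroids[goal])
            heapq.heappush(open_set, (g + w + h, g + w, nxt, path + [nxt]))
    return None                              # goal unreachable from start
```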
Citations: 25
Floating charts: Data plotting using free-floating acoustically levitated representations
Pub Date : 2016-03-19 DOI: 10.1109/3DUI.2016.7460051
Themis Omirou, A. Pérez, S. Subramanian, A. Roudaut
Charts are graphical representations of numbers that help us extract trends and relations and, in general, gain a better understanding of data. For this reason, multiple systems have been developed to display charts in a digital or physical manner. Here, we introduce Floating Charts, a modular display that utilizes acoustic levitation for positioning free-floating objects. Multiple objects are individually levitated to compose a dynamic floating chart with the ability to move in real time to reflect changes in data. Floating objects can have different sizes and colours to represent extra information. Additionally, they can be levitated across other physical structures to improve depth perception. We present the system design, a technical evaluation and a catalogue of chart variations.
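The mapping from data to bead positions is simple to sketch: one levitated bead per value, with height proportional to the value. The spacing, height range, and non-negative-data assumption below are illustrative, not the system's actual device constraints:

```python
def chart_positions(values, x_spacing_mm=20.0, max_height_mm=80.0):
    """Map a non-negative data series to target (x, y, z) positions for
    individually levitated beads: one bead per value, z proportional to
    the value. Spacing and height range are illustrative assumptions."""
    vmax = max(values) or 1.0                # avoid division by zero
    return [
        (i * x_spacing_mm, 0.0, v / vmax * max_height_mm)
        for i, v in enumerate(values)
    ]

# Each frame, a trap controller would move beads toward these targets,
# animating the chart as the underlying data changes.
targets = chart_positions([3.0, 7.5, 5.2, 9.1])
```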
Citations: 43
In-situ flood visualisation using mobile AR
Pub Date : 2016-03-19 DOI: 10.1109/3DUI.2016.7460061
P. Haynes, Eckart Lange
We present a prototype augmented reality (AR) app for flood visualisation using techniques of in-situ geometry modeling and constructive solid geometry (CSG). Natural and augmented point correspondences are computed using a method of interactive triangulation. Prototype geometry is oriented to pairs of triangulated points to model buildings and other structures within the scene. A CSG difference operation between a plane and the geometry produces the virtual flood plane, which can be translated vertically. Registration and tracking are achieved using the Qualcomm Vuforia software development kit (SDK). Focus is given to the means by which the objective is achieved using readily available technology.
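The triangulation step can be pictured with standard two-ray geometry: given the viewing rays through a pair of corresponding points, the 3D point is estimated as the midpoint of the shortest segment between the rays. A minimal sketch under that assumption; it is not necessarily the paper's exact formulation:

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two viewing rays
    (origin o, unit direction d): a standard way to triangulate a 3D
    point from two camera observations. Returns None for near-parallel
    rays, where the estimate is unstable."""
    o1, d1, o2, d2 = (np.asarray(v, dtype=float) for v in (o1, d1, o2, d2))
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:
        return None
    s = (b * e - c * d) / denom    # parameter along ray 1
    t = (a * e - b * d) / denom    # parameter along ray 2
    return ((o1 + s * d1) + (o2 + t * d2)) / 2.0
```

With points triangulated this way, the flood surface is then just a horizontal plane differenced against the modeled geometry and raised or lowered to the chosen water level.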
Citations: 7
Collaborative hybrid virtual environment
Pub Date : 2016-03-19 DOI: 10.1109/3DUI.2016.7460081
Leonardo Pavanatto Soares, Thomas Volpato de Oliveira, Vicenzo Abichequer Sangalli, M. Pinho, Regis Kopper
Supposing that, in a system operated by two users in different positions, some operations are easier for one of them to perform, we developed a 3D user interface (3DUI) that allows two users to interact jointly with an object, using the three modification operations (scale, rotate, and translate) to reach a goal. The operations are performed using two augmented reality cubes, each tracked with up to 6 degrees of freedom, and every user can select an operation by cycling through them with a keyboard button. Two different points of view are assigned to the cubes: an exocentric view, where the user stands at a given distance from the object with a viewpoint similar to that of a human observer, and an egocentric view, where the user stands much closer and sees the scene from the object's perspective. Each point of view is locked to one user, meaning a user cannot use both views, only the one assigned to their ID. The cameras have a small margin of movement, allowing only a sideways tilt driven by the Oculus's head movements. With these features, this 3DUI aims to test which point of view is better for each operation and how the degrees of freedom should be divided between the users.
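The per-user interface state this implies is small: a locked viewpoint plus a cycling operation selector. A hypothetical sketch of that bookkeeping, not the paper's actual data model:

```python
from dataclasses import dataclass

OPERATIONS = ("scale", "rotate", "translate")

@dataclass
class UserState:
    """Per-user state: the assigned (locked) viewpoint and the currently
    selected operation, cycled with a keyboard button."""
    user_id: str
    viewpoint: str          # "exocentric" or "egocentric", fixed per user
    op_index: int = 0

    def cycle_operation(self):
        self.op_index = (self.op_index + 1) % len(OPERATIONS)

    @property
    def operation(self):
        return OPERATIONS[self.op_index]

# Example: viewpoints never swap, operations cycle independently.
alice = UserState("alice", "exocentric")
bob = UserState("bob", "egocentric")
alice.cycle_operation()     # alice now rotates; bob still scales
```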
Citations: 5
The benefits of rotational head tracking
Pub Date : 2016-03-19 DOI: 10.1109/3DUI.2016.7460028
Swaroop K. Pal, Marriam Khan, Ryan P. McMahan
There are three common types of head tracking provided by virtual reality (VR) systems based on their degrees of freedom (DOF): complete 6-DOF, rotational 3-DOF, and translational 3-DOF. Prior research has indicated that complete 6-DOF head tracking provides significantly better user performance than not having head tracking, but there is little to no research comparing the three common types of head tracking. In this paper, we present one of the first studies to investigate and compare the effects of complete head tracking, rotational head tracking, and translational head tracking. The results of this study indicate that translational head tracking was significantly worse than complete and rotational head tracking, in terms of task time, task errors, reported usability, and presence. Surprisingly, we did not find any significant differences between complete and rotational head tracking. We discuss potential reasons why, in addition to the implications of the results.
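The three conditions differ only in which components of the tracked 6-DOF pose reach the camera, which can be sketched as below; the fixed rest pose is an assumption for illustration, not a value from the study:

```python
import numpy as np

def applied_pose(tracked_rotation, tracked_position, mode):
    """Reduce a full 6-DOF tracker sample (3x3 rotation matrix plus
    position vector) to the pose a given tracking condition actually
    applies to the camera."""
    rest_position = np.array([0.0, 1.7, 0.0])   # assumed standing eye height (m)
    if mode == "complete":       # 6-DOF: rotation and translation
        return tracked_rotation, tracked_position
    if mode == "rotational":     # 3-DOF: rotation only, position frozen
        return tracked_rotation, rest_position
    if mode == "translational":  # 3-DOF: translation only, orientation frozen
        return np.eye(3), tracked_position
    raise ValueError(f"unknown tracking mode: {mode}")
```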
Citations: 6
Augmented virtuality in real time for pre-visualization in film
Pub Date : 2016-03-19 DOI: 10.1109/3DUI.2016.7460050
Alex Stamm, Patrick Teall, Guillermo Blanco Benedicto
This project looks into creating an augmented virtuality pre-visualization system to empower indie filmmakers during the on-set production process. Indie directors are currently unable to pre-visualize their virtual set without the funds to pay for a high-fidelity 3D visualization system. Our team has created a pre-visualization prototype that allows independent filmmakers to perform augmented virtuality by placing actors into a computer-generated 3D environment for the purposes of virtual production. After performing our preliminary usability research, we have determined a clear and effective 3D interface for film directors to use during the production process. The implications of this research set the groundwork for building a pre-visualization system for on-set production that satisfies independent and emerging filmmakers.
Citations: 4
CollaborativeConstraint: UI for collaborative 3D manipulation operations
Pub Date : 2016-03-19 DOI: 10.1109/3DUI.2016.7460076
Naëm Baron
Collaboration in virtual environments (VEs) is important as it offers a new perspective on interactions with and within these environments. We propose a 3D manipulation method designed for a multi-user scenario, taking advantage of the extended information available to all users. CollaborativeConstraint (ColCo) is a simple method to perform canonical 3D manipulation operations by means of a 3D user interface (UI). It is focused on collaborative tasks in virtual environments based on constraint definitions. Communication needs are reduced as much as possible by using easy-to-understand synchronization mechanisms and visual feedback. In this paper we present the ColCo concept in detail and demonstrate its application with a test setup.
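Constraint-based collaborative manipulation of this kind can be pictured as projecting one user's input onto a constraint another user has defined, e.g., a translation restricted to an agreed axis. The formulation below is a hypothetical illustration, not ColCo's actual constraint model:

```python
import numpy as np

def apply_axis_constraint(delta, axis):
    """Project a proposed translation onto a constraint axis defined by
    a collaborator, so the second user's manipulation can only move the
    object along the agreed direction."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    delta = np.asarray(delta, dtype=float)
    return (delta @ axis) * axis      # component of delta along the axis

# Example: user A constrains motion to the world x-axis; user B's
# free-hand drag is reduced to its x component.
print(apply_axis_constraint([0.3, 0.8, -0.1], [1.0, 0.0, 0.0]))
```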
Citations: 4