
Latest publications from the Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry

Sodeisha Sculptural Ceramics: Digitalization and VR Interaction
Zi Siang See, U. Rey, Faye Neilson, Michael Cuneo, Alexander Barnes-Keoghan, Luke O'Donnell, Donovan Jones, L. Goodman, Sarah Johnson
This demonstration presents the development of a virtual reality (VR) research project for the VR interaction and digitization of “Sodeisha Sculptural Ceramics”, a transmedia approach that showcases photogrammetry-scanned Japanese ceramic artworks in an educational and public VR exhibition setting. The early prototype involved the photogrammetry scanning of 10 sculptural ceramic works of art. These works were created by the innovative Japanese post-war artist group known as ‘Sodeisha’. Newcastle Art Gallery holds one of the largest collections of Sodeisha ceramics outside of Japan and recently featured the collection in a large-scale exhibition titled SODEISHA: connected to Australia from March to May 2019. The audience used controllers to interact with objects in a virtual environment, with the option of seeing a pair of VR hands or full VR arms.
DOI: 10.1145/3359997.3365741 (published 2019-11-14)
Citations: 0
Extended Reality for Midwifery Learning: MR VR Demonstration
Donovan Jones, Zi Siang See, M. Billinghurst, L. Goodman, Shanna Fealy
This demonstration presents the development of a Mixed Reality (MR) and Virtual Reality (VR) research project for midwifery student learning, and a novel approach to showing extended reality content in an educational setting. The Road to Birth (RTB) visualises, in a detailed 3D setting, the changes that occur in the female body during pregnancy and in the five days immediately after birth (postpartum). In the Base Anatomy studio, users can observe the base anatomical layers of an adult female. In Pregnancy Timeline, they can scroll through the weeks of gestation to see the development of the baby and the anatomical changes of the mother throughout pregnancy and postpartum. Finally, in Birth Considerations, users can learn about the different birthing positions that may present. During the demo, users can experience the system in either MR or VR.
DOI: 10.1145/3359997.3365739 (published 2019-11-14)
Citations: 9
Motion Volume: Visualization of Human Motion Manifolds
Masaki Oshita
The understanding of human motion is important in many areas such as sports, dance, and animation. In this paper, we propose a method for visualizing the manifold of human motions. A motion manifold is defined by a set of motions in a specific motion form. Our method visualizes the ranges of time-varying positions and orientations of a body part by generating volumetric shapes for representing them. It selects representative keyposes from the keyposes of all input motions to visualize the range of keyposes at each key timing. A geometrical volume that contains the trajectories from all input motions is generated for each body part. In addition, a geometrical volume that contains the orientations from all input motions is generated for a sample point on the trajectory. The user can understand the motion manifold by visualizing these motion volumes. In this paper, we present some experimental examples for a tennis shot form.
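As a loose illustration of the idea (not the paper's algorithm, which builds geometrical volumes around representative keyposes), the following sketch computes a per-time-step axis-aligned box containing a body part's position across several example motions:

```python
# Minimal sketch: given several example trajectories of one body part,
# compute an axis-aligned "motion volume" per time step -- the box that
# contains that part's position across all input motions at that instant.

def motion_volume(trajectories):
    """trajectories: list of motions, each a list of (x, y, z) samples
    of equal length. Returns one (min_corner, max_corner) box per step."""
    boxes = []
    for step in zip(*trajectories):            # positions at one instant
        xs, ys, zs = zip(*step)
        boxes.append(((min(xs), min(ys), min(zs)),
                      (max(xs), max(ys), max(zs))))
    return boxes

if __name__ == "__main__":
    swing_a = [(0.0, 1.0, 0.0), (0.5, 1.2, 0.1)]
    swing_b = [(0.1, 0.9, 0.0), (0.4, 1.3, 0.2)]
    for lo, hi in motion_volume([swing_a, swing_b]):
        print(lo, hi)
```

A real system would replace the boxes with tighter geometry (e.g. hulls of the trajectories), but the per-step aggregation over all input motions is the same.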
DOI: 10.1145/3359997.3365684 (published 2019-11-14)
Citations: 3
Creation and Live Performance of Dance and Music Based on a Body-part Motion Synthesis System
A. Soga
We developed a Body-part Motion Synthesis System (BMSS), which allows users to create choreography by synthesizing body-part motions and to simulate them in 3D animation. To explore the possibilities of using BMSS for creative activities, two dances with different concepts were created and performed by a dancer and a musician. We confirmed that BMSS may be able to generate effective choreographic motions for dance and to support their creation easily and quickly. Moreover, creation using BMSS might fuel new collaboration or interaction between dancers and musicians.
DOI: 10.1145/3359997.3365749 (published 2019-11-14)
Citations: 1
Visualizing and Interacting with Hierarchical Menus in Immersive Augmented Reality
Majid Pourmemar, Charalambos (Charis) Poullis
Graphical User Interfaces (GUIs) have long been used as a way to inform the user of the large number of available actions and options. GUIs in desktop applications traditionally appear in the form of two-dimensional hierarchical menus due to the limited screen real estate, the spatial restrictions imposed by the hardware (e.g. 2D displays), and the available input modalities (e.g. mouse/keyboard point-and-click, touch, dwell time). In immersive Augmented Reality (AR), there are no such restrictions and the available input modalities are different (i.e. hand gestures, head pointing, or voice recognition), yet the majority of applications in AR still use the same type of GUIs as desktop applications. In this paper we focus on identifying the most efficient combination of hierarchical menu type and input modality to use in immersive applications on AR headsets. We report on the results of a within-subjects study in which 25 participants performed a number of tasks using four combinations of the most popular hierarchical menu types with the most popular input modalities in AR, namely: (drop-down menu, hand gestures), (drop-down menu, voice), (radial menu, hand gestures), and (radial menu, head pointing). Results show that the majority of the participants (60%, 15 of 25) achieved faster performance using the hierarchical radial menu with head-pointing control. Furthermore, the participants clearly indicated the radial menu with head-pointing control as the most preferred interaction technique due to its limited physical demand, as opposed to the current de facto interaction technique in AR, i.e. hand gestures, which after prolonged use become physically demanding, leading to the arm fatigue known as ‘Gorilla arms’.
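For background on the winning combination, the snippet below sketches how a head-pointing direction can be snapped to an item of a radial menu; the sector layout and function are illustrative assumptions, not the study's implementation:

```python
import math

# Hedged illustration: selecting an item in a radial menu from a
# head-pointing direction. The 2D pointing vector is converted to an
# angle and snapped to one of n equal sectors, item 0 centred on "up".

def radial_menu_item(dx, dy, n_items):
    """Return the index of the sector that a pointing vector (dx, dy)
    falls into; 0 degrees is up, angles increase clockwise."""
    angle = math.degrees(math.atan2(dx, dy)) % 360.0
    sector = 360.0 / n_items
    return int(((angle + sector / 2) % 360.0) // sector)

if __name__ == "__main__":
    print(radial_menu_item(0.0, 1.0, 4))   # straight up  -> item 0
    print(radial_menu_item(1.0, 0.0, 4))   # to the right -> item 1
```

Snapping to sectors rather than requiring a precise hit is one reason head pointing can stay low-effort compared with mid-air hand gestures.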
DOI: 10.1145/3359997.3365693 (published 2019-11-14)
Citations: 9
A Bowl-Shaped Display for Controlling Remote Vehicles
Shio Miyafuji, Florian Perteneder, Toshiki Sato, H. Koike, G. Klinker
This paper proposes a bowl-shaped hemispherical display to observe omnidirectional images. This display type has many advantages over conventional, flat 2D displays, in particular when it is used for controlling remote vehicles. First, it allows users to observe an azimuthal equidistant view of omnidirectional images by looking from above. Second, it provides a first-person view by looking into the inside of the hemispherical surface from diagonally above. Third, it provides a pseudo–third-person view as if we watched the remote vehicle from its back, by observing both the inside and outside at the same time from obliquely above. These characteristics solve the issues of blind angles around the remote vehicle. We conduct a VR-based user study to compare the bowl-shaped display to an equirectangular projection on a 2D display and a first-person view used in head-mounted displays. Based on the insights gained in the study, we present a real-world implementation and describe the uniqueness, advantages but also shortcomings of our method.
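The "view from above" described here is essentially an azimuthal equidistant projection. The following sketch (an illustration under simple assumptions, not the authors' code) maps a pixel of such an image back to the spherical direction it samples:

```python
import math

# Minimal sketch of an azimuthal equidistant view of an omnidirectional
# image: the distance of a pixel from the image centre is proportional
# to the polar angle of the sampled direction.

def azimuthal_equidistant_to_sphere(u, v, size, max_polar=math.pi):
    """(u, v): pixel in a size x size image. Returns (azimuth, polar)
    in radians, or None outside the projection circle."""
    cx = cy = size / 2.0
    dx, dy = u - cx, v - cy
    r = math.hypot(dx, dy)
    if r > cx:
        return None                      # outside the circular image
    polar = (r / cx) * max_polar         # radius is proportional to angle
    azimuth = math.atan2(dy, dx)
    return azimuth, polar

if __name__ == "__main__":
    print(azimuthal_equidistant_to_sphere(256, 256, 512))  # centre: polar 0
```

With `max_polar = math.pi` the full sphere is visible at once, which is what removes the blind angles around the vehicle.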
DOI: 10.1145/3359997.3365706 (published 2019-11-14)
Citations: 3
Multi-Modal High-End Visualization System
Conan Bourke, T. Bednarz
This paper describes a production-grade software toolkit used for shared multi-modal visualization systems developed by the Expanded Perception and Interaction Centre. Our High-End Visualization System (HEVS) can be used as a framework to enable content to run transparently on a wider range of platforms (Figure 2) with fewer compatibility issues and dependencies on commercial software. Content can be transferred more easily from large screens (including cluster-driven systems) such as CAVE-like platforms, hemispherical domes, and projected cylindrical displays through to multi-wall displays and HMDs for VR or AR. This common framework is able to provide a unifying approach to visual analytics and visualizations. In addition to supporting multi-modal displays, multiple platforms can be connected to create multi-user collaborative experiences across remotely located labs. We aim to demonstrate multiple projects developed with HEVS that have been deployed to various multi-modal display devices.
DOI: 10.1145/3359997.3365731 (published 2019-11-14)
Citations: 9
3D Human Avatar Digitization from a Single Image
Zhong Li, Lele Chen, Celong Liu, Yu Gao, Yuanzhou Ha, Chenliang Xu, Shuxue Quan, Yi Xu
With the development of AR/VR technologies, a reliable and straightforward way to digitize the three-dimensional human body is in high demand. Most existing methods use complex equipment and sophisticated algorithms, which is impractical for everyday users. In this paper, we propose a pipeline that reconstructs a 3D human avatar at a glance. Our approach simultaneously reconstructs the three-dimensional human geometry and the whole-body texture map with only a single RGB image as input. We first segment the human body from the image and then obtain an initial body geometry by fitting the segment to a parametric model. Next, we warp the initial geometry to the final shape by applying a silhouette-based dense correspondence. Finally, to infer the invisible backside texture from a frontal image, we propose a network we call InferGAN. Comprehensive experiments demonstrate that our solution is robust and effective on both public and our own captured data. Our human avatars can be easily rigged and animated using MoCap data. We developed a mobile application that demonstrates this capability in AR/VR settings.
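As a toy stand-in for the silhouette-based correspondence step (greatly simplified relative to the paper's dense version), the sketch below matches each point of an initial body contour to its nearest point on the target silhouette, giving the offsets that pull the initial geometry toward the final shape:

```python
import math

# Toy sketch only: nearest-point matching between two 2D silhouette
# contours. A dense method would enforce ordering and smoothness; here
# each initial point is simply pulled toward its closest target point.

def silhouette_correspondence(initial, target):
    """initial, target: lists of (x, y) contour points. Returns one
    (dx, dy) offset per initial point, toward its nearest target point."""
    offsets = []
    for (x, y) in initial:
        nx, ny = min(target, key=lambda p: math.hypot(p[0] - x, p[1] - y))
        offsets.append((nx - x, ny - y))
    return offsets

if __name__ == "__main__":
    init = [(0.0, 0.0), (1.0, 0.0)]
    tgt = [(0.0, 0.5), (1.2, 0.1)]
    print(silhouette_correspondence(init, tgt))
```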
DOI: 10.1145/3359997.3365707 (published 2019-11-14)
Citations: 10
From Lab to Field: Demonstrating Mixed Reality Prototypes for Augmented Sports Experiences
Wei Hong Lo, S. Zollmann, H. Regenbrecht, Moritz Loos
Traditional sports-event data have no direct spatial relationship to what spectators see when attending a live sports event. The idea of our work is to address this gap and ultimately to provide spectators with insights into a sports game by embedding sports statistics into their field of view using mobile Augmented Reality. Research on live sports events comes with several challenges, such as tracking and visualisation, as well as the limited opportunities to test and study new features during live games on-site. In this work, we developed a set of prototypes that allow researching dedicated features for an AR sports spectator experience off-site in the lab before testing them live on the field.
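The core overlay step of such a prototype can be sketched as a pinhole projection of a tracked world point into the spectator's camera image; the simple model and names below are assumptions for illustration, not the authors' system:

```python
# Hedged sketch: project a point in camera coordinates (e.g. a tracked
# player position) to a pixel, so a statistic can be drawn at that spot.

def project_to_screen(point_cam, fx, fy, cx, cy):
    """point_cam: (x, y, z) in camera coordinates, z pointing forward.
    fx, fy: focal lengths in pixels; (cx, cy): principal point.
    Returns the pixel (u, v), or None if the point is behind the camera."""
    x, y, z = point_cam
    if z <= 0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)

if __name__ == "__main__":
    # A point 10 m ahead and 2 m to the right, with an 800 px focal
    # length on a 1280x720 image:
    print(project_to_screen((2.0, 0.0, 10.0), 800, 800, 640, 360))
```

In a live setting the hard part is obtaining `point_cam` at all, i.e. tracking the spectator's camera pose in the stadium, which is one of the challenges the abstract mentions.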
DOI: 10.1145/3359997.3365728 (published 2019-11-14)
Citations: 3
Embodied Weather: Promoting Public Understanding of Extreme Weather Through Immersive Multi-Sensory Virtual Reality
Pingchuan Ke, Kai-Ning Keng, Shanshan Jiang, Shaoyu Cai, Zhiyi Rong, Kening Zhu
ACM Reference Format: Pingchuan Ke, Kai-Ning Keng, Shanshan Jiang, Shaoyu Cai, Zhiyi Rong, and Kening Zhu. 2019. Embodied Weather: Promoting Public Understanding of Extreme Weather Through Immersive Multi-Sensory Virtual Reality. In The 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI ’19), November 14–16, 2019, Brisbane, QLD, Australia. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3359997.3365718
Cited by: 8
Journal
Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry