AR costumes: automatically augmenting watertight costumes from a single RGB image
Christoph Maurhofer, Gökçen Çimen, Mattia Ryffel, R. Sumner, M. Guay
Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '18), December 2018. DOI: https://doi.org/10.1145/3284398.3284402

We describe a method to automatically augment a watertight digital costume onto a person's body from a monocular RGB image. When a digital costume is overlaid onto a body using pose matching, several parts of the person's clothing or skin remain visible due to differences in shape and proportions. In this paper, we present a practical solution to these artifacts that requires minimal costume parameterization work and a straightforward inpainting approach. To our knowledge, our approach is the first to deliver plausible watertight costumes from RGB imagery alone, and it is compatible with mobile devices. We show successful results at high resolution in our video and on the project web page, as well as failure cases in the discussion section of the paper, which can serve as a point of comparison for future improvements.
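The abstract describes the inpainting step only as "straightforward". As a purely hypothetical sketch (not the authors' implementation), covering leftover clothing or skin pixels can be illustrated as a nearest-valid-pixel fill over a masked region on a toy grid:

```python
from collections import deque

def fill_mask(grid, mask):
    """Fill masked cells with the value of the nearest unmasked cell (BFS).

    grid: 2D list of pixel values; mask: 2D list of bools, True marking a
    leftover pixel to be covered. A toy stand-in for an inpainting step,
    not the method from the paper.
    """
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    seen = [[not mask[y][x] for x in range(w)] for y in range(h)]
    # Seed the search with every known (unmasked) pixel.
    q = deque((y, x) for y in range(h) for x in range(w) if not mask[y][x])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx]:
                seen[ny][nx] = True
                out[ny][nx] = out[y][x]  # propagate nearest known value
                q.append((ny, nx))
    return out
```

A production system would work on full-resolution images with a smoother fill, but the breadth-first propagation shows the core idea: every artifact pixel inherits a plausible value from its surroundings.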
System for preventing loss of replay desire in games by guiding the user to voluntary leaving from and returning to the game sessions
Shota Homma, Masaki Abe, Taichi Watanabe, Koji Mikami
DOI: https://doi.org/10.1145/3284398.3284418

We conducted research on the time-loss experience during game play. Based on this study, we propose a system design that prevents the loss of replay desire in games by guiding players to leave voluntarily. Experiments demonstrated the effectiveness of the proposed system. As a first step, we conducted a preliminary experiment to determine whether increasing cognitive workload can guide voluntary leaving. The results indicated that adjusting the difficulty level through modifications to the appearance of game objects changes the cognitive workload.
Two-way cooperation of architectural 3D CAD and game engine
Keita Kado, Gakuhito Hirasawa
DOI: https://doi.org/10.1145/3284398.3284420

Building information modeling (BIM) is the concept of building a three-dimensional model (3D model) as a virtual building and using this virtual building to perform various simulations and evaluations. 3D visualization using virtual reality (VR) is a typical use of BIM. A game engine, that is, a development environment for video games, has been used effectively as a development environment for architectural VR applications, since the many functions a game engine provides can be exploited directly. However, the intermediate-file-based data coordination that is conventionally used burdens the cyclic process of design and visualization with problematic tasks. This paper describes a two-way cooperation that interconnects architectural 3D CAD and a VR application developed in a game engine, with special emphasis on 3D model management. In the two-way cooperation system, 3D models in the architectural 3D CAD are shared with the VR application through a relay database. Both mesh-based and parametric 3D model representations are employed to manage the models. In addition, two extensions, a delta update function and an editing function, make the cooperation smooth and bidirectional. An immersive VR application designed for decision making with stakeholders such as architects and clients is demonstrated in this study.
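The delta update function is described only at a high level. One plausible shape for it, sketched here with assumed names and data layout rather than the authors' actual relay-database schema, is a diff of per-model revision numbers between the CAD side and the VR side:

```python
def delta_update(cad_models, vr_models):
    """Compute the minimal change set to bring the VR side in sync.

    Both arguments map a model id to a revision number, standing in for
    entries in the relay database. Returns the ids the VR application
    must add, update, or remove, rather than re-importing everything.
    """
    added = sorted(set(cad_models) - set(vr_models))
    removed = sorted(set(vr_models) - set(cad_models))
    updated = sorted(m for m in set(cad_models) & set(vr_models)
                     if cad_models[m] != vr_models[m])
    return {"add": added, "update": updated, "remove": removed}
```

For example, if the CAD side holds `{"wall": 2, "door": 1}` and the VR side holds `{"wall": 1, "roof": 1}`, only the door is added, the wall re-synced, and the roof removed — which is what makes the cyclic design/visualization process cheap compared with full intermediate-file exchange.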
Weakly supervised 6D pose estimation for robotic grasping
Yaoxin Li, Jin Sun, Xiaoqian Li, Zhanpeng Zhang, Hui Cheng, Xiaogang Wang
DOI: https://doi.org/10.1145/3284398.3284408

Learning-based robotic grasping methods have achieved substantial progress with the development of deep neural networks. However, the requirement for large-scale real-world training data limits the application scope of these methods. Given the 3D models of the target objects, we propose a new learning-based grasping approach built on 6D object pose estimation from a monocular RGB image. We leverage both a large-scale synthesized 6D object pose dataset and a small-scale real-world dataset with weak labels (e.g., the number of objects in an image) to reduce the difficulty of system deployment. In particular, the deep network combines the 6D pose estimation task with an auxiliary weak-label task to transfer knowledge between the synthesized and real-world data. We demonstrate the effectiveness of the method in a real robotic environment and show a substantial improvement in the grasping success rate (about 11.9% on average) attributable to the proposed knowledge transfer scheme.
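Combining a pose loss on synthesized data with a weak counting loss on real data is, at its simplest, a weighted multi-task objective. The toy formulation below is a hedged illustration only; the paper's actual loss terms, network, and weighting are not given in the abstract:

```python
def multitask_loss(pose_pred, pose_gt, count_pred, count_gt, w_weak=0.1):
    """Toy multi-task objective for weakly supervised training.

    Squared pose error on a synthesized sample plus a weighted squared
    error on the weak object-count label from a real image. The shared
    gradient through both terms is what lets the weak labels transfer
    knowledge to the pose branch.
    """
    pose_loss = sum((p - g) ** 2 for p, g in zip(pose_pred, pose_gt))
    count_loss = (count_pred - count_gt) ** 2
    return pose_loss + w_weak * count_loss
```

In a real pipeline both terms would be batched tensor losses optimized jointly; the scalar version just makes the weighting between strong and weak supervision explicit.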
Correlating gamers' brainwaves to their subjective feelings in virtual reality games under different viewing perspectives
D. Monteiro, Hai-Ning Liang, N. Baghaei
DOI: https://doi.org/10.1145/3284398.3284430

Consumer virtual reality (VR) is a relatively new mass technology that needs to be explored further so we can understand how it affects players from a psychophysiological perspective and how it differs from a common display during gameplay. One aspect of VR gameplay that is particularly underexplored with respect to psychophysiological data is viewing perspective. This research aims to assess and analyse how players' brainwaves correlate to their subjective feelings when playing in first-person and third-person viewing perspectives in VR. We report the findings of an experiment in which participants played a game in VR under first- and third-person perspectives while their EEG data were recorded. Our findings indicate that some waves correlate to subjective feelings regardless of viewing perspective, whereas others (e.g., Alpha) depend more on which perspective the gamers are playing in. The results of this research are applicable to the design of VR games, especially those aimed at improving players' health.
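Once per-session band power (e.g., Alpha) and subjective ratings are in hand, "correlating brainwaves to feelings" reduces, in its simplest form, to a Pearson correlation across sessions. This stdlib sketch assumes that reduction; the study's actual EEG processing pipeline is not detailed in the abstract:

```python
import math

def pearson(xs, ys):
    """Pearson correlation, e.g., between per-session EEG band power
    (xs) and subjective feeling ratings (ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near +1 or -1 in one viewing perspective but near 0 in the other is exactly the perspective-dependent pattern the paper reports for some bands.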
Conceptual framework for the usability evaluation of gamified virtual reality environment for non-gamers
S. F. M. Zaidi, Craig Duthie, E. Carr, Samah Hassan Abd El Maksoud
DOI: https://doi.org/10.1145/3284398.3284431

Loneliness has been shown to have a negative effect on both mental and physical health. Regular video game players keep themselves busy with the games they love playing, whereas non-gamers are mostly involved in other physical and social activities that sustain a positive impact on their lives. When people can no longer keep up with these activities, loneliness starts to creep into their lives, especially as they get older. Virtual reality (VR) provides a more accessible option for people to engage in physical and social activities from the comfort of their homes as they become immersed in a 3D simulation. This paper proposes a usability framework for casual VR game development for non-gamers that helps developers settle the important game elements in the pre-production phase, avoiding major changes during production. Moreover, when the prototype was tested for user experience, the results were positive, suggesting that the proposed framework was successful in producing an immersive VR game prototype for our target users. The prototype helped alleviate feelings of loneliness and elevate positive mood.
Public/private interactive wearable projection display
Yuta Sugiura, T. Chong, Wataru Kawai, B. Thomas
DOI: https://doi.org/10.1145/3284398.3284415

We propose a wearable projection system that displays both public and private information concurrently. The system is composed of a two-layer forearm-mounted screen and a head-worn device that projects images onto the screen. The screen can be switched between two modes: a light-diffusing mode, in which the projected image is visible to surrounding viewers, and a light-transmitting mode, in which the projected image is visible only to the user. We conducted a set of design workshops to identify the optimal placement of and interactions with this display system. We also implemented three display methods for presenting public or private information, contextualized with several use-case applications and supported by four sets of gestures.
Multi-reality games: an experience across the entire reality-virtuality continuum
Llogari Casas, Loïc Ciccone, Gökçen Çimen, P. Wiedemann, M. Fauconneau, R. Sumner, Kenny Mitchell
DOI: https://doi.org/10.1145/3284398.3284411

Interactive play can take very different forms, from playing with physical board games to fully digital video games. In recent years, new video game paradigms have been introduced to connect real-world objects to virtual game characters. However, even these applications focus on a specific section of the Reality-Virtuality Continuum, where the visual embodiment of characters is either a largely static toy in the real world or pre-animated within the virtual world according to a predetermined set of motions. We introduce a novel concept, called Multi-Reality Games, that encompasses interactions with real and virtual objects to span the entire spectrum of the Reality-Virtuality Continuum, from the real world to the digital one and back. Our work on real-virtual game interaction takes an evolutionary step toward the convergence of real and virtual game characters. Rather than static toys or pre-built, unconfigurable virtual counterparts, we bring together technologies from across the Reality-Virtuality Continuum to target new game experiences. We showcase our framework with a game application on a mobile device. Without the need to change location or set, we enable intuitive and seamless interactions between physical, augmented, and virtual elements. The experience brings both worlds closer and enables the user to customize the virtual scenario according to physical references.
Hole-filling method for reconstructing separated arm models of the Mongolian Buddha statue for analyzing body shape of statues
Amartuvshin Renchin-Ochir, Enkhbayar Altantsetseg, K. Konno
DOI: https://doi.org/10.1145/3284398.3284405

Extracting reconstructed decoration and body parts from a collection of Buddha statues contributes to cultural and archaeological analyses. To fully analyze the similarity between different statues, reconstructed body parts without decoration are required. This paper proposes a hole-filling method for reconstructing solid models of a statue's arm without the decoration on its surface. In our approach, decoration and body parts are separated according to segmented decoration points. Once the decoration parts are separated from the original model, complex holes and gaps remain in the body part. Thus, this paper introduces a hole-filling method to construct a complete surface model of the arm. We implemented the method using a general B-spline surface-fitting technique based on information from divided rectangular surfaces. Once the body part is completed using the hole-filling method, a solid model is reconstructed. The proposed method was tested on arm parts of Mongolian Buddha statues, and the precision of the hole filling was confirmed. The solid model of an arm part was reconstructed from point clouds of the completed body model.
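The abstract mentions fitting B-spline surfaces over divided rectangular regions. The basic building block of any such fit is B-spline evaluation; for one uniform cubic segment it looks like the sketch below (an illustrative textbook formula, not the authors' fitting code):

```python
def cubic_bspline_point(p0, p1, p2, p3, t):
    """Evaluate one uniform cubic B-spline segment at t in [0, 1].

    p0..p3 are consecutive control values (one coordinate each). A
    surface patch, as used in hole filling, evaluates this same basis
    in two parameter directions over a grid of control points.
    """
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0
    b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0
    b3 = t ** 3 / 6.0
    return b0 * p0 + b1 * p1 + b2 * p2 + b3 * p3
```

Surface fitting then amounts to solving a least-squares system for the control grid so that the evaluated patch passes near the boundary points of each hole; the evaluation above is the forward model in that system.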
TutAR
Daniel Eckhoff, C. Sandor, Christian Lins, U. Eck, Denis Kalkofen, Andreas Hein
DOI: https://doi.org/10.1145/3284398.3284399

With Augmented Reality (AR) on Optical See-Through Head-Mounted Displays (OST-HMDs), users can observe the real world and computer graphics at the same time. In this work, we present the design and implementation of TutAR, a pipeline that semi-automatically creates AR tutorials from 2D RGB videos of hands-only procedures such as cardiopulmonary resuscitation (CPR). TutAR extracts the relevant 3D hand motion from the input video. The derived hand motion is displayed as an animated 3D hand positioned relative to the human body and played on the OST-HMD synchronously with the motion in the video.
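Playing the extracted hand motion in sync with the video amounts to sampling keyframed joint positions at the video's current timestamp. A minimal linear-interpolation sampler can illustrate this (hypothetical names and data layout, not the TutAR code):

```python
def sample_motion(keyframes, t):
    """Linearly interpolate a keyframed 3D position at video time t.

    keyframes: list of (time, (x, y, z)) sorted by time. Clamps outside
    the keyframed range, so the animated hand stays in step with the
    video clock even when playback pauses at either end.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, p0), (t1, p1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))
```

Driving this sampler from the video player's timestamp each frame is what keeps the 3D hand overlay synchronous with the 2D tutorial footage.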