Spherical full-parallax light-field display using ball of fly-eye mirror
H. Yano, T. Yendo

We present an optical system design for a 3D display that is spherical, full-parallax, and occlusion-capable, with a wide viewing zone and no head tracking. The proposed system offers a new approach to 3D display and thereby addresses limitations of the conventional light-field display structure; in particular, a spherical full-parallax light-field display is difficult to achieve because the conventional light-field structure is hard to curve. The key elements of the system are a specially designed ball mirror and a high-speed projector. The ball mirror rotates about a single axis and reflects rays from the projector to various angles, while the projector controls the intensities of these rays. Rays from a virtual object inside the ball mirror are thereby reconstructed, and the system acts as a light-field display based on time-division multiplexing. We fabricated the ball mirror by 3D printing and metal plating. The prototype successfully displays a 3D image, confirming the feasibility of the system. Our system is thus suitable for displaying 3D images to many viewers simultaneously and can be effectively employed in art or advertising installations.
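As a rough illustration of the time-division multiplexing principle described above, the sketch below computes, for one rotation angle of the ball mirror, the intensity each projector ray should carry. The facet normals, the downward projector direction, and the `light_field` sampling function are hypothetical stand-ins for illustration, not the authors' actual geometry or calibration.

```python
import numpy as np

def reflect(d, n):
    """Reflect direction d about unit facet normal n."""
    return d - 2.0 * np.dot(d, n) * n

def frame_for_angle(theta, facet_normals, light_field):
    """Projector intensities for one rotation angle theta of the ball mirror.

    facet_normals: one unit normal per projector pixel (mirror rest frame)
    light_field:   function direction -> radiance of the virtual object
    """
    c, s = np.cos(theta), np.sin(theta)
    spin = np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])      # uniaxial rotation of the mirror
    proj_dir = np.array([0.0, 0.0, -1.0])   # assumed: projector shines straight down
    outgoing = [reflect(proj_dir, spin @ n) for n in facet_normals]
    # Each outgoing direction reproduces one ray of the virtual object's
    # light field; the projector sets that ray's intensity for this time slot.
    return np.array([light_field(d) for d in outgoing])
```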
{"title":"Spherical full-parallax light-field display using ball of fly-eye mirror","authors":"H. Yano, T. Yendo","doi":"10.1145/3214907.3214917","DOIUrl":"https://doi.org/10.1145/3214907.3214917","url":null,"abstract":"We present an optical system design for a 3D display that is spherical, full-parallax, and occlusion-capable with a wide viewing zone and no head tracking. The proposed system provides a new approach for the 3D display and thereby addresses limitations of the conventional light-field display structure. Specifically, a spherical full-parallax light-field display is difficult to achieve because it is challenging to curve the conventional structure of the light-field displays. The key elements of the system are a specially designed ball mirror and a high-speed projector. The ball mirror uniaxially rotates and reflects rays from the projector to various angles. The intensities of these rays are controlled by the projector. Rays from a virtual object inside the ball mirror are reconstructed, and the system acts as a light-field display based on the time-division multiplexing method. We implemented this ball mirror by 3D printing and metal plating. The prototype successfully displays a 3D image and the system feasibility is confirmed. Our system is thus suitable for displaying 3D images to many viewers simultaneously and it can be effectively employed as in art or advertisement installation.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121372725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nitish Padmanaban, Robert Konrad, Gordon Wetzstein
Presbyopia, the loss of accommodation due to the stiffening of the crystalline lens, affects nearly 20% of the population worldwide. Traditional forms of presbyopia correction use fixed focal elements that inherently trade off field of view or stereo vision for a greater range of distances at which the wearer can see clearly. However, none of these offer the natural refocusing enjoyed in youth. In this work, we built a new presbyopia correction, dubbed Autofocals, which externally mimics the natural accommodation response by combining data from eye trackers and a depth sensor and then automatically driving focus-tunable lenses. In our testing, wearers generally reported that Autofocals compare favorably with their own current corrective eyewear.
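A minimal sketch of the control loop implied here: fuse the gaze-derived fixation distance with the depth sensor, convert it to an accommodation demand in diopters, and delegate to the tunable lens whatever the wearer's residual accommodation cannot cover. The 1/distance arithmetic is standard optics, but the parameter names and lens interface below are illustrative assumptions, not the authors' API.

```python
def lens_power_diopters(fixation_depth_m, wearer_accommodation_d=0.5, lens_max_d=3.0):
    """Focus-tunable lens setting for a given fixation distance.

    The accommodation demand at distance d meters is 1/d diopters; the lens
    supplies whatever the presbyopic wearer's residual accommodation cannot,
    clamped to the lens's range. All parameter values are illustrative.
    """
    demand = 1.0 / max(fixation_depth_m, 0.1)           # clamp at 10 cm
    assist = max(0.0, demand - wearer_accommodation_d)
    return min(assist, lens_max_d)

# Example: reading at 40 cm demands 2.5 D; with 0.5 D of residual
# accommodation, the lens supplies the remaining 2.0 D.
assert abs(lens_power_diopters(0.4) - 2.0) < 1e-9
```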
{"title":"Autofocals: gaze-contingent eyeglasses for presbyopes","authors":"Nitish Padmanaban, Robert Konrad, Gordon Wetzstein","doi":"10.1145/3214907.3214918","DOIUrl":"https://doi.org/10.1145/3214907.3214918","url":null,"abstract":"Presbyopia, the loss of accommodation due to the stiffening of the crystalline lens, affects nearly 20% of the population worldwide. Traditional forms of presbyopia correction use fixed focal elements that inherently trade off field of view or stereo vision for a greater range of distances at which the wearer can see clearly. However, none of these offer the same natural refocusing enjoyed in youth. In this work, we built a new presbyopia correction, dubbed Autofocals, which externally mimics the natural accommodation response by combining data from eye trackers and a depth sensor, and then automatically drives focus-tunable lenses. In our testing, wearers generally reported that the Autofocals compare favorably with their own current corrective eyewear.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126492690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Seungwoo Je, Hyelip Lee, Myung Jin Kim, Andrea Bianchi
Ungrounded haptic force-feedback is a crucial element for applications that aim to immerse users in virtual environments where mobility is also an important component of the experience, such as virtual reality games. In this paper, we present a novel wearable interface that generates force-feedback by spinning two drone propellers mounted on the wrist. The device is interfaced with a game running in Unity, and it can render different haptic stimuli mapped to four weapons. A simple evaluation with users demonstrates the feasibility of the proposed approach.
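The weapon-to-haptics mapping could look like the following sketch. The profile values and the `set_thrust` callback (for example, writing PWM duty cycles to the motor controllers) are invented for illustration and are not the authors' implementation.

```python
import time

# Hypothetical haptic profiles: lists of (duration_s, left_thrust, right_thrust),
# with thrust normalized to [0, 1] for the two wrist-mounted propellers.
WEAPON_PROFILES = {
    "pistol":  [(0.05, 1.0, 1.0)],                        # short symmetric kick
    "shotgun": [(0.15, 1.0, 1.0), (0.10, 0.3, 0.3)],      # heavy kick, then decay
    "whip":    [(0.08, 1.0, 0.0), (0.08, 0.0, 1.0)],      # lateral sweep
    "minigun": [(0.03, 0.6, 0.6), (0.03, 0.0, 0.0)] * 8,  # sustained rumble
}

def play_profile(weapon, set_thrust):
    """Replay one weapon's force profile; set_thrust(l, r) drives the motors."""
    for duration, left, right in WEAPON_PROFILES[weapon]:
        set_thrust(left, right)
        time.sleep(duration)
    set_thrust(0.0, 0.0)   # always spin down afterwards
```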
{"title":"Wind-blaster: a wearable propeller-based prototype that provides ungrounded force-feedback","authors":"Seungwoo Je, Hyelip Lee, Myung Jin Kim, Andrea Bianchi","doi":"10.1145/3214907.3214915","DOIUrl":"https://doi.org/10.1145/3214907.3214915","url":null,"abstract":"Ungrounded haptic force-feedback is a crucial element for applications that aim to immerse users in virtual environments where also mobility is an important component of the experience, like for example Virtual Reality games. In this paper, we present a novel wearable interface that generates a force-feedback by spinning two drone-propellers mounted on a wrist. The device is interfaced with a game running in Unity, and it is capable to render different haptic stimuli mapped to four weapons. A simple evaluation with users demonstrates the feasibility of the proposed approach.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126356065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
K. Aoyama, K. Sakurai, Akinobu Morishima, T. Maeda, H. Ando
Galvanic tongue stimulation (GTS) is a technology for changing and inducing taste sensations with electrical stimulation. Previous studies have shown that cathodal current stimulation induces two types of effects. The first is taste suppression, which weakens the taste induced by electrolytic materials during the stimulation. The second is taste enhancement, which strengthens taste shortly after the stimulation ends. These effects offer a promising route to taste emulation, in which the strength of a taste sensation can be controlled freely. Taste emulation has been considered for various applications, such as virtual reality and dieting. However, conventional GTS has several problems: the duration of taste enhancement is too short for use in dieting, electrodes must be attached inside the mouth, and taste can be induced only in the mouth, not in the throat. This study and our associated demonstration introduce approaches that address these problems, allowing taste to be changed on demand and the effects to persist for long periods of time.
{"title":"Taste controller: galvanic chin stimulation enhances, inhibits, and creates tastes","authors":"K. Aoyama, K. Sakurai, Akinobu Morishima, T. Maeda, H. Ando","doi":"10.1145/3214907.3214916","DOIUrl":"https://doi.org/10.1145/3214907.3214916","url":null,"abstract":"Galvanic tongue stimulation (GTS) is a technology used to change and induce taste sensation with electrical stimulation. It is known from previous studies that cathodal current stimulation induces two types of effects. The first is the taste suppression that renders the taste induced by electrolytic materials weaker during the stimulation. The second is taste enhancement that makes taste stronger shortly after ending the stimulation. These effects stand a better possibility to affect the ability to emulate taste, which can ultimately control the strength of taste sensation with freedom. Taste emulation has been considered in various applications, such as in virtual reality, in diet efforts, and in other applications. However, conventional GTS is associated with some problems. For example, the duration of taste enhancement is too short for use in diet efforts, and it necessitates the attachment of electrodes in the mouth. Moreover, conventional GTS cannot induce taste at the throat but at the mouth instead. Thus, this study and our associated demonstration introduces some approaches to address and solve these problems. Our approaches realize that taste changes voluntarily and the effects persist for lengthy periods of time.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123028898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hwan Kim, Hyeon-Beom Yi, Richard Chulwoo Park, Woohun Lee
We developed a tactile actuator named HapCube that provides tangential and normal pseudo-force feedback on a user's fingertip. The tangential feedback is generated by synthesizing two orthogonal asymmetric vibrations, and it simulates frictional force in any desired tangential direction. The normal feedback simulates the tactile sensations of pressing various types of buttons. In addition, by combining the two types of feedback, HapCube can render frictional force and surface texture simultaneously.
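The tangential channel relies on the well-known asymmetric-vibration illusion: a zero-mean waveform with a brief strong phase and a long weak phase is perceived as a net pull along the strong phase. A minimal sketch follows, with illustrative frequency and duty-cycle values rather than HapCube's actual drive parameters.

```python
import numpy as np

def asymmetric_wave(t, freq=75.0):
    """Zero-mean drive signal: a +1.0 pulse for a quarter period, then -1/3
    for the rest. The net impulse is zero (0.25 * 1.0 = 0.75 * 1/3), but the
    skin rectifies it into a perceived pull toward the strong pulse."""
    phase = (np.asarray(t) * freq) % 1.0
    return np.where(phase < 0.25, 1.0, -1.0 / 3.0)

def tangential_drive(t, direction_rad):
    """Project the waveform onto the two orthogonal actuators so the
    pseudo-force can point in any desired tangential direction."""
    w = asymmetric_wave(t)
    return np.cos(direction_rad) * w, np.sin(direction_rad) * w
```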
{"title":"Hapcube: a tactile actuator providing tangential and normal pseudo-force feedback on a fingertip","authors":"Hwan Kim, Hyeon-Beom Yi, Richard Chulwoo Park, Woohun Lee","doi":"10.1145/3214907.3214922","DOIUrl":"https://doi.org/10.1145/3214907.3214922","url":null,"abstract":"We developed a tactile actuator named HapCube that provides tangential and normal pseudo-force feedback on user's fingertip. The tangential feedback is generated by synthesizing two orthogonal asymmetric vibrations, and it simulates frictional force in any desired tangential directions. The normal feedback simulates tactile sensations when pressing various types of button. In addition, by combining the two feedbacks, it can produce frictional force and surface texture simultaneously.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126871485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Retinal projection is required for xR applications that deliver an immersive visual experience throughout the day. If general-purpose retinal projection methods could be realized at low cost, not only could images be displayed on the retina using less energy, but the weight of the projection unit itself might also be removed from AR goggles. Several retinal projection methods have been proposed previously: Maxwellian-optics-based retinal projection was proposed in the 1990s [Kollin 1993], and laser scanning [Liao and Tsai 2009] as well as laser projection using spatial light modulators (SLMs) or holographic optical elements have also been explored [Jang et al. 2017]. Commercially, a device by QD Laser with a viewing angle of 26 degrees is available. However, because the lens and iris of the eyeball sit in front of the retina, a limitation of the human eye, retinal projection proposals generally suffer from narrow viewing angles and small eyeboxes. Owing to these problems and the resulting difficulty of optical design, retinal projection displays remain a rare commodity.
{"title":"Make your own retinal projector: retinal near-eye displays via metamaterials","authors":"Yoichi Ochiai, Kazuki Otao, Yuta Itoh, Shouki Imai, Kazuki Takazawa, Hiroyuki Osone, Atsushi Mori, Ippei Suzuki","doi":"10.1145/3214907.3214910","DOIUrl":"https://doi.org/10.1145/3214907.3214910","url":null,"abstract":"Retinal projection is required for xR applications that can deliver immersive visual experience throughout the day. If general-purpose retinal projection methods can be realized at a low cost, not only could the image be displayed on the retina using less energy, but there is also a possibility of cutting off the weight of projection unit itself from the AR goggles. Several retinal projection methods have been previously proposed. Maxwellian optics based retinal projection was proposed in 1990s [Kollin 1993]. Laser scanning [Liao and Tsai 2009], laser projection using spatial light modulator (SLM) or holographic optical elements were also explored [Jang et al. 2017]. In the commercial field, QD Laser1 with a viewing angle of 26 degrees is available. However, as the lenses and iris of an eyeball are in front of the retina, which is a limitation of a human eyeball, the proposal of retinal projection is generally fraught with narrow viewing angles and small eyebox problems. Due to these problems, retinal projection displays are still a rare commodity because of their difficulty in optical schematics design.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121994663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Qian Zhou, Georg Hagemann, S. Fels, D. Fafard, A. J. Wagemakers, Chris Chamberlain, I. Stavness
Fish Tank Virtual Reality (FTVR) creates a compelling 3D illusion for a single person by rendering to their head-tracked perspective. Typically, however, other participants cannot share in the experience: they see a distorted image when they look at the FTVR display, which makes it difficult to work and play together. To overcome this problem, we have created CoGlobe, a large spherical FTVR display for multiple users. Using CoGlobe, SIGGRAPH attendees will experience the latest advance in FTVR, which supports multiple people co-located in a shared space working and playing together through two different multiplayer games and tasks. We have created a competitive two-person 3D Pong game (Figure 1b) in which attendees experience a highly interactive two-person game while looking at the CoGlobe; onlookers can also watch using a variation of mixed reality with a tracked mobile smartphone, and using a smartphone as a second screen registered to the same virtual world lets multiple people interact together as well. We have also created a cooperative multi-person 3D drone game (Figure 1c) to illustrate cooperation in FTVR, and attendees will see how effective co-located 3D FTVR is when cooperating on a complex 3D mental-rotation task (Figure 1d) and a path-tracing task (Figure 1a). CoGlobe overcomes the limited situational awareness of headset VR while retaining the benefits of cooperative 3D interaction, and it is thus an exciting direction for the next wave of 3D displays for work and play.
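The core of any FTVR display is re-rendering the scene from the tracked head position every frame, so the screen behaves like a window onto a virtual volume. Below is a simplified sketch of that per-viewer view matrix; a real spherical display additionally needs per-pixel projection onto the curved surface and display calibration, which this omits, and the sample positions are invented.

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Right-handed view matrix for a viewer at `eye` looking at `target`."""
    f = target - eye
    f /= np.linalg.norm(f)
    r = np.cross(f, up)
    r /= np.linalg.norm(r)
    u = np.cross(r, f)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ eye      # translate world into eye space
    return view

# Each frame, feed the tracker's head position back into the renderer so the
# perspective stays locked to the viewer ("fish tank" effect).
head_pos = np.array([0.3, 1.6, 0.8])       # meters, from the head tracker
sphere_center = np.zeros(3)                # display center in world space
view = look_at(head_pos, sphere_center)
```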
{"title":"Coglobe: a co-located multi-person FTVR experience","authors":"Qian Zhou, Georg Hagemann, S. Fels, D. Fafard, A. J. Wagemakers, Chris Chamberlain, I. Stavness","doi":"10.1145/3214907.3214914","DOIUrl":"https://doi.org/10.1145/3214907.3214914","url":null,"abstract":"Fish Tank Virtual Reality (FTVR) creates a compelling 3D illusion for a single person by rendering to their perspective with head-tracking. However, typically, other participants cannot share in the experience since they see a weirdly distorted image when they look at the FTVR display making it difficult to work and play together. To overcome this problem, we have created CoGlobe: a large spherical FTVR display for multiple users. Using CoGlobe, Siggraph attendees will experience the latest advance of FTVR that supports multiple people co-located in a shared space working and playing together through two different multiplayer games and tasks. We have created a competitive two-person 3D Pong game (Figure 1b) for attendees to experience a highly interactive two-person game looking at the CoGlobe. Onlookers can also watch using a variation of mixed reality with a tracked mobile smartphone. Using a smartphone as a second screen registered to the same virtual world enables multiple people to interact together as well. We have also created a cooperative multi-person 3D drone game (Figure 1c) to illustrate cooperation in FTVR. Attendees will also see how effective co-located 3D FTVR is when cooperating on a complex 3D mental rotation (Figure 1d) and a path-tracing task (Figure 1a). CoGlobe overcomes the limited situation awareness of headset VR, while retaining the benefits of cooperative 3D interaction and thus is an exciting direction for the next wave of 3D displays for work and fun for Siggraph attendees to experience.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125165839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual augmentation of the real environment has the potential not only to display information but also to provide a new perception of the physical world. However, currently available mixed reality technologies cannot provide a sufficient angle of view. We therefore introduce Headlight, a wearable projector system that provides wide egocentric visual augmentation. Our system consists of a small laser projector with a fish-eye wide-conversion lens, a headphone, and a pose tracker. Headlight provides a projection angle of approximately 105 degrees horizontally and 55 degrees vertically from the user's point of view. A three-dimensional virtual space consistent with the physical environment is rendered with a virtual camera driven by the device's tracking information. By applying an inverse correction for the lens distortion and projecting the rendered image, Headlight performs consistent visual augmentation in the real world. With Headlight, we envision that physical phenomena humans cannot normally perceive will become perceivable through visual augmentation.
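The "inverse correction of the lens distortion" step can be sketched with a standard radial-distortion model: pre-warp the rendered image by the inverse of the fish-eye lens's distortion so that, after passing through the lens, the projection lands geometrically correct on the environment. The coefficients below are illustrative placeholders, not Headlight's calibration, and a real fish-eye lens may need a higher-order or non-polynomial model.

```python
def predistort(u, v, k1=-0.28, k2=0.08):
    """Pre-warp normalized image coordinates (origin at the image center)
    with a Brown-Conrady-style radial polynomial. With coefficients fitted
    to invert the lens, projecting the warped image through the fish-eye
    lens yields an undistorted result on the environment."""
    r2 = u * u + v * v
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return u * scale, v * scale

# Example: a corner pixel is pulled inward before projection so the lens's
# barrel distortion pushes it back out to the intended position.
print(predistort(0.8, 0.45))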
{"title":"Headlight: egocentric visual augmentation by wearable wide projector","authors":"Shunichi Kasahara","doi":"10.1145/3214907.3214926","DOIUrl":"https://doi.org/10.1145/3214907.3214926","url":null,"abstract":"Visual augmentation to the real environment has potential not only to display information but also to provide a new perception of the physical world. However, the currently available mixed reality technologies could not provide enough angle of view. Thus, we introduce \"Headlight\", a wearable projector system that provides wide egocentric visual augmentation. Our system consists of a small laser projector with a fish-eye wider conversion lens, a headphone and a pose tracker. HeadLight provides projection angle with approx. 105 deg. horizontal and 55 deg. vertical from the point of view of the user. In this system, the three-dimensional virtual space that is consistent with the physical environment is rendered with a virtual camera based on tracking information of the device. By processing inverse correction of the lens distortion and projecting the rendered image from the projector, HeadLight performs consistent visual augmentation in the real world. With Headlight, we envision that physical phenomena that human could not perceive will be perceived through visual augmentation.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128072701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Spielmann, V. Helzle, Andreas Schuster, Jonas Trottnow, Kai Götz, Patricia Rohr
The work on intuitive Virtual Production tools at Filmakademie Baden-Württemberg has focused on an open platform tied to existing film-creation pipelines. The Virtual Production Editing Tools (VPET) began in an earlier project on Virtual Production funded by the European Union and are published and continuously updated on the open-source development platform GitHub. We introduce an intuitive workflow in which Augmented Reality, inside-out tracking, and real-time color keying can be applied on the fly to extend a real movie set with editable virtual extensions in a collaborative setup.
{"title":"VPET","authors":"S. Spielmann, V. Helzle, Andreas Schuster, Jonas Trottnow, Kai Götz, Patricia Rohr","doi":"10.1145/3214907.3233760","DOIUrl":"https://doi.org/10.1145/3214907.3233760","url":null,"abstract":"The work on intuitive Virtual Production tools at Filmakademie Baden-Württemberg has focused on an open platform tied to existing film creation pipelines. The Virtual Production Editing Tools (VPET) started in a former project on Virtual Production funded by the European Union and are published and constantly updated on the open source software development platform Github. We introduce an intuitive workflow where Augmented Reality, inside-out tracking and real-time color keying can be applied on the fly to extend a real movie set with editable, virtual extensions in a collaborative setup.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116310488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yong-Ho Lee, Mincheol Kim, Hwang-Youn Kim, Dongmyoung Lee, Bum-Jae You
In this research, we propose a cost-effective three-finger exoskeleton hand-motion-capture device and a physics-engine-based hand-interaction module for immersive manipulation of virtual objects. The device provides 12 degrees of freedom of finger-motion data through a unique bevel-gear structure and six 3D magnetic sensors. It keeps the error in the relative distance between two fingertips below 2 mm and allows the user to reproduce precise hand motion while the complex joint data is processed in real time. We synchronize hand motion with a physics-engine-based interaction framework that includes a grasp interpreter and multi-modal feedback in virtual reality to minimize penetration of the hand into objects. The system enables practical object manipulation across a variety of tasks in virtual environments.
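A grasp interpreter of the kind described can be approximated by a contact heuristic: treat an object as held when at least two fingers press it from roughly opposing sides, and while held, attach it kinematically to the hand so the fingers stop sinking into it. A minimal sketch with invented types and thresholds, not the authors' implementation:

```python
from dataclasses import dataclass
from itertools import combinations
import numpy as np

@dataclass
class Contact:
    object_id: int
    force: float          # contact normal force from the physics engine (N)
    normal: np.ndarray    # unit contact normal, pointing out of the object

def is_grasped(contacts, object_id, min_force=0.5):
    """True when two or more sufficiently firm contacts on the object have
    roughly opposing normals (dot < -0.5, i.e. more than 120 degrees apart).
    While True, the object would be attached to the hand instead of being
    resolved by contact forces, preventing finger penetration."""
    pressing = [c for c in contacts if c.object_id == object_id
                and c.force > min_force]
    return any(np.dot(a.normal, b.normal) < -0.5
               for a, b in combinations(pressing, 2))
```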
{"title":"CHICAP","authors":"Yong-Ho Lee, Mincheol Kim, Hwang-Youn Kim, Dongmyoung Lee, Bum-Jae You","doi":"10.1145/3214907.3214924","DOIUrl":"https://doi.org/10.1145/3214907.3214924","url":null,"abstract":"In the research, we propose a cost-effective 3-finger exoskeleton hand motion-capturing device and a physics engine-based hand interaction module for immersive experience in manipulation of virtual objects. The developed device provides 12 DOFs data of finger motion by a unique bevel-gear structure as well as the use of six 3D magnetic sensors. It shows a small error in relative distance between two fingertips less than 2 mm and allows the user to reproduce precise hand motion while processing the complex joint data in real-time. We synchronize hand motion with a physics engine-based interaction framework that includes a grasp interpreter and multi-modal feedback operation in virtual reality to minimize penetration of a hand into an object. The system enables feasibility of object manipulation as far as the needs go in various tasks in virtual environment.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128806868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}