AI-Mediated 3D Video Conferencing
Michael Stengel, Koki Nagano, Chao Liu, Matthew Chan, Alex Trevithick, Shalini De Mello, Jonghyun Kim, D. Luebke
https://doi.org/10.1145/3588037.3595385
We present an AI-mediated 3D video conferencing system that can reconstruct and autostereoscopically display a life-sized talking head using consumer-grade compute resources and minimal capture equipment. Our 3D capture uses a novel 3D lifting method that encodes a given 2D input into an efficient triplanar neural representation of the user, which can be rendered from novel viewpoints in real time. Our AI-based techniques drastically reduce the cost of 3D capture while providing a high-fidelity 3D representation on the receiver's end at the bandwidth cost of traditional 2D video streaming. Additional advantages of our AI-based approach include support for both photorealistic and stylized avatars and the ability to enable mutual eye contact in multi-directional video conferencing. We demonstrate our system using a tracked stereo display for a personal viewing experience as well as a lightfield display for a room-scale multi-viewer experience.
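The abstract does not detail how the triplanar representation is queried; as a rough illustration of how triplane features are typically sampled and decoded (in the style of EG3D-like triplane pipelines, not necessarily the authors' implementation), consider the PyTorch sketch below. The dimensions, names, and toy decoder are illustrative assumptions.

```python
# Illustrative EG3D-style triplane lookup (an assumption about how such a
# representation is typically queried, not the authors' code): each 3D point
# is projected onto the XY, XZ, and YZ feature planes, the three bilinear
# samples are summed, and a small MLP decodes color and density.
import torch
import torch.nn.functional as F

def sample_triplane(planes, points):
    """planes: (3, C, H, W) feature planes; points: (N, 3) in [-1, 1]^3."""
    coords = torch.stack([points[:, [0, 1]],   # XY plane
                          points[:, [0, 2]],   # XZ plane
                          points[:, [1, 2]]])  # YZ plane
    grid = coords.unsqueeze(1)                 # (3, 1, N, 2) sampling grid
    feats = F.grid_sample(planes, grid, mode='bilinear',
                          align_corners=False)  # (3, C, 1, N)
    return feats.squeeze(2).sum(dim=0).t()      # (N, C), plane features summed

# Toy decoder mapping 32-dim features to RGB + density.
decoder = torch.nn.Sequential(
    torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))

planes = torch.randn(3, 32, 256, 256)       # stand-in for the encoder output
pts = torch.rand(1024, 3) * 2 - 1           # query points along camera rays
rgb_sigma = decoder(sample_triplane(planes, pts))   # (1024, 4)
```

Collapsing the 3D volume onto three 2D feature planes is what keeps per-frame encoding and novel-view rendering cheap enough for real-time conferencing.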
{"title":"AI-Mediated 3D Video Conferencing","authors":"Michael Stengel, Koki Nagano, Chao Liu, Matthew Chan, Alex Trevithick, Shalini De Mello, Jonghyun Kim, D. Luebke","doi":"10.1145/3588037.3595385","DOIUrl":"https://doi.org/10.1145/3588037.3595385","url":null,"abstract":"We present an AI-mediated 3D video conferencing system that can reconstruct and autostereoscopically display a life-sized talking head using consumer-grade compute resources and minimal capture equipment. Our 3D capture uses a novel 3D lifting method that encodes a given 2D input into an efficient triplanar neural representation of the user, which can be rendered from novel viewpoints in real-time. Our AI-based techniques drastically reduce the cost for 3D capture, while providing a high-fidelity 3D representation on the receiver’s end at the cost of traditional 2D video streaming. Additional advantages of our AI-based approach include the ability to accommodate both photorealistic and stylized avatars, and the ability to enable mutual eye contact in multi-directional video conferencing. We demonstrate our system using a tracked stereo display for a personal viewing experience as well as a lightfield display for a room-scale multi-viewer experience.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"178 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114092561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose Transtiff, a stick-shaped device that can present varying stiffness for stick-based haptic interaction. The device changes its stiffness through a joint in the relay portion of the stick that replicates an artificial-muscle mechanism. Transtiff can be applied to touchscreen interaction, augmenting the haptic experience of operating with a stylus pen, which usually feels uniform. As an application, users can experience the sensations of pen and brush writing on a single device. In addition, the device's stiffness can be changed for each object on the screen to reproduce the tactile feel of that object.
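The abstract does not specify Transtiff's control interface; purely as a hypothetical sketch of the per-object stiffness idea, the snippet below maps the on-screen material under the stylus tip to a stiffness command. The `STIFF` serial command, the port, and the stiffness values are all invented for illustration.

```python
# Hypothetical per-object stiffness loop (illustrative only; the abstract does
# not describe Transtiff's actual control interface or command format).
import serial

# Assumed mapping from on-screen material to normalized stiffness in [0, 1].
STIFFNESS = {"metal": 1.0, "wood": 0.7, "rubber": 0.3, "sponge": 0.1}

def update_stiffness(port: serial.Serial, material: str) -> None:
    """Command the artificial-muscle joint whenever the stylus tip moves
    over a new on-screen object ("STIFF" is an invented protocol)."""
    k = STIFFNESS.get(material, 0.5)           # default: medium stiffness
    port.write(f"STIFF {k:.2f}\n".encode())

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as dev:
    update_stiffness(dev, "rubber")            # soft, brush-like response
```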
{"title":"Transtiff: Haptic Interaction with a Stick Interface with Various Stiffness","authors":"Ayumu Ogura, Kodai Ito, Shigeo Yoshida, Kazutoshi Tanaka, Yuichi Itoh","doi":"10.1145/3588037.3595402","DOIUrl":"https://doi.org/10.1145/3588037.3595402","url":null,"abstract":"We propose Transtiff, a stick-shaped device that can display various stiffness for stick-based haptic interaction. The device has a stiffness-changing joint replicating an artificial muscle mechanism in the relay portion of the stick to change its stiffness. Transtiff can be applied to touch interaction of the screen, augmenting the haptic experience of operating with a stylus pen, which is usually felt uniform. As applications, users can experience the sensation of pen and brush writing on a single device. In addition, it is possible to change the stiffness of the device for each object on the screen to reproduce the tactile feel of that object.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115061720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Brain-Machine Interface for neurorehabilitation and human augmentation: Applications of BMI technology and prospects","authors":"J. Ushiba, M. Hayashi, Seitaro Iwama","doi":"10.1145/3588037.3605555","DOIUrl":"https://doi.org/10.1145/3588037.3605555","url":null,"abstract":"Although there is no distinctive header, this is the abstract. This submission template allows authors to submit their papers for review to an ACM Conference or Journal without any output design specifications incorporated at this point in the process. The ACM manuscript template is a single column document that allows authors to type their content into the pre-existing set of paragraph formatting styles applied to the sample placeholder text here. Throughout the document you will find further instructions on how to format your text. If your conference's review process will be double-blind: The submitted document should not include author information and should not include acknowledgments, citations or discussion of related work that would make the authorship apparent. Submissions containing author identifying information may be subject to rejection without review. Upon acceptance, the author and affiliation information must be added to your paper.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"291 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116454660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yang Zhao, D. Lindberg, Bruce Cleary, O. Mercier, Ryan Mcclelland, Eric Penner, Yu-Jen Lin, Julia Majors, Douglas Lanman
https://doi.org/10.1145/3588037.3595389
We develop a virtual reality (VR) head-mounted display (HMD) that achieves near-retinal resolution with an angular pixel density of up to 56 pixels per degree (PPD), supports a wide range of eye accommodation from 0 to 4 diopters (i.e., infinity to 25 cm), and matches the dynamics of eye accommodation with at least 10 diopter/s peak velocity and 100 diopter/s² acceleration. The system includes a high-resolution optical design; a mechanically actuated, eye-tracked varifocal display that follows the user's vergence point; and a closed-loop display distortion rendering pipeline that ensures VR content remains correct in perspective despite the varying display magnification. To our knowledge, this work is the first VR HMD prototype that approaches retinal resolution and fully supports human eye accommodation in both range and dynamics. We present this installation to exhibit the visual benefits of varifocal displays, particularly for high-resolution, near-field interaction tasks such as reading text and working with 3D models in VR.
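As a back-of-the-envelope check on the quoted actuator dynamics (at least 10 diopter/s peak velocity and 100 diopter/s² acceleration), a rest-to-rest trapezoidal motion profile under those limits sweeps the full 0–4 diopter range in about half a second; a minimal sketch, assuming symmetric acceleration and deceleration:

```python
# Back-of-the-envelope travel time for a varifocal actuator, using a
# rest-to-rest trapezoidal velocity profile and the limits quoted in the
# abstract: v_max = 10 diopter/s, a_max = 100 diopter/s^2.

def travel_time(distance_d: float, v_max: float = 10.0,
                a_max: float = 100.0) -> float:
    """Seconds to traverse `distance_d` diopters with symmetric accel/decel."""
    t_ramp = v_max / a_max                  # time to reach peak velocity
    d_ramp = 0.5 * a_max * t_ramp ** 2      # diopters covered per ramp
    if distance_d <= 2 * d_ramp:            # short move: triangular profile
        return 2 * (distance_d / a_max) ** 0.5
    cruise = (distance_d - 2 * d_ramp) / v_max
    return 2 * t_ramp + cruise

print(travel_time(4.0))    # full 0-4 diopter sweep: 0.5 s
print(travel_time(0.25))   # small refocus: 0.1 s
```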
{"title":"Retinal-Resolution Varifocal VR","authors":"Yang Zhao, D. Lindberg, Bruce Cleary, O. Mercier, Ryan Mcclelland, Eric Penner, Yu-Jen Lin, Julia Majors, Douglas Lanman","doi":"10.1145/3588037.3595389","DOIUrl":"https://doi.org/10.1145/3588037.3595389","url":null,"abstract":"We develop a virtual reality (VR) head-mounted display (HMD) that achieves near retinal resolution with an angular pixel density up to 56 pixels per degree (PPD), supporting a wide range of eye accommodation from 0 to 4 diopter (i.e. infinity to 25 cm), and matching the dynamics of eye accommodation with at least 10 diopter/s peak velocity and 100 diopter/s2 acceleration. This system includes a high-resolution optical design, a mechanically actuated, eye-tracked varifocal display that follows the user’s vergence point, and a closed-loop display distortion rendering pipeline that ensures VR content remains correct in perspective despite the varying display magnification. To our knowledge, this work is the first VR HMD prototype that approaches retinal resolution and fully supports human eye accommodation in range and dynamics. We present this installation to exhibit the visual benefits of varifocal displays, particularly for high-resolution, near-field interaction tasks, such as reading text and working with 3D models in VR.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125213291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Romain Nith, Jacob Serfaty, Samuel G Shatzkin, Alan Shen, Pedro Lopes
https://doi.org/10.1145/3588037.3595387
Vertical force feedback is extremely rare in mainstream interactive experiences. This is because existing haptic devices capable of forces strong enough to modify a user's jump require grounding (e.g., motion platforms or pulleys) or cumbersome actuators (e.g., large propellers attached to or held by the user). To enable interactive experiences to feature jump-based haptics without sacrificing wearability, we propose JumpMod, an untethered backpack that modifies one's sense of jumping. JumpMod achieves this by moving a weight up/down along the user's back, which modifies perceived jump momentum—creating accelerated and decelerated jump sensations. Our device can render five distinct effects: jump higher, land harder/softer, and being pulled higher/lower, which we demonstrate at SIGGRAPH 2023 Emerging Technologies in two jump-based VR experiences.
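The abstract does not state the moving mass or control law; as a hedged sketch of the underlying physics, the actuator's push on the weight produces an equal-and-opposite reaction on the wearer, so timing the weight's motion to the jump phase shifts perceived momentum. The 2 kg mass and the accelerations below are assumed values.

```python
# Hedged reaction-force model for a back-mounted moving weight (the 2 kg mass
# and the accelerations are assumed values, not JumpMod's actual parameters).
M_WEIGHT = 2.0   # kg, hypothetical moving mass
G = 9.81         # m/s^2

def actuator_reaction(weight_accel_down: float) -> float:
    """Reaction force on the wearer (N, positive = upward) when the actuator
    drives the weight at `weight_accel_down` m/s^2 (positive = downward).
    Gravity supplies G of the downward acceleration for free; the actuator
    provides the rest, and the wearer feels its equal-and-opposite reaction."""
    return M_WEIGHT * (weight_accel_down - G)

# Flinging the weight down at 25 m/s^2 at takeoff -> ~30 N upward transient
# on the wearer, plausibly read as "jump higher".
print(actuator_reaction(25.0))
# Yanking the weight up at 25 m/s^2 during descent -> ~70 N downward
# transient, plausibly read as "land harder".
print(actuator_reaction(-25.0))
```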
{"title":"Demonstrating JumpMod: Haptic Backpack that Modifies Users' Perceived Jump","authors":"Romain Nith, Jacob Serfaty, Samuel G Shatzkin, Alan Shen, Pedro Lopes","doi":"10.1145/3588037.3595387","DOIUrl":"https://doi.org/10.1145/3588037.3595387","url":null,"abstract":"Vertical force-feedback is extremely rare in mainstream interactive experiences. This happens because existing haptic devices capable of sufficiently strong forces that would modify a user's jump require grounding (e.g., motion platforms or pulleys) or cumbersome actuators (e.g., large propellers attached or held by the user). To enable interactive experiences to feature jump-based haptics without sacrificing wearability, we propose JumpMod, an untethered backpack that modifies one's sense of jumping. JumpMod achieves this by moving a weight up/down along the user's back, which modifies perceived jump momentum—creating accelerated & decelerated jump sensations. Our device can render five distinct effects: jump higher, land harder/softer, pulled higher/lower, which we demonstrate at SIGGRAPH 2023 Emerging Technologies in two jump-based VR experiences.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128105724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Khrystyna Vasylevska, Tobias Batik, Hugo Brument, Kiumars Sharifmoghaddam, G. Nawratil, Emanuel Vonach, Soroosh Mortezapoor, H. Kaufmann
https://doi.org/10.1145/3588037.3595393
Origami offers an innovative way to implement haptic interaction with minimal actuation, particularly in immersive encountered-type haptics and robotics. This paper presents two novel action-origami-inspired haptic devices for Virtual Reality (VR). The Zipper Flower Tube is a rigid-foldable origami structure that can provide different stiffness sensations to simulate the elastic response of a material. The Shiftly is a shape-shifting haptic display that employs origami to enable a real-time experience of the shapes and edges of virtual objects or the softness of materials. The modular approach of our action-origami haptic devices provides a high-fidelity, energy-efficient, and low-cost solution for interacting with virtual materials and objects in VR.
{"title":"Action-Origami Inspired Haptic Devices for Virtual Reality","authors":"Khrystyna Vasylevska, Tobias Batik, Hugo Brument, Kiumars Sharifmoghaddam, G. Nawratil, Emanuel Vonach, Soroosh Mortezapoor, H. Kaufmann","doi":"10.1145/3588037.3595393","DOIUrl":"https://doi.org/10.1145/3588037.3595393","url":null,"abstract":"Origami offers an innovative way to implement haptic interaction with minimum actuation, particularly in immersive encountered-type haptics and robotics. This paper presents two novel action-origami-inspired haptic devices for Virtual Reality (VR). The Zipper Flower Tube is a rigid-foldable origami structure that can provide different stiffness sensations to simulate the elastic response of a material. The Shiftly is a shape-shifting haptic display that employs origami to enable a real-time experience of different shapes and edges of virtual objects or the softness of materials. The modular approach of our action origami haptic devices provides a high-fidelity, energy-efficient and low-cost solution for interacting with virtual materials and objects in VR.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"15 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116813975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper introduces a system that focuses on physics-based manipulation and haptic rendering to achieve realistic dexterous manipulation of virtual objects in VR environments. The system uses coreless motors with wire as the haptic actuators, driven through multi-channel audio signals, together with a software physics engine that creates a virtual hand providing the haptic feedback. The device simulates contact collision, pressure, and friction, including stick-slip, to provide users with a realistic and immersive experience. It is lightweight and does not interfere with real-world operations or with the performance of vision-based hand-tracking technology.
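Neither the friction model nor the audio encoding is specified in the abstract; a minimal sketch, assuming a Coulomb stick-slip test and a decaying-sinusoid transient rendered to the audio channel that drives an actuator, might look like this:

```python
# Minimal sketch: Coulomb stick-slip detection plus a decaying-sinusoid
# transient written to the audio channel driving an actuator (the friction
# model, frequency, and amplitude here are assumptions, not the paper's).
import numpy as np

RATE = 48000  # audio sample rate, Hz

def friction_state(tangential_n: float, normal_n: float,
                   mu_s: float = 0.6) -> str:
    """Slip when the tangential force exceeds the static-friction limit."""
    return "slip" if abs(tangential_n) > mu_s * normal_n else "stick"

def slip_transient(amplitude: float, freq_hz: float = 250.0,
                   decay_s: float = 0.02) -> np.ndarray:
    """Decaying sinusoid emitted at each stick-to-slip transition."""
    t = np.arange(int(RATE * decay_s * 5)) / RATE
    return amplitude * np.exp(-t / decay_s) * np.sin(2 * np.pi * freq_hz * t)

if friction_state(tangential_n=3.0, normal_n=4.0) == "slip":
    buf = slip_transient(amplitude=0.5)   # queue to the audio output device
```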
{"title":"Realistic Dexterous Manipulation of Virtual Objects with Physics-Based Haptic Rendering","authors":"Yunxiu Xu, Siyu Wang, S. Hasegawa","doi":"10.1145/3588037.3595400","DOIUrl":"https://doi.org/10.1145/3588037.3595400","url":null,"abstract":"This paper introduces a system that focuses on physics-based manipulation and haptic rendering to achieve realistic dexterous manipulation of virtual objects in VR environments. The system uses a coreless motor with wire as the haptic actuator and physics engine in the software to create a virtual hand that provides haptic feedback through multi-channel audio signals. The device simulates contact collision, pressure, and friction, including stick-slip, to provide users with a realistic and immersive experience. Our device is lightweight and does not interfere with real-world operations or the performance of vision-based hand-tracking technology.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130787922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Untethered VR/AR HMDs typically last only 2-3 hours on a single charge. Toward resolving this issue, we develop a real-time, gaze-contingent power-saving filter that modulates peripheral pixel color while preserving visual fidelity. At SIGGRAPH 2023, participants will be able to view a short panoramic video within a VR HMD with our perceptually aware power-saving filter turned on. Participants will also have the opportunity to view the power output of scenes through our power measurement setup.
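The paper's perceptual and power models are not reproduced here; as a simplified stand-in that conveys the idea, the sketch below dims pixels with eccentricity from the gaze point and estimates savings with an assumed per-channel OLED power weighting. The weights, falloff, and fovea radius are illustrative, and the real filter modulates color under a perceptual model rather than simply dimming.

```python
# Simplified gaze-contingent dimming sketch (illustrative stand-in; the
# paper's filter modulates color under a perceptual model rather than simply
# dimming, and the OLED power weights below are assumed values).
import numpy as np

POWER_W = np.array([0.4, 0.3, 0.6])   # assumed per-channel power weights (R, G, B)

def eccentricity_map(h, w, gaze_xy):
    """Pixel distance (px) from the current gaze point."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])

def power_filter(img, gaze_xy, fovea_px=200, max_dim=0.3):
    """Leave the fovea untouched; scale peripheral pixels down by up to
    `max_dim` as eccentricity grows."""
    ecc = eccentricity_map(*img.shape[:2], gaze_xy)
    scale = 1.0 - max_dim * np.clip((ecc - fovea_px) / ecc.max(), 0.0, 1.0)
    return img * scale[..., None]

img = np.random.rand(1080, 1200, 3)            # linear-RGB frame (stand-in)
out = power_filter(img, gaze_xy=(600, 540))
saved = 1 - (out * POWER_W).sum() / (img * POWER_W).sum()
print(f"estimated display power reduction: {saved:.1%}")
```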
{"title":"Imperceptible Color Modulation for Power Saving in VR/AR","authors":"Kenneth Chen, Budmonde Duinkharjav, Nisarg Ujjainkar, Ethan Shahan, Abhishek Tyagi, Jiaying He, Yuhao Zhu, Qiuyue Sun","doi":"10.1145/3588037.3595388","DOIUrl":"https://doi.org/10.1145/3588037.3595388","url":null,"abstract":"Untethered VR/AR HMDs can only last 2-3 hours on a single charge. Toward resolving this issue, we develop a real-time gaze-contingent power saving filter which modulates peripheral pixel color while preserving visual fidelity. At SIGGRAPH 2023, participants will be able to view a short panoramic video within a VR HMD with our perceptually-aware power saving filter turned on. Participants will also have the opportunity to view the power output of scenes through our power measurement setup.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"104 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131014925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Suyeon Choi, Manu Gopakumar, Brian Chao, Gunhee Lee, Jonghyun Kim, Gordon Wetzstein
https://doi.org/10.1145/3588037.3595395
By manipulating light as a wavefront, holographic displays have the potential to revolutionize virtual reality (VR) and augmented reality (AR) systems. These displays offer 3D focus cues for visual comfort, vision-correcting capabilities, and high light efficiency. However, despite this promise, holographic displays have consistently been hampered by poor image quality. Recently, artificial intelligence–driven computer-generated holography (CGH) algorithms have emerged as a solution to this obstacle. On a prototype holographic display, we demonstrate how recent state-of-the-art Neural Holography algorithms produce high-quality, dynamic 3D holograms with accurate focus cues. The advances demonstrated in this work aim to provide a glimpse into a future where our displays can fully reproduce three-dimensional virtual content.
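As a flavor of how SGD-based CGH of this kind works (a minimal sketch in the spirit of Neural Holography, not the authors' code), one can optimize a phase-only SLM pattern so that its propagated field matches a target amplitude; the resolution, pixel pitch, wavelength, and distance below are illustrative.

```python
# Minimal SGD-based computer-generated holography sketch (illustrative; not
# the authors' code). A phase-only SLM pattern is optimized so that the
# field, after free-space propagation, matches the target image amplitude.
import torch

def asm_kernel(n, pitch, wavelength, z):
    """Angular-spectrum transfer function for propagation distance z."""
    fx = torch.fft.fftfreq(n, d=pitch)
    FX, FY = torch.meshgrid(fx, fx, indexing='ij')
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * torch.pi / wavelength * torch.sqrt(arg.clamp(min=0))
    return torch.exp(1j * kz * z)

def propagate(field, H):
    """Free-space propagation: multiply by H in the Fourier domain."""
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

n, pitch, wl, z = 256, 8e-6, 520e-9, 0.05    # SLM res, pixel pitch, green, 5 cm
target = torch.rand(n, n)                    # stand-in target amplitude
H = asm_kernel(n, pitch, wl, z)
phase = torch.zeros(n, n, requires_grad=True)    # phase-only SLM variable
opt = torch.optim.Adam([phase], lr=0.05)

for _ in range(500):
    opt.zero_grad()
    recon = propagate(torch.exp(1j * phase), H).abs()
    loss = torch.nn.functional.mse_loss(recon, target)
    loss.backward()
    opt.step()
```

In the full systems, the idealized propagation kernel is replaced by a camera-calibrated or learned model of the physical display, which is what closes the image-quality gap.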
{"title":"Neural Holographic Near-eye Displays for Virtual Reality","authors":"Suyeon Choi, Manu Gopakumar, Brian Chao, Gunhee Lee, Jonghyun Kim, Gordon Wetzstein","doi":"10.1145/3588037.3595395","DOIUrl":"https://doi.org/10.1145/3588037.3595395","url":null,"abstract":"By manipulating light as a wavefront, holographic displays have the potential to revolutionize virtual reality (VR) and augmented reality (AR) systems. These displays support 3D focus cues for visual comfort, vision correcting capabilities, and high light efficiency. However, despite their incredible promise, holographic displays have consistently been hampered by poor image quality. Recently, artificial intelligence–driven computer-generated holography (CGH) algorithms have emerged as a solution to this obstacle. On a prototype holographic display, we demonstrate how the progress of recent state-of-the-art Neural Holography algorithms can produce high-quality dynamic 3D holograms with accurate focus cues. The advances demonstrated in this work aim to provide a glimpse into a future where our displays can fully reproduce three-dimensional virtual content.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"423 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116688634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}