Single-Shot VR
Yingsi Qin, Wei-yu Chen, Matthew O’Toole, Aswin C. Sankaranarayanan
The physical world contains content at varying depths, and our eye tenses or relaxes to focus at different distances; this is commonly referred to as the accommodation cue. For a realistic 3D viewing experience, it is crucial to support this cue: the 3D display needs to show content at different depths. However, supporting the eye's native focusing has been an immense challenge for 3D displays. Commercial near-eye VR displays, which use binocular disparity as the primary cue for inducing depth perception, fail this challenge because all of the content they show arises from a single fixed depth, ignoring the focusing of the eye. Many research prototypes of VR displays do account for the accommodation cue; however, supporting it invariably comes at the cost of performance on other criteria typically used to assess 3D displays. To tackle these challenges, we present a novel kind of near-eye 3D display that can create 3D scenes with realistic accommodation cues in a single shot, i.e., without time multiplexing or eye tracking. The display, which we present in our demo, streams 3D content over a large depth range, at 4K spatial resolution, and in real time. It offers an exciting step toward a truly immersive, real-time 3D experience. Participants will get to enjoy 3D movies and play interactive games during the demo.
{"title":"Single-Shot VR","authors":"Yingsi Qin, Wei-yu Chen, Matthew O’Toole, Aswin C. Sankaranarayanan","doi":"10.1145/3588037.3595396","DOIUrl":"https://doi.org/10.1145/3588037.3595396","url":null,"abstract":"The physical world has contents at varying depths, allowing our eye to squish or relax to focus at different distances; this is commonly referred to as the accommodation cue for human eyes. To allow a realistic 3D viewing experience, it is crucial to support the accommodation cue—the 3D display needs to show contents at different depths. However, supporting the native focusing of the eye has been an immense challenge to 3D displays. Commercial near-eye VR displays, which use binocular disparity as the primary cue for inducing depth perception, fail this challenge since all contents they show arise from a fixed depth—ignoring the focusing of the eye. Many research prototypes of VR displays do account for the accommodation cue; however, supporting accommodation cues invariably comes with performance loss among other typically assessed criteria for 3D displays. To tackle these challenges, we present a novel kind of near-eye 3D display that can create 3D scenes supporting realistic accommodation cues in a single shot, i.e., without using time multiplexing or eye tracking. This display, which we present in our demo, can stream 3D content over a large depth range, at 4K spatial resolution, and in real-time. Our display offers an exciting step forward towards a truly immersive real-time 3D experience. Participants will get to enjoy 3D movies and play interactive games in their demo experience.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123477260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Interactive Showcase of RCSketch: Sketch, Build, and Control Your Dream Vehicles
Han-Eul Kim, Jaeho Sung, Joon Hyub Lee, Seok-Hyung Bae
We present RCSketch, the award-winning interactive system that lets anyone sketch their dream vehicles in 3D, build the moving structures of those vehicles, and control them from multiple viewpoints. Visitors to this interactive showcase can use our system to design vehicles of their own and perform a wide variety of realistic movements across the vast digital landscape onboard their vehicles.
{"title":"An Interactive Showcase of RCSketch: Sketch, Build, and Control Your Dream Vehicles","authors":"Han-Eul Kim, Jaeho Sung, Joon Hyub Lee, Seok-Hyung Bae","doi":"10.1145/3588037.3595398","DOIUrl":"https://doi.org/10.1145/3588037.3595398","url":null,"abstract":"We present RCSketch, the award-winning interactive system that lets anyone sketch their dream vehicles in 3D, build moving structures of those vehicles, and control them from multiple viewpoints. Visitors to this interactive showcase are able to use our system and design vehicles of their own and perform a wide variety of realistic movements across the vast digital landscape onboard their vehicles.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129621193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LivEdge: Haptic Live Stream Interaction on a Smartphone by Electro-Tactile Sensation Through the Edges
Taiki Takami, Taiga Saito, Takayuki Kameoka, H. Kajimoto
We present LivEdge, a novel method for live-stream interaction on smartphones using electro-tactile sensation delivered through the phone's edges. Conventional interaction between users and a streamer on a smartphone is limited to the streamer responding to user comments or effects. Our goal is to provide more immersive interaction through haptic technology. LivEdge conveys spatial tactile sensations via electrical stimulation from electrode arrays affixed to both edges of the smartphone. This spatial tactile stimulus represents the streamer's physical presence and movements in contact with the edge of the screen. A preliminary experiment showed that LivEdge enhances the live-stream experience.
{"title":"LivEdge: Haptic Live Stream Interaction on a Smartphone by Electro-Tactile Sensation Through the Edges","authors":"Taiki Takami, Taiga Saito, Takayuki Kameoka, H. Kajimoto","doi":"10.1145/3588037.3595386","DOIUrl":"https://doi.org/10.1145/3588037.3595386","url":null,"abstract":"We present LivEdge, a novel method for live stream interaction on smartphones utilizing electro-tactile sensation through the edges. Conventional interactions between users and a streamer on a smartphone are restricted to the streamer’s response through user comments or effects. Our goal is to provide a more immersive interaction through the use of haptic technology. LivEdge can convey spatial tactile sensations through electrical stimulations from electrode arrays affixed to both edges of the smartphone. This spatial tactile stimulus represents the streamer’s physical presence and movements in contact with the edge of the screen. Preliminary experiment showed LivEdge enhances the live stream experience.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127752027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SomatoShift: A Wearable Haptic Display for Somatomotor Reconfiguration via Modifying Acceleration of Body Movement
Takeru Hashimoto, Shigeo Yoshida, Takuji Narumi
This paper proposes a wearable haptic device that uses control moment gyroscopes and a motion sensor to achieve somatomotor reconfiguration, altering the user's somatic perception of their body. The device can manipulate sensations, making body parts feel heavier or lighter, and modify the ease of movement during interactions with objects. Given its potential applications in avatar technology, sports, and assistive technology, the proposed device represents a promising avenue for enriching the user's bodily experiences.
{"title":"SomatoShift: A Wearable Haptic Display for Somatomotor Reconfiguration via Modifying Acceleration of Body Movement","authors":"Takeru Hashimoto, Shigeo Yoshida, Takuji Narumi","doi":"10.1145/3588037.3595390","DOIUrl":"https://doi.org/10.1145/3588037.3595390","url":null,"abstract":"This paper proposes a wearable haptic device that utilizes control moment gyroscopes and a motion sensor to achieve somatomotor reconfiguration, altering the user’s somatic perception of their body. The device can manipulate sensations, making body parts feel heavier or lighter, and modify the ease of movement during interactions with objects. Given its potential applications in avatar technology, sports, and assistive technology, this proposed device represents a promising avenue for enriching the user’s bodily experiences.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128627454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Material Texture Design: Texture Representation System Utilizing Pseudo-Attraction Force Sensation
Masaharu Hirose, M. Inami
We propose Material Texture Design, a material texture representation system. The system presents a pseudo-attraction force sensation in response to the user's motion and displays a shear sensation at the fingertips. The user perceives a change in the center of gravity from the shear sensation and feels an artificial material texture. Experimental results showed that the perceived texture can be changed by adjusting the frequency. In the demonstration, users can distinguish textures such as water, jelly, or a rubber ball, depending on the frequency and latency. We propose this system as a small, lightweight, and simple implementation for texture representation.
{"title":"Material Texture Design: Texture Representation System Utilizing Pseudo-Attraction Force Sensation","authors":"Masaharu Hirose, M. Inami","doi":"10.1145/3588037.3595397","DOIUrl":"https://doi.org/10.1145/3588037.3595397","url":null,"abstract":"We propose Material Texture Design, a material texture representation system. This system presents a pseudo-attraction force sensation in response to the user’s motion, and displays a shear sensation at the fingertips. The user perceives a change in the center of gravity from the shear sensation and feels the artificial material texture. Experimental results showed that the perceived texture could be changed by adjusting the frequency. Through demonstration, users can distinguish different textures such as water, jelly, or a rubber ball, depending on the frequency and latency. We propose this system as a small, lightweight, and simple implementation system for texture representation.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127669381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reprojection-Free VR Passthrough
Grace Kuo, Eric Penner, Seth Moczydlowski, Alexander Ching, Douglas Lanman, N. Matsuda
Virtual reality (VR) passthrough uses external cameras on the front of a headset to allow the user to see their environment. However, passthrough cameras cannot physically be co-located with the user's eyes, so the passthrough images have a different perspective than what the user would see without the headset. Although the images can be computationally reprojected into the desired view, errors in depth estimation and missing information at occlusion boundaries can lead to undesirable artifacts. We propose a novel computational camera that directly samples the rays that would have gone into the user's eye, several centimeters behind the sensor. Our design contains an array of lenses with an aperture behind each lens, and the apertures are strategically placed to allow through only the desired rays. The resulting thin, flat architecture has a suitable form factor for VR, and the image reconstruction is computationally lightweight, enabling low-latency passthrough. We demonstrate our approach experimentally in a fully functional binocular passthrough prototype with practical calibration and real-time image reconstruction.
{"title":"Reprojection-Free VR Passthrough","authors":"Grace Kuo, Eric Penner, Seth Moczydlowski, Alexander Ching, Douglas Lanman, N. Matsuda","doi":"10.1145/3588037.3595391","DOIUrl":"https://doi.org/10.1145/3588037.3595391","url":null,"abstract":"Virtual reality (VR) passthrough uses external cameras on the front of a headset to allow the user to see their environment. However, passthrough cameras cannot physically be co-located with the user’s eyes, so the passthrough images have a different perspective than what the user would see without the headset. Although the images can be computationally reprojected into the desired view, errors in depth estimation and missing information at occlusion boundaries can lead to undesirable artifacts. We propose a novel computational camera that directly samples the rays that would have gone into the user’s eye, several centimeters behind the sensor. Our design contains an array of lenses with an aperture behind each lens, and the apertures are strategically placed to allow through only the desired rays. The resulting thin, flat architecture has suitable form factor for VR, and the image reconstruction is computationally lightweight, enabling low-latency passthrough. We demonstrate our approach experimentally in a fully functional binocular passthrough prototype with practical calibration and real-time image reconstruction.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126103500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SyncArms: Gaze-Driven Target Object-oriented Manipulation for Parallel Operation of Robot Arms in Distributed Physical Environments
Koki Kawamura, Shunichi Kasahara, M. Fukuoka, Katsutoshi Masai, Ryota Kondo, M. Sugimoto
Enhancing human capabilities through the use of multiple bodies has been a significant research agenda. When multiple bodies are operated synchronously in different environments, differences in how the environments are laid out make it difficult to interact with objects simultaneously. Conversely, if automatic control is applied to compensate for these differences and carry out tasks in parallel, the mismatch between the user's movements and the robotic arm's movements generates visuomotor incongruence, leading to a decline in the sense of embodiment over the arms. This can make it difficult to complete tasks or achieve goals, and may even cause frustration or anxiety. To address this issue, we have developed a system that enables parallel operation of multiple synchronized robotic arms by automatically assisting the arm toward which the user's gaze is not directed, while maintaining the sense of embodiment over the robotic arms.
{"title":"SyncArms: Gaze-Driven Target Object-oriented Manipulation for Parallel Operation of Robot Arms in Distributed Physical Environments","authors":"Koki Kawamura, Shunichi Kasahara, M. Fukuoka, Katsutoshi Masai, Ryota Kondo, M. Sugimoto","doi":"10.1145/3588037.3595401","DOIUrl":"https://doi.org/10.1145/3588037.3595401","url":null,"abstract":"Enhancing human capabilities through the use of multiple bodies has been a significant research agenda. When multiple bodies are synchronously operated in different environments, the differences in environment placement make it difficult to interact with objects simultaneously. In contrast, if automatic control is performed to complement the differences and to perform a parallel task, the mismatch between the user and robotic arm movements generates visuomotor incongruence, leading to a decline in embodiment across the body. This can lead to difficulty completing tasks or achieving goals, and may even cause frustration or anxiety. To address this issue, we have developed a system that allows a parallel operation of synchronized multiple robotic arms by assisting the arm towards which the user’s gaze is not directed while maintaining the sense of embodiment over the robotic arms.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130084452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Demonstration of Morphing Identity: Exploring Self-Other Identity Continuum through Interpersonal Facial Morphing
Kye Shimizu, Santa Naruse, Jun Nishida, Shunichi Kasahara
We explored continuous changes in self-other identity by designing an interpersonal facial morphing experience in which the facial images of two users are blended and then swapped over time. To explore this across diverse social relationships, we conducted qualitative and quantitative investigations through public exhibitions. We found that there is a window of self-identification, as well as a variety of interpersonal experiences, in the facial morphing process. From these insights, we synthesized a Self-Other Continuum represented by the sense of agency and facial identity. This continuum has implications for the social and subjective aspects of interpersonal communication, enabling further scenario design and potentially complementing findings from research on interactive devices for remote communication.
{"title":"A Demonstration of Morphing Identity: Exploring Self-Other Identity Continuum through Interpersonal Facial Morphing","authors":"Kye Shimizu, Santa Naruse, Jun Nishida, Shunichi Kasahara","doi":"10.1145/3588037.3595394","DOIUrl":"https://doi.org/10.1145/3588037.3595394","url":null,"abstract":"We explored continuous changes in self-other identity by designing an interpersonal facial morphing experience where the facial images of two users are blended and then swapped over time. To explore this with diverse social relationships, we conducted qualitative and quantitative investigations through public exhibitions. We found that there is a window of self-identification as well as a variety of interpersonal experiences in the facial morphing process. From these insights, we synthesized a Self-Other Continuum represented by a sense of agency and facial identity. This continuum has implications in terms of the social and subjective aspects of interpersonal communication, which enables further scenario design and could complement findings from research on interactive devices for remote communication.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122729397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI-Mediated 3D Video Conferencing
Michael Stengel, Koki Nagano, Chao Liu, Matthew Chan, Alex Trevithick, Shalini De Mello, Jonghyun Kim, D. Luebke
We present an AI-mediated 3D video conferencing system that can reconstruct and autostereoscopically display a life-sized talking head using consumer-grade compute resources and minimal capture equipment. Our 3D capture uses a novel 3D lifting method that encodes a given 2D input into an efficient triplanar neural representation of the user, which can be rendered from novel viewpoints in real time. Our AI-based techniques drastically reduce the cost of 3D capture, while providing a high-fidelity 3D representation on the receiver's end for the cost of traditional 2D video streaming. Additional advantages of our AI-based approach include the ability to accommodate both photorealistic and stylized avatars, and the ability to enable mutual eye contact in multi-directional video conferencing. We demonstrate our system using a tracked stereo display for a personal viewing experience as well as a lightfield display for a room-scale multi-viewer experience.
{"title":"AI-Mediated 3D Video Conferencing","authors":"Michael Stengel, Koki Nagano, Chao Liu, Matthew Chan, Alex Trevithick, Shalini De Mello, Jonghyun Kim, D. Luebke","doi":"10.1145/3588037.3595385","DOIUrl":"https://doi.org/10.1145/3588037.3595385","url":null,"abstract":"We present an AI-mediated 3D video conferencing system that can reconstruct and autostereoscopically display a life-sized talking head using consumer-grade compute resources and minimal capture equipment. Our 3D capture uses a novel 3D lifting method that encodes a given 2D input into an efficient triplanar neural representation of the user, which can be rendered from novel viewpoints in real-time. Our AI-based techniques drastically reduce the cost for 3D capture, while providing a high-fidelity 3D representation on the receiver’s end at the cost of traditional 2D video streaming. Additional advantages of our AI-based approach include the ability to accommodate both photorealistic and stylized avatars, and the ability to enable mutual eye contact in multi-directional video conferencing. We demonstrate our system using a tracked stereo display for a personal viewing experience as well as a lightfield display for a room-scale multi-viewer experience.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"178 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114092561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Retinal-Resolution Varifocal VR
Yang Zhao, D. Lindberg, Bruce Cleary, O. Mercier, Ryan Mcclelland, Eric Penner, Yu-Jen Lin, Julia Majors, Douglas Lanman
We develop a virtual reality (VR) head-mounted display (HMD) that achieves near-retinal resolution with an angular pixel density of up to 56 pixels per degree (PPD), supports a wide range of eye accommodation from 0 to 4 diopters (i.e., infinity to 25 cm), and matches the dynamics of eye accommodation with at least 10 diopters/s peak velocity and 100 diopters/s² acceleration. The system includes a high-resolution optical design; a mechanically actuated, eye-tracked varifocal display that follows the user's vergence point; and a closed-loop display-distortion rendering pipeline that keeps VR content correct in perspective despite the varying display magnification. To our knowledge, this is the first VR HMD prototype that approaches retinal resolution and fully supports human eye accommodation in both range and dynamics. We present this installation to exhibit the visual benefits of varifocal displays, particularly for high-resolution, near-field interaction tasks such as reading text and working with 3D models in VR.
{"title":"Retinal-Resolution Varifocal VR","authors":"Yang Zhao, D. Lindberg, Bruce Cleary, O. Mercier, Ryan Mcclelland, Eric Penner, Yu-Jen Lin, Julia Majors, Douglas Lanman","doi":"10.1145/3588037.3595389","DOIUrl":"https://doi.org/10.1145/3588037.3595389","url":null,"abstract":"We develop a virtual reality (VR) head-mounted display (HMD) that achieves near retinal resolution with an angular pixel density up to 56 pixels per degree (PPD), supporting a wide range of eye accommodation from 0 to 4 diopter (i.e. infinity to 25 cm), and matching the dynamics of eye accommodation with at least 10 diopter/s peak velocity and 100 diopter/s2 acceleration. This system includes a high-resolution optical design, a mechanically actuated, eye-tracked varifocal display that follows the user’s vergence point, and a closed-loop display distortion rendering pipeline that ensures VR content remains correct in perspective despite the varying display magnification. To our knowledge, this work is the first VR HMD prototype that approaches retinal resolution and fully supports human eye accommodation in range and dynamics. We present this installation to exhibit the visual benefits of varifocal displays, particularly for high-resolution, near-field interaction tasks, such as reading text and working with 3D models in VR.","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125213291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}