Single-Shot VR
Yingsi Qin, Wei-yu Chen, Matthew O’Toole, Aswin C. Sankaranarayanan
ACM SIGGRAPH 2023 Emerging Technologies, 26 July 2023. DOI: https://doi.org/10.1145/3588037.3595396

The physical world contains content at varying depths, and the lens of the eye tenses or relaxes to focus at different distances; this is commonly referred to as the accommodation cue. To deliver a realistic 3D viewing experience, it is crucial to support the accommodation cue: the 3D display needs to show content at different depths. However, supporting the eye’s native focusing has been an immense challenge for 3D displays. Commercial near-eye VR displays, which use binocular disparity as the primary cue for inducing depth perception, fail this challenge because all the content they show arises from a fixed depth, ignoring the focusing of the eye. Many research prototypes of VR displays do account for the accommodation cue; however, supporting it invariably comes at a cost on the other criteria by which 3D displays are typically assessed. To tackle these challenges, we present a novel kind of near-eye 3D display that creates 3D scenes with realistic accommodation cues in a single shot, i.e., without time multiplexing or eye tracking. This display, which we present in our demo, can stream 3D content over a large depth range, at 4K spatial resolution, and in real time. Our display offers an exciting step toward a truly immersive real-time 3D experience. Participants will get to enjoy 3D movies and play interactive games in their demo experience.
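Accommodation and a display’s depth range are conventionally measured in diopters (inverse meters), the unit in which the eye’s focusing operates. A minimal sketch of the conversion (the specific depth endpoints below are illustrative assumptions, not figures from the paper):

```python
def to_diopters(distance_m: float) -> float:
    """Accommodation demand of a point at the given viewing distance."""
    return 1.0 / distance_m

# A hypothetical display whose focal planes span 25 cm to 4 m covers
# 1/0.25 - 1/4.0 = 3.75 D, most of the range a relaxed adult eye uses.
depth_range_d = to_diopters(0.25) - to_diopters(4.0)
assert abs(depth_range_d - 3.75) < 1e-9
```

The nonlinearity of this mapping is why focal planes are usually spaced evenly in diopters rather than in meters: equal steps in diopters correspond to equal steps of accommodative effort.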
SomatoShift: A Wearable Haptic Display for Somatomotor Reconfiguration via Modifying Acceleration of Body Movement
Takeru Hashimoto, Shigeo Yoshida, Takuji Narumi
DOI: https://doi.org/10.1145/3588037.3595390

This paper proposes a wearable haptic device that uses control moment gyroscopes and a motion sensor to achieve somatomotor reconfiguration, altering the user’s somatic perception of their own body. The device can manipulate sensations, making body parts feel heavier or lighter, and can modify the ease of movement during interactions with objects. Given its potential applications in avatar technology, sports, and assistive technology, the device represents a promising avenue for enriching the user’s bodily experience.
An Interactive Showcase of RCSketch: Sketch, Build, and Control Your Dream Vehicles
Han-Eul Kim, Jaeho Sung, Joon Hyub Lee, Seok-Hyung Bae
DOI: https://doi.org/10.1145/3588037.3595398

We present RCSketch, an award-winning interactive system that lets anyone sketch their dream vehicles in 3D, build moving structures for those vehicles, and control them from multiple viewpoints. Visitors to this interactive showcase can use our system to design vehicles of their own and perform a wide variety of realistic movements across a vast digital landscape onboard their vehicles.
LivEdge: Haptic Live Stream Interaction on a Smartphone by Electro-Tactile Sensation Through the Edges
Taiki Takami, Taiga Saito, Takayuki Kameoka, H. Kajimoto
DOI: https://doi.org/10.1145/3588037.3595386

We present LivEdge, a novel method for live-stream interaction on smartphones that uses electro-tactile sensation delivered through the edges of the device. Conventional interaction between users and a streamer on a smartphone is restricted to the streamer responding to user comments or effects. Our goal is to provide a more immersive interaction through haptic technology. LivEdge conveys spatial tactile sensations via electrical stimulation from electrode arrays affixed to both edges of the smartphone; this spatial tactile stimulus represents the streamer’s physical presence and movements in contact with the edge of the screen. A preliminary experiment showed that LivEdge enhances the live-stream experience.
Material Texture Design: Texture Representation System Utilizing Pseudo-Attraction Force Sensation
Masaharu Hirose, M. Inami
DOI: https://doi.org/10.1145/3588037.3595397

We propose Material Texture Design, a material-texture representation system. The system presents a pseudo-attraction force sensation in response to the user’s motion and displays a shear sensation at the fingertips. The user perceives a change in the center of gravity from the shear sensation and feels the artificial material texture. Experimental results showed that the perceived texture can be changed by adjusting the frequency. In the demonstration, users can distinguish different textures, such as water, jelly, or a rubber ball, depending on the frequency and latency. We propose this system as a small, lightweight, and simple implementation for texture representation.
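The abstract does not detail the drive waveform. Pseudo-attraction force displays in the haptics literature typically drive a vibration actuator with an asymmetric profile: a brief, strong phase in one direction and a long, weak return, so that only the strong phase exceeds the perceptual threshold and is felt as a directional pull. A minimal sketch of such a waveform generator (the duty split and sample rate are illustrative assumptions, not parameters from the paper):

```python
def asymmetric_wave(freq_hz: float, sample_rate: int = 8000, duty: float = 0.25):
    """One period of an asymmetric drive signal: a short strong push
    followed by a long weak return whose areas cancel (zero mean),
    so the actuator returns to rest each cycle."""
    n = int(sample_rate / freq_hz)
    n_push = max(1, int(n * duty))   # short, strong phase
    n_return = n - n_push            # long, weak phase
    push = [1.0] * n_push
    # Return amplitude chosen so the mean over a period is zero:
    ret = [-n_push / n_return] * n_return
    return push + ret

wave = asymmetric_wave(40.0)         # 40 Hz: one candidate texture frequency
assert abs(sum(wave)) < 1e-9         # zero net drive per cycle
```

Sweeping `freq_hz` (and delaying the waveform relative to the user’s motion) is one plausible way to realize the frequency and latency adjustments the abstract describes.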
Reprojection-Free VR Passthrough
Grace Kuo, Eric Penner, Seth Moczydlowski, Alexander Ching, Douglas Lanman, N. Matsuda
DOI: https://doi.org/10.1145/3588037.3595391

Virtual reality (VR) passthrough uses external cameras on the front of a headset to allow the user to see their environment. However, passthrough cameras cannot physically be co-located with the user’s eyes, so the passthrough images have a different perspective than what the user would see without the headset. Although the images can be computationally reprojected into the desired view, errors in depth estimation and missing information at occlusion boundaries can lead to undesirable artifacts. We propose a novel computational camera that directly samples the rays that would have gone into the user’s eye, several centimeters behind the sensor. Our design contains an array of lenses with an aperture behind each lens, and the apertures are strategically placed to pass only the desired rays. The resulting thin, flat architecture has a form factor suitable for VR, and the image reconstruction is computationally lightweight, enabling low-latency passthrough. We demonstrate our approach experimentally in a fully functional binocular passthrough prototype with practical calibration and real-time image reconstruction.
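The key idea is selecting, at the sensor, exactly those rays that converge toward a virtual eye point behind it. As a back-of-the-envelope sketch, the lateral position of an aperture behind each lenslet follows from similar triangles along the chief ray (which passes through a thin lens center undeviated). All distances here are illustrative assumptions, not the prototype’s dimensions:

```python
def aperture_offset(lens_x_mm: float, aperture_depth_mm: float,
                    sensor_depth_mm: float, eye_behind_sensor_mm: float) -> float:
    """Lateral offset of the aperture center behind a lenslet at lens_x_mm,
    placed on the line from the lenslet center to the virtual eye point.
    Depths are measured from the lens plane toward the sensor; the eye
    sits eye_behind_sensor_mm behind the sensor plane."""
    eye_depth = sensor_depth_mm + eye_behind_sensor_mm
    # The ray from the eye point through the lenslet center crosses depth z
    # at lateral fraction (1 - z / eye_depth) of the lenslet's offset.
    return lens_x_mm * (1.0 - aperture_depth_mm / eye_depth)

# Hypothetical numbers: lenslet 10 mm off-axis, aperture 2 mm behind the
# lens, sensor 5 mm behind the lens, virtual eye 40 mm behind the sensor:
offset = aperture_offset(10.0, 2.0, 5.0, 40.0)  # about 9.56 mm
```

Note the offset shrinks toward the optical axis as the aperture plane moves deeper, which is why the aperture array is a slightly scaled-down copy of the lens array in this simplified chief-ray model.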
SyncArms: Gaze-Driven Target Object-oriented Manipulation for Parallel Operation of Robot Arms in Distributed Physical Environments
Koki Kawamura, Shunichi Kasahara, M. Fukuoka, Katsutoshi Masai, Ryota Kondo, M. Sugimoto
DOI: https://doi.org/10.1145/3588037.3595401

Enhancing human capabilities through the use of multiple bodies has been a significant research agenda. When multiple bodies are operated synchronously in different environments, differences in object placement across those environments make it difficult to interact with objects simultaneously. Conversely, if automatic control compensates for those differences to perform a task in parallel, the mismatch between the user’s movements and the robotic arm’s movements generates visuomotor incongruence, weakening the sense of embodiment over the arms. This can make it difficult to complete tasks or achieve goals, and may even cause frustration or anxiety. To address this issue, we have developed a system that enables parallel operation of multiple synchronized robotic arms by assisting only the arm toward which the user’s gaze is not directed, thereby maintaining the sense of embodiment over the robotic arms.
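The abstract does not specify how gaze is resolved to a particular arm. One plausible minimal policy (an assumption for illustration, not the paper’s method) is to treat the arm whose workspace direction makes the smallest angle with the gaze ray as the attended one, leaving the others eligible for automatic assistance:

```python
import math

def attended_arm(gaze_dir, arm_dirs):
    """Index of the arm the gaze is directed at: the arm whose direction
    (eye to that arm's workspace) makes the smallest angle with the gaze
    ray. Remaining arms are candidates for automatic assistance."""
    def angle(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(a * a for a in v))
        return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))
    return min(range(len(arm_dirs)), key=lambda i: angle(gaze_dir, arm_dirs[i]))

# Gaze mostly along +x; arm 0's workspace lies along +x, arm 1's along +y:
assert attended_arm((1.0, 0.1, 0.0), [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]) == 0
```

In practice such a policy would need hysteresis or dwell-time filtering so that brief saccades do not flip which arm is under direct user control.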
A Demonstration of Morphing Identity: Exploring Self-Other Identity Continuum through Interpersonal Facial Morphing
Kye Shimizu, Santa Naruse, Jun Nishida, Shunichi Kasahara
DOI: https://doi.org/10.1145/3588037.3595394

We explored continuous changes in self-other identity by designing an interpersonal facial morphing experience in which the facial images of two users are blended and then swapped over time. To explore this across diverse social relationships, we conducted qualitative and quantitative investigations through public exhibitions. We found that there is a window of self-identification, as well as a variety of interpersonal experiences, in the facial morphing process. From these insights, we synthesized a Self-Other Continuum characterized by a sense of agency and facial identity. This continuum has implications for the social and subjective aspects of interpersonal communication, enabling further scenario design, and could complement findings from research on interactive devices for remote communication.
Augmented Haptic VR Experience Combining Two Weight-Shifting Versatile Controllers
Yuhu Liu, Yuri Ishikawa, Yohei Fukuma, Yusuke Nakagawa
DOI: https://doi.org/10.1145/3588037.3595399

We designed a VR controller that integrates experimental haptic technology into a practical controller. The device consists of two independent controllers, each with a weight-shifting module that can provide vibration, impact, and shape perception, yet is compact enough to be handled like a conventional commodity controller. Combining the two controllers allows the device to be held in different ways for various applications.
Inclusive Quiet Room - for building an inclusive society -
Shoko Kimura, K. Ito, Ayaka Fujii, Rihito Tsuboi, Kazuki Okawa, Hibiki Kojima, K. Kitagawa, Yoshinori Natsume
DOI: https://doi.org/10.1145/3588037.3603420

A large percentage of people with autism or other developmental disorders have sensory hypersensitivity. The spread of “quiet rooms,” in which they can feel at ease in social life, is therefore a necessary element in realizing a symbiotic society. However, the high cost of installing quiet rooms, which require highly soundproofed spaces isolated from the outside, is an obstacle to their widespread use. The Inclusive Quiet Room is a new concept for portable quiet rooms that combines an easy-to-construct instant house, immersive video, and relaxing sounds. Beyond letting many people experience the benefits of the room, the work proposes a vision of future quiet rooms that can be constructed easily anywhere. In this paper, we analyze the effectiveness of the Inclusive Quiet Room, exhibited in France, based on survey data from 372 respondents. The analysis substantiates both the room’s relaxation effects and the demand for quiet rooms. The room gives the feeling of being warmly embraced and secure. If everyone, including those without mental disorders, could experience this feeling, they would better understand the need for, and benefits of, relaxing environments for people with sensory hypersensitivity.