
Latest Publications in ACM SIGGRAPH 2023 Emerging Technologies

Single-Shot VR
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3595396
Yingsi Qin, Wei-yu Chen, Matthew O’Toole, Aswin C. Sankaranarayanan
The physical world has contents at varying depths, allowing our eye to squish or relax to focus at different distances; this is commonly referred to as the accommodation cue for human eyes. To allow a realistic 3D viewing experience, it is crucial to support the accommodation cue—the 3D display needs to show contents at different depths. However, supporting the native focusing of the eye has been an immense challenge to 3D displays. Commercial near-eye VR displays, which use binocular disparity as the primary cue for inducing depth perception, fail this challenge since all contents they show arise from a fixed depth—ignoring the focusing of the eye. Many research prototypes of VR displays do account for the accommodation cue; however, supporting accommodation cues invariably comes with performance loss among other typically assessed criteria for 3D displays. To tackle these challenges, we present a novel kind of near-eye 3D display that can create 3D scenes supporting realistic accommodation cues in a single shot, i.e., without using time multiplexing or eye tracking. This display, which we present in our demo, can stream 3D content over a large depth range, at 4K spatial resolution, and in real-time. Our display offers an exciting step forward towards a truly immersive real-time 3D experience. Participants will get to enjoy 3D movies and play interactive games in their demo experience.
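For intuition, here is a minimal numerical sketch (ours, not the authors' code) of the accommodation cue the abstract describes: accommodation demand is the reciprocal of viewing distance, so a fixed-focus headset leaves a focal error everywhere except at its single display depth. The 1.5 m focal distance below is an assumed typical value, not this paper's.

```python
# Hypothetical sketch (not from the paper): accommodation demand in diopters
# for scene content at different depths, versus a fixed-focus headset.

def accommodation_diopters(distance_m: float) -> float:
    """Accommodation demand is the reciprocal of viewing distance (1/m)."""
    return 1.0 / distance_m

# Content depths in a typical scene, from near to far (meters).
scene_depths_m = [0.25, 0.5, 1.0, 2.0, 10.0]

# A conventional VR headset places all content at one fixed focal depth;
# 1.5 m (~0.67 D) is an assumed typical value that varies by headset.
fixed_focus_d = accommodation_diopters(1.5)

for d in scene_depths_m:
    demand = accommodation_diopters(d)
    error = demand - fixed_focus_d  # mismatch the eye cannot resolve by focusing
    print(f"depth {d:5.2f} m -> demand {demand:4.2f} D, focal error {error:+.2f} D")
```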
Citations: 0
An Interactive Showcase of RCSketch: Sketch, Build, and Control Your Dream Vehicles
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3595398
Han-Eul Kim, Jaeho Sung, Joon Hyub Lee, Seok-Hyung Bae
We present RCSketch, the award-winning interactive system that lets anyone sketch their dream vehicles in 3D, build moving structures for those vehicles, and control them from multiple viewpoints. Visitors to this interactive showcase can use our system to design vehicles of their own and perform a wide variety of realistic movements onboard their vehicles across a vast digital landscape.
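As a rough illustration of what "building moving structures" could involve, the sketch below models a vehicle as a parent-child hierarchy of parts with joint angles; this toy data model is entirely hypothetical and is not drawn from RCSketch itself.

```python
# Hypothetical data model (not RCSketch's actual implementation): a vehicle as a
# hierarchy of parts, where each moving part rotates about a joint relative to
# its parent, e.g. wheels spinning relative to the body.
import math
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    angle: float = 0.0              # current joint angle (radians)
    children: list = field(default_factory=list)

    def spin(self, delta: float):
        self.angle += delta

body = Part("body")
body.children = [Part("front_wheel"), Part("rear_wheel")]

# Driving forward spins both wheels relative to the body.
for wheel in body.children:
    wheel.spin(math.pi / 8)
print([(p.name, round(p.angle, 3)) for p in body.children])
```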
Citations: 0
LivEdge: Haptic Live Stream Interaction on a Smartphone by Electro-Tactile Sensation Through the Edges
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3595386
Taiki Takami, Taiga Saito, Takayuki Kameoka, H. Kajimoto
We present LivEdge, a novel method for live-stream interaction on smartphones that utilizes electro-tactile sensation through the edges. Conventional interaction between users and a streamer on a smartphone is limited to the streamer responding through user comments or effects. Our goal is to provide a more immersive interaction through the use of haptic technology. LivEdge conveys spatial tactile sensations through electrical stimulation from electrode arrays affixed to both edges of the smartphone. This spatial tactile stimulus represents the streamer's physical presence and movements in contact with the edge of the screen. A preliminary experiment showed that LivEdge enhances the live-stream experience.
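To make the edge-stimulation idea concrete, here is a hypothetical sketch of mapping a contact point along one screen edge to per-electrode intensities; the electrode count, triangular falloff, and spread width are assumptions, not values from the paper.

```python
# Hypothetical sketch (not the authors' code): map a contact point along one
# smartphone edge to per-electrode stimulation intensities, spreading the
# sensation over neighboring electrodes so it feels continuous.

N_ELECTRODES = 12          # electrodes per edge (assumed count)
SPREAD = 1.0               # falloff width, in electrode pitches (assumed)

def electrode_intensities(contact_pos: float) -> list[float]:
    """contact_pos in [0, 1] along the edge -> intensity per electrode in [0, 1]."""
    center = contact_pos * (N_ELECTRODES - 1)
    out = []
    for i in range(N_ELECTRODES):
        dist = abs(i - center)
        out.append(max(0.0, 1.0 - dist / SPREAD))  # triangular falloff
    return out

# Streamer touching 30% of the way down the edge:
print([round(v, 2) for v in electrode_intensities(0.3)])
```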
Citations: 0
SomatoShift: A Wearable Haptic Display for Somatomotor Reconfiguration via Modifying Acceleration of Body Movement
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3595390
Takeru Hashimoto, Shigeo Yoshida, Takuji Narumi
This paper proposes a wearable haptic device that utilizes control moment gyroscopes and a motion sensor to achieve somatomotor reconfiguration, altering the user’s somatic perception of their body. The device can manipulate sensations, making body parts feel heavier or lighter, and modify the ease of movement during interactions with objects. Given its potential applications in avatar technology, sports, and assistive technology, this proposed device represents a promising avenue for enriching the user’s bodily experiences.
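The underlying physics can be sketched briefly: a control moment gyroscope produces torque as the cross product of the gimbal rate and the rotor's spin angular momentum, and scaling that torque against sensed limb acceleration makes movement feel heavier or lighter. The momentum value and gain below are illustrative assumptions, not the paper's parameters.

```python
# Hypothetical sketch (not the paper's controller): the gyroscopic principle
# behind a control moment gyroscope. Tilting a spinning rotor at gimbal rate
# omega_g produces torque tau = omega_g x h, where h is the rotor's spin
# angular momentum. Scaling that torque with sensed limb acceleration can make
# a movement feel heavier (resistive) or lighter (assistive).
import numpy as np

h = np.array([0.0, 0.0, 0.05])      # rotor angular momentum (kg·m²/s), assumed

def cmg_torque(gimbal_rate: np.ndarray) -> np.ndarray:
    """Output torque from gimballing the spinning rotor (N·m)."""
    return np.cross(gimbal_rate, h)

def gimbal_command(limb_accel: np.ndarray, gain: float) -> np.ndarray:
    """gain > 0 resists the sensed movement (feels heavier); gain < 0 assists."""
    return gain * limb_accel

accel = np.array([2.0, 0.0, 0.0])   # sensed limb acceleration (m/s²)
print(cmg_torque(gimbal_command(accel, gain=1.5)))
```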
Citations: 1
Material Texture Design: Texture Representation System Utilizing Pseudo-Attraction Force Sensation
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3595397
Masaharu Hirose, M. Inami
We propose Material Texture Design, a material texture representation system. This system presents a pseudo-attraction force sensation in response to the user’s motion, and displays a shear sensation at the fingertips. The user perceives a change in the center of gravity from the shear sensation and feels the artificial material texture. Experimental results showed that the perceived texture could be changed by adjusting the frequency. Through demonstration, users can distinguish different textures such as water, jelly, or a rubber ball, depending on the frequency and latency. We propose this system as a small, lightweight, and simple implementation system for texture representation.
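Pseudo-attraction force displays are commonly driven by asymmetric vibration, so one plausible sketch of the frequency-controlled texture parameter looks like the following; the waveform shape and example frequencies are our assumptions, not taken from the paper.

```python
# Hypothetical sketch (waveform shape assumed, not taken from the paper):
# pseudo-attraction force displays typically drive an actuator with an
# asymmetric waveform -- a brief strong pull one way, a longer weak recovery --
# so the skin perceives a net force in one direction. The abstract reports
# that changing the drive frequency changes the perceived material.

def asymmetric_wave(t: float, freq_hz: float, duty: float = 0.2) -> float:
    """Strong short positive pulse, long weak negative recovery (zero mean)."""
    phase = (t * freq_hz) % 1.0
    if phase < duty:
        return 1.0                      # brief strong pull
    return -duty / (1.0 - duty)         # weak recovery, balancing the impulse

# Sample two "textures", e.g. 40 Hz vs 120 Hz (example values only).
for freq in (40.0, 120.0):
    samples = [round(asymmetric_wave(i / 1000.0, freq), 2) for i in range(0, 25, 5)]
    print(freq, "Hz:", samples)
```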
Citations: 0
Reprojection-Free VR Passthrough
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3595391
Grace Kuo, Eric Penner, Seth Moczydlowski, Alexander Ching, Douglas Lanman, N. Matsuda
Virtual reality (VR) passthrough uses external cameras on the front of a headset to allow the user to see their environment. However, passthrough cameras cannot physically be co-located with the user’s eyes, so the passthrough images have a different perspective than what the user would see without the headset. Although the images can be computationally reprojected into the desired view, errors in depth estimation and missing information at occlusion boundaries can lead to undesirable artifacts. We propose a novel computational camera that directly samples the rays that would have gone into the user’s eye, several centimeters behind the sensor. Our design contains an array of lenses with an aperture behind each lens, and the apertures are strategically placed to allow through only the desired rays. The resulting thin, flat architecture has suitable form factor for VR, and the image reconstruction is computationally lightweight, enabling low-latency passthrough. We demonstrate our approach experimentally in a fully functional binocular passthrough prototype with practical calibration and real-time image reconstruction.
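The aperture-placement idea can be sketched with similar triangles: each aperture is shifted toward the optical axis so that the chief ray through its lenslet's center extends back to a virtual eye point behind the array. All dimensions below are assumed for illustration, not the prototype's values.

```python
# Hypothetical geometry sketch (dimensions assumed, not the paper's values):
# place each aperture so that the chief ray passing through its lenslet's
# center continues to a virtual eye point a distance EYE_Z behind the array.
# By similar triangles, a lenslet at lateral position x needs its aperture
# shifted toward the optical axis by x * APERTURE_Z / EYE_Z.

EYE_Z = 40.0        # mm from lens plane to desired eye position (assumed)
APERTURE_Z = 2.0    # mm from lens plane to aperture plane (assumed)
PITCH = 1.0         # mm lenslet pitch (assumed)

def aperture_center(lenslet_index: int) -> float:
    """Lateral aperture position (mm) for the lenslet at the given index."""
    x = lenslet_index * PITCH                 # lenslet center; axis at index 0
    return x * (1.0 - APERTURE_Z / EYE_Z)     # shifted toward the axis

for i in range(-2, 3):
    print(f"lenslet {i:+d}: lens at {i*PITCH:+.2f} mm, aperture at {aperture_center(i):+.3f} mm")
```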
Citations: 0
SyncArms: Gaze-Driven Target Object-oriented Manipulation for Parallel Operation of Robot Arms in Distributed Physical Environments
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3595401
Koki Kawamura, Shunichi Kasahara, M. Fukuoka, Katsutoshi Masai, Ryota Kondo, M. Sugimoto
Enhancing human capabilities through the use of multiple bodies has been a significant research agenda. When multiple bodies are operated synchronously in different environments, differences in environment placement make it difficult to interact with objects simultaneously. Conversely, if automatic control is applied to compensate for these differences and perform a parallel task, the mismatch between the user's and the robotic arm's movements generates visuomotor incongruence, leading to a decline in the sense of embodiment. This can make it difficult to complete tasks or achieve goals, and may even cause frustration or anxiety. To address this issue, we have developed a system that enables parallel operation of multiple synchronized robotic arms by assisting the arm toward which the user's gaze is not directed, while maintaining the sense of embodiment over the robotic arms.
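A minimal control-loop sketch of this gaze-driven assistance might look as follows; the one-dimensional state and proportional gain are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical control-loop sketch (not the authors' implementation): the user
# directly drives the robot arm they are looking at, while the system
# auto-completes the parallel task with the unattended arm.
from dataclasses import dataclass

@dataclass
class Arm:
    name: str
    position: float       # 1-D stand-in for the end-effector pose
    target: float         # object position in this arm's environment

def step(arms: list[Arm], gazed: str, user_motion: float, auto_gain: float = 0.2):
    for arm in arms:
        if arm.name == gazed:
            arm.position += user_motion              # synchronized with the user
        else:
            error = arm.target - arm.position
            arm.position += auto_gain * error        # assisted toward its target

arms = [Arm("left", 0.0, 1.0), Arm("right", 0.0, 2.0)]
for _ in range(3):
    step(arms, gazed="left", user_motion=0.3)
print([(a.name, round(a.position, 2)) for a in arms])
```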
Citations: 0
A Demonstration of Morphing Identity: Exploring Self-Other Identity Continuum through Interpersonal Facial Morphing
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3595394
Kye Shimizu, Santa Naruse, Jun Nishida, Shunichi Kasahara
We explored continuous changes in self-other identity by designing an interpersonal facial morphing experience where the facial images of two users are blended and then swapped over time. To explore this with diverse social relationships, we conducted qualitative and quantitative investigations through public exhibitions. We found that there is a window of self-identification as well as a variety of interpersonal experiences in the facial morphing process. From these insights, we synthesized a Self-Other Continuum represented by a sense of agency and facial identity. This continuum has implications in terms of the social and subjective aspects of interpersonal communication, which enables further scenario design and could complement findings from research on interactive devices for remote communication.
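The blend-then-swap schedule can be sketched as a morph weight that rises over time, linearly here for simplicity; the schedule shape and duration are assumptions rather than details from the installation.

```python
# Hypothetical sketch (not the installation's pipeline): the blend-then-swap
# schedule the abstract describes, as a morph weight over time. Each user's
# displayed face is a weighted blend of the two input faces; the weight moves
# from "all self" through an even mix to "all other".

def morph_weight(t: float, duration: float) -> float:
    """Fraction of the *other* person's face shown at time t (linear schedule)."""
    return min(max(t / duration, 0.0), 1.0)

def blend(face_self, face_other, alpha: float):
    """Per-pixel linear blend; faces as equal-length grayscale lists here."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(face_self, face_other)]

face_a, face_b = [0.0, 0.2, 0.4], [1.0, 0.8, 0.6]   # toy 3-pixel "faces"
for t in (0.0, 30.0, 60.0):                          # seconds into the experience
    alpha = morph_weight(t, duration=60.0)
    print(f"t={t:4.0f}s alpha={alpha:.2f} ->", [round(v, 2) for v in blend(face_a, face_b, alpha)])
```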
Citations: 0
AI-Mediated 3D Video Conferencing
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3595385
Michael Stengel, Koki Nagano, Chao Liu, Matthew Chan, Alex Trevithick, Shalini De Mello, Jonghyun Kim, D. Luebke
We present an AI-mediated 3D video conferencing system that can reconstruct and autostereoscopically display a life-sized talking head using consumer-grade compute resources and minimal capture equipment. Our 3D capture uses a novel 3D lifting method that encodes a given 2D input into an efficient triplanar neural representation of the user, which can be rendered from novel viewpoints in real-time. Our AI-based techniques drastically reduce the cost for 3D capture, while providing a high-fidelity 3D representation on the receiver’s end at the cost of traditional 2D video streaming. Additional advantages of our AI-based approach include the ability to accommodate both photorealistic and stylized avatars, and the ability to enable mutual eye contact in multi-directional video conferencing. We demonstrate our system using a tracked stereo display for a personal viewing experience as well as a lightfield display for a room-scale multi-viewer experience.
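The triplanar representation mentioned above can be sketched as follows: a 3D point is projected onto three axis-aligned feature planes, and the bilinearly sampled features are summed before being decoded (by a small MLP, omitted here) into color and density. Plane resolution and channel count below are assumed values, not the system's.

```python
# Hypothetical sketch (not the authors' model): the triplanar lookup at the
# core of a triplane neural representation.
import numpy as np

R, C = 64, 8                              # plane resolution and channels (assumed)
rng = np.random.default_rng(0)
planes = {ax: rng.standard_normal((R, R, C)) for ax in ("xy", "xz", "yz")}

def sample_plane(plane: np.ndarray, u: float, v: float) -> np.ndarray:
    """Bilinear sample of a feature plane at normalized coords u, v in [0, 1]."""
    x, y = u * (R - 1), v * (R - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, R - 1), min(y0 + 1, R - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * plane[y0, x0] + fx * plane[y0, x1]
    bot = (1 - fx) * plane[y1, x0] + fx * plane[y1, x1]
    return (1 - fy) * top + fy * bot

def triplane_feature(p: np.ndarray) -> np.ndarray:
    """Sum of the three plane projections for point p in [0, 1]^3."""
    x, y, z = p
    return (sample_plane(planes["xy"], x, y)
            + sample_plane(planes["xz"], x, z)
            + sample_plane(planes["yz"], y, z))

print(triplane_feature(np.array([0.5, 0.25, 0.75])).shape)   # -> (8,)
```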
Citations: 0
Retinal-Resolution Varifocal VR
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3595389
Yang Zhao, D. Lindberg, Bruce Cleary, O. Mercier, Ryan Mcclelland, Eric Penner, Yu-Jen Lin, Julia Majors, Douglas Lanman
We develop a virtual reality (VR) head-mounted display (HMD) that achieves near retinal resolution with an angular pixel density up to 56 pixels per degree (PPD), supporting a wide range of eye accommodation from 0 to 4 diopter (i.e. infinity to 25 cm), and matching the dynamics of eye accommodation with at least 10 diopter/s peak velocity and 100 diopter/s2 acceleration. This system includes a high-resolution optical design, a mechanically actuated, eye-tracked varifocal display that follows the user’s vergence point, and a closed-loop display distortion rendering pipeline that ensures VR content remains correct in perspective despite the varying display magnification. To our knowledge, this work is the first VR HMD prototype that approaches retinal resolution and fully supports human eye accommodation in range and dynamics. We present this installation to exhibit the visual benefits of varifocal displays, particularly for high-resolution, near-field interaction tasks, such as reading text and working with 3D models in VR.
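The eye-matched dynamics quoted above (10 diopter/s, 100 diopter/s²) suggest a rate- and acceleration-limited focus controller; the sketch below uses a trapezoidal velocity profile, which is our assumption about the control law, with only the limits taken from the abstract.

```python
# Hypothetical sketch: rate- and acceleration-limited varifocal actuation.
# Only the limits (10 D/s, 100 D/s^2) come from the abstract; the trapezoidal
# velocity profile below is an assumed control law, not the authors'.
import math

MAX_VEL = 10.0    # diopters/s peak velocity (from the abstract)
MAX_ACC = 100.0   # diopters/s^2 acceleration (from the abstract)

def step_focus(pos: float, vel: float, target: float, dt: float):
    """Advance the focus state one tick toward the target focal power."""
    err = target - pos
    # Cap commanded speed so the actuator can still brake before the target.
    desired = math.copysign(min(MAX_VEL, math.sqrt(2 * MAX_ACC * abs(err))), err)
    dv = max(-MAX_ACC * dt, min(MAX_ACC * dt, desired - vel))
    return pos + (vel + dv) * dt, vel + dv

pos, vel = 0.0, 0.0     # start focused at optical infinity (0 D)
target = 4.0            # user's vergence point at 25 cm (4 D)
for _ in range(60):     # 60 ticks of 10 ms = 0.6 s
    pos, vel = step_focus(pos, vel, target, dt=0.01)
print(f"focus after 0.6 s: {pos:.2f} D")
```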
Citations: 1