
Latest Publications: ACM SIGGRAPH 2023 Emerging Technologies

AI-Mediated 3D Video Conferencing
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3595385
Michael Stengel, Koki Nagano, Chao Liu, Matthew Chan, Alex Trevithick, Shalini De Mello, Jonghyun Kim, D. Luebke
We present an AI-mediated 3D video conferencing system that can reconstruct and autostereoscopically display a life-sized talking head using consumer-grade compute resources and minimal capture equipment. Our 3D capture uses a novel 3D lifting method that encodes a given 2D input into an efficient triplanar neural representation of the user, which can be rendered from novel viewpoints in real-time. Our AI-based techniques drastically reduce the cost for 3D capture, while providing a high-fidelity 3D representation on the receiver’s end at the cost of traditional 2D video streaming. Additional advantages of our AI-based approach include the ability to accommodate both photorealistic and stylized avatars, and the ability to enable mutual eye contact in multi-directional video conferencing. We demonstrate our system using a tracked stereo display for a personal viewing experience as well as a lightfield display for a room-scale multi-viewer experience.
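The 3D lifting step above encodes a 2D frame into three axis-aligned feature planes that are queried per 3D sample point at render time. As a rough illustration of how such a triplane representation is queried (this is not the authors' implementation: plane resolution, channel count, and the bilinear-sampling helper are assumptions, and the decoding MLP and volume rendering are omitted), a minimal NumPy sketch:

```python
# Minimal triplane-query sketch: gather features for a 3D point from three
# orthogonal feature planes. Purely illustrative; sizes and names are assumed.
import numpy as np

def bilinear_sample(plane, u, v):
    """Bilinearly sample a (H, W, C) feature plane at normalized coords in [0, 1]."""
    h, w, _ = plane.shape
    x = np.clip(u * (w - 1), 0, w - 1)
    y = np.clip(v * (h - 1), 0, h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = plane[y0, x0] * (1 - fx) + plane[y0, x1] * fx
    bot = plane[y1, x0] * (1 - fx) + plane[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def query_triplane(planes, point):
    """Gather features for a 3D point in [0, 1]^3 from the XY, XZ, and YZ planes."""
    x, y, z = point
    f_xy = bilinear_sample(planes["xy"], x, y)
    f_xz = bilinear_sample(planes["xz"], x, z)
    f_yz = bilinear_sample(planes["yz"], y, z)
    return f_xy + f_xz + f_yz  # summed features, later decoded by a small MLP

# Toy usage with random 64x64 planes of 8 channels each.
rng = np.random.default_rng(0)
planes = {k: rng.standard_normal((64, 64, 8)) for k in ("xy", "xz", "yz")}
print(query_triplane(planes, (0.3, 0.7, 0.5)).shape)  # (8,)
```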
Citations: 0
Transtiff: Haptic Interaction with a Stick Interface with Various Stiffness
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3595402
Ayumu Ogura, Kodai Ito, Shigeo Yoshida, Kazutoshi Tanaka, Yuichi Itoh
We propose Transtiff, a stick-shaped device that can present various stiffness levels for stick-based haptic interaction. The device has a stiffness-changing joint in the relay portion of the stick, replicating an artificial muscle mechanism. Transtiff can be applied to touch interaction on a screen, augmenting the haptic experience of operating with a stylus pen, which usually feels uniform. For example, users can experience the sensation of both pen and brush writing on a single device. In addition, the stiffness of the device can be changed for each object on the screen to reproduce the tactile feel of that object.
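One way to read the per-object stiffness idea is as a lookup from the touched on-screen material to a normalized stiffness command for the artificial-muscle joint. The sketch below is hypothetical: the material names, stiffness values, and the send_stiffness stub are invented for illustration and are not part of the paper.

```python
# Hypothetical per-object stiffness mapping for a stylus-like device.
MATERIAL_STIFFNESS = {
    "brush": 0.1,   # very compliant joint -> soft, brush-like tip
    "pen": 0.9,     # nearly rigid joint -> firm, pen-like tip
    "sponge": 0.3,
}

def stiffness_for(material, lo=0.0, hi=1.0):
    """Return a normalized stiffness command for the touched on-screen object."""
    return min(max(MATERIAL_STIFFNESS.get(material, hi), lo), hi)

def send_stiffness(value):
    # Placeholder for the actual actuator command (e.g., a pressure setpoint).
    print(f"set joint stiffness to {value:.2f}")

send_stiffness(stiffness_for("brush"))  # soft response while "painting"
send_stiffness(stiffness_for("pen"))    # stiff response while "writing"
```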
Citations: 0
Brain-Machine Interface for neurorehabilitation and human augmentation: Applications of BMI technology and prospects
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3605555
J. Ushiba, M. Hayashi, Seitaro Iwama
Although there is no distinctive header, this is the abstract. This submission template allows authors to submit their papers for review to an ACM Conference or Journal without any output design specifications incorporated at this point in the process. The ACM manuscript template is a single column document that allows authors to type their content into the pre-existing set of paragraph formatting styles applied to the sample placeholder text here. Throughout the document you will find further instructions on how to format your text. If your conference's review process will be double-blind: The submitted document should not include author information and should not include acknowledgments, citations or discussion of related work that would make the authorship apparent. Submissions containing author identifying information may be subject to rejection without review. Upon acceptance, the author and affiliation information must be added to your paper.
Citations: 0
Retinal-Resolution Varifocal VR
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3595389
Yang Zhao, D. Lindberg, Bruce Cleary, O. Mercier, Ryan Mcclelland, Eric Penner, Yu-Jen Lin, Julia Majors, Douglas Lanman
We develop a virtual reality (VR) head-mounted display (HMD) that achieves near retinal resolution with an angular pixel density up to 56 pixels per degree (PPD), supporting a wide range of eye accommodation from 0 to 4 diopter (i.e. infinity to 25 cm), and matching the dynamics of eye accommodation with at least 10 diopter/s peak velocity and 100 diopter/s² acceleration. This system includes a high-resolution optical design, a mechanically actuated, eye-tracked varifocal display that follows the user’s vergence point, and a closed-loop display distortion rendering pipeline that ensures VR content remains correct in perspective despite the varying display magnification. To our knowledge, this work is the first VR HMD prototype that approaches retinal resolution and fully supports human eye accommodation in range and dynamics. We present this installation to exhibit the visual benefits of varifocal displays, particularly for high-resolution, near-field interaction tasks, such as reading text and working with 3D models in VR.
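The quoted accommodation dynamics imply that the varifocal actuator must slew focus between 0 and 4 diopters while staying within roughly 10 diopter/s of velocity and 100 diopter/s² of acceleration. A minimal sketch of such a rate-limited focus controller follows; the time step, the clamping scheme, and the vergence-derived target are assumptions, not the authors' control loop.

```python
# Rate-limited varifocal controller sketch: drive the display focus (diopters)
# toward the vergence-derived demand while respecting velocity/acceleration
# bounds. Illustrative only; parameters are assumed.
def step_focus(current_d, current_vel, target_d, dt=0.001,
               max_vel=10.0, max_acc=100.0):
    """Advance display focus by one time step toward target_d (all in diopters)."""
    # Velocity that would reach the target within this step, clamped to the limit.
    desired_vel = (target_d - current_d) / dt
    desired_vel = max(-max_vel, min(max_vel, desired_vel))
    # Limit how fast the velocity itself may change (acceleration bound).
    dv = max(-max_acc * dt, min(max_acc * dt, desired_vel - current_vel))
    new_vel = current_vel + dv
    new_d = min(4.0, max(0.0, current_d + new_vel * dt))
    return new_d, new_vel

# Example: jump the demand from infinity (0 D) to 25 cm (4 D).
d, v = 0.0, 0.0
for _ in range(600):  # 0.6 s of simulated time at 1 kHz
    d, v = step_focus(d, v, target_d=4.0)
print(round(d, 2))  # approaches 4.0 D
```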
Citations: 1
Demonstrating JumpMod: Haptic Backpack that Modifies Users' Perceived Jump
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3595387
Romain Nith, Jacob Serfaty, Samuel G Shatzkin, Alan Shen, Pedro Lopes
Vertical force-feedback is extremely rare in mainstream interactive experiences. This happens because existing haptic devices capable of sufficiently strong forces that would modify a user's jump require grounding (e.g., motion platforms or pulleys) or cumbersome actuators (e.g., large propellers attached or held by the user). To enable interactive experiences to feature jump-based haptics without sacrificing wearability, we propose JumpMod, an untethered backpack that modifies one's sense of jumping. JumpMod achieves this by moving a weight up/down along the user's back, which modifies perceived jump momentum—creating accelerated & decelerated jump sensations. Our device can render five distinct effects: jump higher, land harder/softer, pulled higher/lower, which we demonstrate at SIGGRAPH 2023 Emerging Technologies in two jump-based VR experiences.
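A simple way to picture the control idea is a jump-phase detector that triggers a weight motion profile chosen per effect. The sketch below is purely illustrative: the acceleration thresholds, the weight direction assigned to each effect, and the move_weight stub are assumptions rather than the published control law.

```python
# Illustrative JumpMod-style scheduler: detect take-off and landing from
# vertical acceleration and command the backpack weight accordingly.
EFFECTS = {
    # effect: (command at take-off, command at landing) -- pairings are placeholders
    "jump_higher": ("down", "hold"),
    "land_harder": ("hold", "down"),
    "land_softer": ("hold", "up"),
    "pull_higher": ("up", "hold"),
    "pull_lower":  ("down", "down"),
}

_airborne = False

def move_weight(direction):
    # Placeholder for the motor command that slides the weight along the back.
    print(f"weight -> {direction}")

def on_vertical_accel(a_z, effect):
    """Tiny jump-phase detector driven by vertical acceleration in m/s^2."""
    global _airborne
    takeoff_cmd, landing_cmd = EFFECTS[effect]
    if not _airborne and a_z > 15.0:     # strong upward push: take-off
        _airborne = True
        move_weight(takeoff_cmd)
    elif _airborne and a_z > 12.0:       # impact spike: landing
        _airborne = False
        move_weight(landing_cmd)

for a_z in (9.8, 18.0, 2.0, 14.0, 9.8):  # toy accelerometer trace
    on_vertical_accel(a_z, "jump_higher")
```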
Citations: 0
Action-Origami Inspired Haptic Devices for Virtual Reality
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3595393
Khrystyna Vasylevska, Tobias Batik, Hugo Brument, Kiumars Sharifmoghaddam, G. Nawratil, Emanuel Vonach, Soroosh Mortezapoor, H. Kaufmann
Origami offers an innovative way to implement haptic interaction with minimum actuation, particularly in immersive encountered-type haptics and robotics. This paper presents two novel action-origami-inspired haptic devices for Virtual Reality (VR). The Zipper Flower Tube is a rigid-foldable origami structure that can provide different stiffness sensations to simulate the elastic response of a material. The Shiftly is a shape-shifting haptic display that employs origami to enable a real-time experience of different shapes and edges of virtual objects or the softness of materials. The modular approach of our action origami haptic devices provides a high-fidelity, energy-efficient and low-cost solution for interacting with virtual materials and objects in VR.
Citations: 1
Realistic Dexterous Manipulation of Virtual Objects with Physics-Based Haptic Rendering
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3595400
Yunxiu Xu, Siyu Wang, S. Hasegawa
This paper introduces a system that focuses on physics-based manipulation and haptic rendering to achieve realistic dexterous manipulation of virtual objects in VR environments. The system uses a coreless motor with wire as the haptic actuator and physics engine in the software to create a virtual hand that provides haptic feedback through multi-channel audio signals. The device simulates contact collision, pressure, and friction, including stick-slip, to provide users with a realistic and immersive experience. Our device is lightweight and does not interfere with real-world operations or the performance of vision-based hand-tracking technology.
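Of the simulated contact effects, stick-slip is the one that most benefits from a worked example: the rendered tangential force follows a coupling spring while it stays below the static friction limit, then saturates at the kinetic level once sliding begins. A minimal Coulomb-friction sketch under assumed parameters (not the paper's renderer):

```python
# Minimal stick-slip friction sketch for haptic rendering. Illustrative only.
def friction_force(tangential_spring_force, normal_force,
                   mu_static=0.9, mu_kinetic=0.6, sliding=False):
    """Return (rendered tangential force, new sliding state)."""
    if not sliding and abs(tangential_spring_force) <= mu_static * normal_force:
        return tangential_spring_force, False   # stick: spring force passed through
    # Slip: saturate at the kinetic friction level, keeping the sign.
    # (Re-sticking when the slip velocity returns to zero is omitted for brevity.)
    sign = 1.0 if tangential_spring_force >= 0 else -1.0
    return sign * mu_kinetic * normal_force, True

# Ramp the lateral spring force at constant normal force and watch it break loose.
sliding = False
for f_t in (0.2, 0.5, 0.8, 1.0, 1.2):
    rendered, sliding = friction_force(f_t, normal_force=1.0, sliding=sliding)
    print(f"spring {f_t:.1f} N -> rendered {rendered:.2f} N, sliding={sliding}")
```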
Citations: 0
Imperceptible Color Modulation for Power Saving in VR/AR
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3595388
Kenneth Chen, Budmonde Duinkharjav, Nisarg Ujjainkar, Ethan Shahan, Abhishek Tyagi, Jiaying He, Yuhao Zhu, Qiuyue Sun
Untethered VR/AR HMDs can only last 2-3 hours on a single charge. Toward resolving this issue, we develop a real-time gaze-contingent power saving filter which modulates peripheral pixel color while preserving visual fidelity. At SIGGRAPH 2023, participants will be able to view a short panoramic video within a VR HMD with our perceptually-aware power saving filter turned on. Participants will also have the opportunity to view the power output of scenes through our power measurement setup.
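On emissive panels, power scales roughly with per-subpixel drive levels, so attenuating peripheral pixels where visual acuity is low can save energy with little perceptual cost. The sketch below illustrates one gaze-contingent attenuation of this kind; the eccentricity-to-attenuation curve, pixels-per-degree value, and function name are assumptions, not the authors' filter.

```python
# Illustrative gaze-contingent peripheral dimming filter. Not the paper's method.
import numpy as np

def power_saving_filter(image, gaze_xy, ppd=20.0, start_deg=10.0, max_drop=0.3):
    """image: (H, W, 3) floats in [0, 1]; gaze_xy: (x, y) in pixels."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ecc_deg = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) / ppd
    # No attenuation in the fovea, ramping up to max_drop far in the periphery.
    atten = np.clip((ecc_deg - start_deg) / 30.0, 0.0, 1.0) * max_drop
    return image * (1.0 - atten)[..., None]

frame = np.random.default_rng(1).random((720, 1280, 3))
filtered = power_saving_filter(frame, gaze_xy=(640, 360))
print(f"relative power proxy: {filtered.sum() / frame.sum():.2f}")
```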
Citations: 0
Neural Holographic Near-eye Displays for Virtual Reality
Pub Date: 2023-07-26 | DOI: 10.1145/3588037.3595395
Suyeon Choi, Manu Gopakumar, Brian Chao, Gunhee Lee, Jonghyun Kim, Gordon Wetzstein
By manipulating light as a wavefront, holographic displays have the potential to revolutionize virtual reality (VR) and augmented reality (AR) systems. These displays support 3D focus cues for visual comfort, vision correcting capabilities, and high light efficiency. However, despite their incredible promise, holographic displays have consistently been hampered by poor image quality. Recently, artificial intelligence–driven computer-generated holography (CGH) algorithms have emerged as a solution to this obstacle. On a prototype holographic display, we demonstrate how the progress of recent state-of-the-art Neural Holography algorithms can produce high-quality dynamic 3D holograms with accurate focus cues. The advances demonstrated in this work aim to provide a glimpse into a future where our displays can fully reproduce three-dimensional virtual content.
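For context on what a CGH solver computes, the classical Gerchberg–Saxton baseline below optimizes a phase-only pattern whose far field (a single FFT propagation) approximates a target amplitude; the neural algorithms demonstrated here replace this with learned, camera-calibrated optimization, so the sketch is only a reference point. The target image, resolution, and iteration count are placeholders.

```python
# Classical Gerchberg-Saxton phase retrieval as a baseline CGH illustration.
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50):
    """Return a phase-only SLM pattern whose far field approximates the target."""
    rng = np.random.default_rng(0)
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        slm_field = np.exp(1j * phase)                       # unit-amplitude SLM
        image_field = np.fft.fftshift(np.fft.fft2(slm_field))
        # Keep the propagated phase, impose the desired amplitude.
        image_field = target_amplitude * np.exp(1j * np.angle(image_field))
        slm_field = np.fft.ifft2(np.fft.ifftshift(image_field))
        phase = np.angle(slm_field)                          # discard amplitude
    return phase

target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0                                 # bright square target
phase = gerchberg_saxton(target)
recon = np.abs(np.fft.fftshift(np.fft.fft2(np.exp(1j * phase))))
print(recon.shape)
```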
Citations: 0
ACM SIGGRAPH 2023 Emerging Technologies
DOI: 10.1145/3588037
{"title":"ACM SIGGRAPH 2023 Emerging Technologies","authors":"","doi":"10.1145/3588037","DOIUrl":"https://doi.org/10.1145/3588037","url":null,"abstract":"","PeriodicalId":348151,"journal":{"name":"ACM SIGGRAPH 2023 Emerging Technologies","volume":"205 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132830971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0