
Latest publications from ACM SIGGRAPH 2023 Emerging Technologies

Single-Shot VR
Pub Date: 2023-07-26 DOI: 10.1145/3588037.3595396
Yingsi Qin, Wei-yu Chen, Matthew O’Toole, Aswin C. Sankaranarayanan
The physical world has contents at varying depths, allowing our eye to squish or relax to focus at different distances; this is commonly referred to as the accommodation cue for human eyes. To allow a realistic 3D viewing experience, it is crucial to support the accommodation cue—the 3D display needs to show contents at different depths. However, supporting the native focusing of the eye has been an immense challenge to 3D displays. Commercial near-eye VR displays, which use binocular disparity as the primary cue for inducing depth perception, fail this challenge since all contents they show arise from a fixed depth—ignoring the focusing of the eye. Many research prototypes of VR displays do account for the accommodation cue; however, supporting accommodation cues invariably comes with performance loss among other typically assessed criteria for 3D displays. To tackle these challenges, we present a novel kind of near-eye 3D display that can create 3D scenes supporting realistic accommodation cues in a single shot, i.e., without using time multiplexing or eye tracking. This display, which we present in our demo, can stream 3D content over a large depth range, at 4K spatial resolution, and in real-time. Our display offers an exciting step forward towards a truly immersive real-time 3D experience. Participants will get to enjoy 3D movies and play interactive games in their demo experience.
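The accommodation cue above can be made concrete with the thin-lens relation: to focus an object at distance d onto the retina, the eye's optical power must change with d. The sketch below uses illustrative distances that are our assumptions, not values from the paper:

```python
# Thin-lens sketch of the accommodation cue: to focus an object at
# distance d onto a retina a fixed distance behind the lens, the eye's
# total optical power must be 1/d + 1/retina_distance (in diopters).
# All numbers are illustrative assumptions, not values from the paper.

RETINA_DISTANCE_M = 0.017  # rough axial length of the human eye

def eye_power_diopters(object_distance_m: float) -> float:
    """Optical power (1/f, diopters) needed to focus at object_distance_m."""
    return 1.0 / object_distance_m + 1.0 / RETINA_DISTANCE_M

# A fixed-focus headset renders everything at one optical depth, so this
# power never has to change; a display supporting accommodation must
# cover the whole range, e.g. 0.25 m to 4 m:
accommodation_range = eye_power_diopters(0.25) - eye_power_diopters(4.0)
print(f"{accommodation_range:.2f} D of accommodation")  # 3.75 D
```

The retina term cancels in the difference, so the required range depends only on the nearest and farthest content depths.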
Citations: 0
SomatoShift: A Wearable Haptic Display for Somatomotor Reconfiguration via Modifying Acceleration of Body Movement
Pub Date: 2023-07-26 DOI: 10.1145/3588037.3595390
Takeru Hashimoto, Shigeo Yoshida, Takuji Narumi
This paper proposes a wearable haptic device that utilizes control moment gyroscopes and a motion sensor to achieve somatomotor reconfiguration, altering the user’s somatic perception of their body. The device can manipulate sensations, making body parts feel heavier or lighter, and modify the ease of movement during interactions with objects. Given its potential applications in avatar technology, sports, and assistive technology, this proposed device represents a promising avenue for enriching the user’s bodily experiences.
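As a rough sketch of the actuation principle (not the authors' implementation), the torque a control moment gyroscope delivers is the rotor's angular momentum times the gimbal rate; the numbers below are assumptions for scale only:

```python
# Control-moment-gyroscope basics: gimbaling a spinning rotor at rate
# omega_gimbal produces a reaction torque of magnitude
#   tau = h * omega_gimbal, with h = I_rotor * omega_rotor.
# Rotor parameters below are illustrative guesses, not device specs.

def cmg_torque(rotor_inertia: float, rotor_speed: float, gimbal_rate: float) -> float:
    """Output torque (N*m) for a rotor of inertia `rotor_inertia` (kg*m^2)
    spinning at `rotor_speed` (rad/s), gimbaled at `gimbal_rate` (rad/s)."""
    return rotor_inertia * rotor_speed * gimbal_rate

# A small wearable-scale rotor (5e-5 kg*m^2 at 1000 rad/s) gimbaled at
# 2 rad/s yields roughly 0.1 N*m -- enough to bias the felt weight of a limb.
torque = cmg_torque(5e-5, 1000.0, 2.0)
```

Because the torque scales with the gimbal rate, driving the gimbal in sync with measured body motion is what lets the device make a movement feel heavier or lighter.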
Citations: 1
An Interactive Showcase of RCSketch: Sketch, Build, and Control Your Dream Vehicles
Pub Date: 2023-07-26 DOI: 10.1145/3588037.3595398
Han-Eul Kim, Jaeho Sung, Joon Hyub Lee, Seok-Hyung Bae
We present RCSketch, the award-winning interactive system that lets anyone sketch their dream vehicles in 3D, build the moving structures of those vehicles, and control them from multiple viewpoints. Visitors to this interactive showcase can use our system to design vehicles of their own and perform a wide variety of realistic movements across the vast digital landscape onboard their vehicles.
Citations: 0
LivEdge: Haptic Live Stream Interaction on a Smartphone by Electro-Tactile Sensation Through the Edges
Pub Date: 2023-07-26 DOI: 10.1145/3588037.3595386
Taiki Takami, Taiga Saito, Takayuki Kameoka, H. Kajimoto
We present LivEdge, a novel method for live-stream interaction on smartphones that uses electro-tactile sensation through the edges of the device. Conventional interaction between viewers and a streamer on a smartphone is limited to the streamer responding to user comments or effects. Our goal is to provide a more immersive interaction through haptic technology. LivEdge conveys spatial tactile sensations through electrical stimulation from electrode arrays affixed to both edges of the smartphone; this spatial tactile stimulus represents the streamer’s physical presence and movement in contact with the edge of the screen. A preliminary experiment showed that LivEdge enhances the live-stream experience.
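To illustrate the kind of spatial mapping such an edge display needs, here is a hypothetical sketch; the electrode count and normalized coordinate are our assumptions, not LivEdge's actual parameters:

```python
# Hypothetical mapping from a contact position along the screen edge
# (normalized to 0..1) to the nearest electrode in an edge-mounted
# array. The 8-electrode array size is an assumption for illustration.

N_ELECTRODES = 8

def electrode_for_position(pos: float) -> int:
    """Return the index of the electrode nearest normalized position `pos`."""
    pos = min(max(pos, 0.0), 1.0)          # clamp to the physical edge
    return min(int(pos * N_ELECTRODES), N_ELECTRODES - 1)

# A streamer touching the middle of the edge activates electrode 4 of 0..7.
```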
Citations: 0
Material Texture Design: Texture Representation System Utilizing Pseudo-Attraction Force Sensation
Pub Date: 2023-07-26 DOI: 10.1145/3588037.3595397
Masaharu Hirose, M. Inami
We propose Material Texture Design, a material texture representation system. The system presents a pseudo-attraction force sensation in response to the user’s motion and displays a shear sensation at the fingertips. The user perceives a change in the center of gravity from the shear sensation and feels an artificial material texture. Experimental results showed that the perceived texture can be changed by adjusting the frequency. In the demonstration, users can distinguish textures such as water, jelly, or a rubber ball, depending on the frequency and latency. We propose this system as a small, lightweight, and simple implementation for texture representation.
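Pseudo-attraction force sensations are typically produced with asymmetric vibration: a brief strong push one way followed by a longer, weaker return, which averages to zero yet is perceived as a net pull. A minimal waveform sketch, where the duty cycle and frequency are illustrative assumptions rather than the paper's values:

```python
# Zero-mean asymmetric waveform: +1 for the first `duty` fraction of
# each period, then a weaker opposing level for the remainder, so the
# average force is zero but the sharp phase dominates perception.
# Parameters are illustrative assumptions, not the paper's values.

def asymmetric_wave(t: float, freq_hz: float, duty: float = 0.2) -> float:
    """Normalized drive signal at time t (seconds) for a `freq_hz` waveform."""
    phase = (t * freq_hz) % 1.0
    return 1.0 if phase < duty else -duty / (1.0 - duty)
```

Varying the drive frequency (and the latency of the response to user motion) is what would let different settings read as different material textures.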
Citations: 0
Reprojection-Free VR Passthrough
Pub Date: 2023-07-26 DOI: 10.1145/3588037.3595391
Grace Kuo, Eric Penner, Seth Moczydlowski, Alexander Ching, Douglas Lanman, N. Matsuda
Virtual reality (VR) passthrough uses external cameras on the front of a headset to allow the user to see their environment. However, passthrough cameras cannot physically be co-located with the user’s eyes, so the passthrough images have a different perspective than what the user would see without the headset. Although the images can be computationally reprojected into the desired view, errors in depth estimation and missing information at occlusion boundaries can lead to undesirable artifacts. We propose a novel computational camera that directly samples the rays that would have gone into the user’s eye, several centimeters behind the sensor. Our design contains an array of lenses with an aperture behind each lens, and the apertures are strategically placed to allow through only the desired rays. The resulting thin, flat architecture has suitable form factor for VR, and the image reconstruction is computationally lightweight, enabling low-latency passthrough. We demonstrate our approach experimentally in a fully functional binocular passthrough prototype with practical calibration and real-time image reconstruction.
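The aperture placement can be understood with similar triangles: behind each lenslet, the aperture is centered where a ray from that lenslet toward the virtual eye position would cross the aperture plane. A hedged geometric sketch, with all distances illustrative rather than the prototype's:

```python
# Similar-triangles sketch of per-lenslet aperture placement: a ray
# leaving a lenslet at lateral height `lens_y`, aimed at an on-axis eye
# point `lens_to_eye` behind the lens plane, crosses the aperture plane
# (`lens_to_aperture` behind the lenses) at an offset that shrinks
# linearly toward the optical axis. Distances (mm) are assumptions.

def aperture_offset(lens_y: float, lens_to_eye: float, lens_to_aperture: float) -> float:
    """Lateral center of the aperture behind a lenslet at height `lens_y`."""
    return lens_y * (1.0 - lens_to_aperture / lens_to_eye)

# Lenslet 10 mm off-axis, virtual eye 50 mm behind the lenses, apertures
# 5 mm behind the lenses -> aperture centered 9 mm off-axis.
```

Because each aperture passes only the rays converging on that virtual eye point, the captured image needs no depth-dependent reprojection.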
Citations: 0
SyncArms: Gaze-Driven Target Object-oriented Manipulation for Parallel Operation of Robot Arms in Distributed Physical Environments
Pub Date: 2023-07-26 DOI: 10.1145/3588037.3595401
Koki Kawamura, Shunichi Kasahara, M. Fukuoka, Katsutoshi Masai, Ryota Kondo, M. Sugimoto
Enhancing human capabilities through the use of multiple bodies has been a significant research agenda. When multiple bodies are operated synchronously in different environments, differences in object placement across those environments make it difficult to interact with objects simultaneously. If, instead, automatic control compensates for these differences so that a parallel task can be performed, the mismatch between the user’s and the robotic arm’s movements generates visuomotor incongruence, weakening the sense of embodiment. This can make it difficult to complete tasks or achieve goals, and may even cause frustration or anxiety. To address this issue, we have developed a system that enables parallel operation of multiple synchronized robotic arms by automatically assisting the arm the user’s gaze is not directed at, while maintaining the sense of embodiment over the arms.
Citations: 0
A Demonstration of Morphing Identity: Exploring Self-Other Identity Continuum through Interpersonal Facial Morphing
Pub Date: 2023-07-26 DOI: 10.1145/3588037.3595394
Kye Shimizu, Santa Naruse, Jun Nishida, Shunichi Kasahara
We explored continuous changes in self-other identity by designing an interpersonal facial morphing experience where the facial images of two users are blended and then swapped over time. To explore this with diverse social relationships, we conducted qualitative and quantitative investigations through public exhibitions. We found that there is a window of self-identification as well as a variety of interpersonal experiences in the facial morphing process. From these insights, we synthesized a Self-Other Continuum represented by a sense of agency and facial identity. This continuum has implications in terms of the social and subjective aspects of interpersonal communication, which enables further scenario design and could complement findings from research on interactive devices for remote communication.
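The blend-then-swap progression can be sketched as a simple cross-dissolve schedule; the linear ramp and per-pixel blend below are our assumptions for illustration, not the system's actual morphing pipeline:

```python
# Cross-dissolve sketch of the morphing schedule: the weight of the
# partner's face ramps from 0 (fully self) through 0.5 (fully blended)
# to 1 (fully swapped) over the session. The linear ramp is an assumption.

def other_face_weight(t_s: float, duration_s: float) -> float:
    """Fraction of the partner's face shown at time t_s into the session."""
    return min(max(t_s / duration_s, 0.0), 1.0)

def blend_pixel(self_px: float, other_px: float, w: float) -> float:
    """Per-pixel cross-dissolve between the two (aligned) face images."""
    return (1.0 - w) * self_px + w * other_px

# Halfway through a 60 s session the faces are evenly blended (w = 0.5).
```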
Citations: 0
Augmented Haptic VR Experience Combining Two Weight-Shifting Versatile Controllers
Pub Date: 2023-07-26 DOI: 10.1145/3588037.3595399
Yuhu Liu, Yuri Ishikawa, Yohei Fukuma, Yusuke Nakagawa
We designed a VR controller that integrates experimental haptic technology into a practical controller. The device consists of two independent controllers, each with a weight-shifting module that can provide vibration, impact, and shape perception, yet is sufficiently compact to be handled like a conventional commodity controller. Combining the two controllers allows the device to be held in different ways for various applications.
Citations: 0
Inclusive Quiet Room -for building an inclusive society-
Pub Date: 2023-07-26 DOI: 10.1145/3588037.3603420
Shoko Kimura, K. Ito, Ayaka Fujii, Rihito Tsuboi, Kazuki Okawa, Hibiki Kojima, K. Kitagawa, Yoshinori Natsume
A large percentage of people with autism or developmental disorders, which are mental disabilities, have sensory hypersensitivity. The spread of “quiet rooms” in which they can feel at ease in social life is therefore a necessary element in realizing a symbiotic society. However, the high cost of installing quiet rooms, which require highly soundproofed spaces isolated from the outside, is an obstacle to their widespread use. The Inclusive Quiet Room is a new concept of portable quiet room that combines an easy-to-construct instant house, immersive videos, and relaxing sounds. In addition to letting many people experience the benefits of such a room, the work proposes an image of future quiet rooms that can be easily constructed anywhere. In this paper, we analyze the effectiveness of the Inclusive Quiet Room, exhibited in France, based on survey data from 372 respondents. The analysis substantiates both the relaxation effects and the demand for quiet rooms. The room gives the feeling of being warmly embraced and secure. If all people, including those without mental disorders, could experience this embraced feeling, they would understand the need for, and benefits of, relaxing environments for people with sensory hypersensitivities.
Citations: 0