
SIGGRAPH Asia 2020 Emerging Technologies: Latest Publications

Dual Body: Method of Tele-Cooperative Avatar Robot with Passive Sensation Feedback to Reduce Latency Perception
Pub Date : 2020-12-04 DOI: 10.1145/3415255.3422893
Vibol Yem, Kentaro Yamaoka, Gaku Sueta, Y. Ikei
Dual Body was developed as a telexistence or telepresence system in which the user does not need to continuously operate an avatar robot but is still able to passively perceive feedback sensations when the robot performs actions. The system recognizes user speech commands, and the robot performs the task cooperatively. The proposed combination of passive sensation feedback and robot cooperation greatly reduces the perception of latency and the feeling of fatigue, which increases the quality of experience and task efficiency. In the demo experience, participants will be able to command the robot from individual rooms via a URL and RoomID, and they will perceive sound and visual feedback, such as images or landscapes of the campus of Tokyo Metropolitan University, from the robot as it travels.
Citations: 0
OmniPhotos: Casual 360° VR Photography with Motion Parallax
Pub Date : 2020-12-04 DOI: 10.1145/3415255.3422884
Tobias Bertel, Mingze Yuan, Reuben Lindroos, Christian Richardt
Until now, immersive 360° VR panoramas could not be captured both casually and reliably: state-of-the-art approaches involve time-consuming or expensive capture processes that prevent the casual capture of real-world VR environments. Existing approaches are also often limited in their supported range of head motion. We introduce OmniPhotos, a novel approach for casually and reliably capturing high-quality 360° VR panoramas. Our approach only requires a single sweep of a consumer 360° video camera as input, which takes less than 3 seconds with a rotating selfie stick. The captured video is transformed into a hybrid scene representation consisting of a coarse scene-specific proxy geometry and optical flow between consecutive video frames, enabling 5-DoF real-world VR experiences. The large capture radius and 360° field of view significantly expand the range of head motion compared to previous approaches. Among all competing methods, ours is the simplest and, by an order of magnitude, the fastest. We have captured more than 50 OmniPhotos and show video results for a large variety of scenes. We will make our code and datasets publicly available.
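Rendering a novel viewpoint from a single circular camera sweep requires locating that viewpoint between the captured frames. The sketch below shows one ingredient of such a pipeline: picking the two sweep frames that straddle a novel view azimuth and computing their blend weights. This is a minimal illustration under assumed uniform frame spacing, not the actual OmniPhotos implementation; the function name is illustrative.

```python
import numpy as np

def nearest_frames_on_circle(view_angle, num_frames):
    """Given a novel-view azimuth (radians) and the number of frames
    captured on one circular sweep, return the indices of the two
    neighbouring capture frames and their angular blend weights."""
    step = 2.0 * np.pi / num_frames
    a = view_angle % (2.0 * np.pi)
    i = int(a // step) % num_frames   # frame just before the view angle
    j = (i + 1) % num_frames          # frame just after
    t = (a - i * step) / step         # fractional position between them
    return i, j, 1.0 - t, t           # blend weights sum to 1

# Example: 90 frames in a sweep, novel viewpoint at 45 degrees.
i, j, wi, wj = nearest_frames_on_circle(np.pi / 4, 90)
```

In a full renderer, the two selected frames would be warped with the proxy geometry and the precomputed optical flow before being blended with these weights.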
Citations: 2
HaptoMapping: Visuo-Haptic AR System using Projection-based Control of Wearable Haptic Devices
Pub Date : 2020-12-04 DOI: 10.1145/3415255.3422891
Yamato Miyatake, T. Hiraki, Tomosuke Maeda, D. Iwai, Kosuke Sato
Visuo-haptic augmented reality (AR) systems that present visual and haptic sensations in a spatially and temporally consistent manner have the potential to improve the performance of AR applications. However, conventional systems have issues such as enclosing the user's view with a display, restricting the workspace to a limited amount of flat space, or altering the visual information presented. In this paper, we propose "HaptoMapping," a novel projection-based AR system that can present consistent visuo-haptic sensations on a non-planar physical surface without installing any visual displays on users and while preserving the quality of the visual information. We implemented a prototype of HaptoMapping consisting of a projection system and a wearable haptic device. We also introduce three application scenarios in daily scenes.
Citations: 2
CoiLED Display: Make Everything Displayable
Pub Date : 2020-12-04 DOI: 10.1145/3415255.3422889
Saya Suzunaga, Yuichi Itoh, Kazuyuki Fujita, Ryo Shirai, T. Onoye
We propose CoiLED Display, a flexible and scalable display that transforms ordinary objects in our environment into displays simply by coiling the device around them. CoiLED Display consists of a strip-shaped display unit with a single row of attached LEDs; after a calibration process, it can display information while wrapped onto a target object. The calibration required to fit the system to each object is achieved by capturing the entire object from multiple angles with an RGB camera, which recognizes the relative positional relationships among the LEDs. The advantage of this approach is that the calibration is quite simple yet robust, even if the coiled strips are misaligned or overlap each other. We demonstrated a proof-of-concept prototype using strips with a 5-mm width and LEDs mounted at 2-mm intervals. This paper discusses various example applications of the proposed system.
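Once the camera calibration has established where each LED sits on the object, displaying an image reduces to sampling the image at each LED's calibrated position. The snippet below sketches that sampling step under the assumption that calibration yields normalised 2-D coordinates per LED; it is an illustration of the idea, not the authors' code.

```python
import numpy as np

def sample_image_for_leds(led_uv, image):
    """Assign each LED the colour of the image pixel its calibrated
    2-D position falls on.  led_uv is (N, 2) in [0, 1] normalised
    image coordinates; image is (H, W, 3)."""
    h, w, _ = image.shape
    cols = np.clip((led_uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    rows = np.clip((led_uv[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    return image[rows, cols]

# Toy example: a 2x2 image, two LEDs at opposite corners.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (255, 0, 0)   # top-left red
img[1, 1] = (0, 0, 255)   # bottom-right blue
colors = sample_image_for_leds(np.array([[0.0, 0.0], [1.0, 1.0]]), img)
```

Because the mapping is per-LED, misaligned or overlapping strips only change each LED's calibrated coordinate, not the sampling logic, which matches the robustness the abstract claims for the calibration.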
Citations: 0
Realistic Volumetric 3D Display Using Physical Materials
Pub Date : 2020-12-04 DOI: 10.1145/3415255.3422879
Ray Asahina, Takashi Nomoto, Takatoshi Yoshida, Yoshihiro Watanabe
Conventional swept volumetric displays can provide accurate physical cues for depth perception. However, the quality of texture reproduction is not high because these displays use high-speed projectors with low bit depth and low resolution. In this study, to address this limitation of swept volumetric displays while retaining their advantages, a new swept volumetric three-dimensional (3D) display is designed using physical materials as screens. Physical materials are directly used to reproduce textures on a displayed 3D surface. Further, our system can achieve hidden-surface removal based on real-time viewpoint tracking.
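Viewpoint-tracked hidden-surface removal amounts to deciding, per surface element, whether it faces the tracked viewer. The following is a simplified back-facing test standing in for whatever culling the authors actually implement; the function name and point-plus-normal representation are assumptions for illustration.

```python
import numpy as np

def visible_points(points, normals, eye):
    """Keep the surface points whose normals face the tracked
    viewpoint: a point is visible when the dot product of its normal
    with the direction toward the eye is positive."""
    to_eye = eye - points
    facing = (normals * to_eye).sum(axis=1) > 0.0
    return points[facing]

# Two opposite-facing points on a unit sphere, viewer on the +z axis.
pts = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
nrm = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
vis = visible_points(pts, nrm, np.array([0.0, 0.0, 3.0]))
```

On a swept volumetric display the result of such a test would gate which voxels are lit each sweep, so occluded geometry is suppressed as the viewer moves.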
Citations: 1
Interactive Minimal Latency Laser Graphics Pipeline
Pub Date : 2020-12-04 DOI: 10.1145/3415255.3422885
Jayson Haebich, C. Sandor, Á. Cassinelli
We present the design and implementation of a "Laser Graphics Processing Unit" (LGPU) featuring a proposed re-configurable graphics pipeline capable of minimal-latency interactive feedback without the need for computer communication. This is a novel approach for creating interactive graphics in which a simple program describes the interaction on a vertex. Similar in design to a geometry or fragment shader on a GPU, these programs are uploaded on initialisation and do not require input from any external micro-controller while running. The interaction shader takes input from a light sensor and updates the vertex and fragment shaders, an operation that can be parallelised. Once loaded onto our prototype LGPU, the pipeline can create laser graphics that react within 4 ms of interaction and can run without input from a computer. The pipeline achieves this low latency by having the interaction shader communicate with the geometry and vertex shaders that are also running on the LGPU. This enables the creation of low-latency displays such as car counters, musical instrument interfaces, and non-touch projected widgets or buttons. In our testing we achieved a reaction time of 4 ms at a range of up to 15 m.
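The key property of the interaction shader is that it is a small per-vertex program driven only by the light sensor, so it parallelises across vertices and needs no host computer. The toy sketch below mimics that structure in plain Python; the vertex layout and field names are illustrative stand-ins, not the LGPU's actual programming model.

```python
def interaction_shader(vertex, sensor_hit):
    """Per-vertex program in the spirit of the paper's interaction
    shader: given the current vertex state and whether the photodiode
    detected reflected laser light at it, return the updated state."""
    x, y, touched = vertex
    return (x, y, touched or sensor_hit)

def run_pipeline(vertices, sensor_hits):
    """One pass of the minimal pipeline.  The interaction stage runs
    independently per vertex, which is what makes it parallelisable
    on dedicated hardware."""
    return [interaction_shader(v, hit) for v, hit in zip(vertices, sensor_hits)]

# Example: a 3-vertex shape where only the middle vertex is being touched.
verts = [(0.0, 0.0, False), (0.5, 0.5, False), (1.0, 0.0, False)]
out = run_pipeline(verts, [False, True, False])
```

On the real device the updated vertex state would feed the geometry and vertex stages directly on the LGPU, which is where the sub-frame reaction time comes from.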
Citations: 2
Dynamic Projection Mapping with Networked Multi-projectors Based on Pixel-parallel Intensity Control
Pub Date : 2020-12-04 DOI: 10.1145/3415255.3422888
Takashi Nomoto, Wanlong Li, Hao-Lun Peng, Yoshihiro Watanabe
We present a new method of mapping projections onto dynamic scenes by using multiple high-speed projectors. The proposed method controls the intensity in a pixel-parallel manner for each projector. As each projected image is updated in real time with low latency, adaptive shadow removal can be achieved for a projected image even in a complicated dynamic scene. Additionally, our pixel-parallel calculation method allows a distributed system configuration so that the number of projectors can be increased by networked connections for high scalability. We demonstrated seamless mapping onto dynamic scenes at 360 fps by using ten cameras and four projectors.
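One way to read the pixel-parallel intensity control is as a per-pixel budget split among the projectors that can currently reach that pixel: if one projector is shadowed, the remaining ones pick up its share. The sketch below implements that simplified reading; it is an assumed model for illustration, not the paper's exact control law.

```python
import numpy as np

def per_pixel_intensities(target, visibility):
    """Split a target intensity image evenly among the projectors that
    can see each pixel, so occluding one projector leaves the summed
    brightness on the surface unchanged.
    target: (H, W) floats; visibility: (K, H, W) booleans."""
    count = visibility.sum(axis=0)                       # projectors per pixel
    share = np.where(count > 0, target / np.maximum(count, 1), 0.0)
    return visibility * share                            # (K, H, W)

# Two projectors; the right pixel is shadowed for projector 0.
target = np.full((1, 2), 0.8)
vis = np.array([[[True, False]],
                [[True, True]]])
out = per_pixel_intensities(target, vis)
```

Because the computation is independent per pixel and per projector, it maps naturally onto the distributed, networked configuration the abstract describes.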
Citations: 7
Mid-air Thermal Display via High-intensity Ultrasound
Pub Date : 2020-12-04 DOI: 10.1145/3415255.3422895
Takaaki Kamigaki, Shun Suzuki, H. Shinoda
This paper proposes a mid-air system that provides both heating and cooling sensations to the hand via a high-intensity ultrasound spot (Fig. 1, left). We employ airborne ultrasound phased arrays (AUPAs), which can generate a high-intensity focal point at an arbitrary position in the air. By changing the position of the focal point relative to the hand, our system can provide each thermal sensation.
Citations: 5
Bubble Mirror: An Interactive Face Image Display Using Electrolysis Bubbles
Pub Date : 2020-12-04 DOI: 10.1145/3415255.3422890
Ayaka Ishii, Namiki Tanaka, I. Siio
Citations: 0
High-Speed Human Arm Projection Mapping with Skin Deformation
Pub Date : 2020-12-04 DOI: 10.1145/3415255.3422887
Hao-Lun Peng, Yoshihiro Watanabe
Augmenting the human arm surface via projection mapping can have a great impact on our daily lives with regard to entertainment, human-computer interaction, and education. However, conventional methods ignore skin deformation and have a high latency from motion to projection, which degrades the user experience. In this paper, we propose a projection mapping system that solves these problems. First, we combine a state-of-the-art parametric deformable surface model with an efficient regression-based method that compensates for the accuracy lost to skin deformation. The compensation method modifies the texture coordinates, using joint-tracking results, to achieve high-speed and highly accurate image generation for projection. Second, we develop a high-speed system that reduces the latency from motion to projection to within 10 ms. Compared to conventional methods, this system provides more realistic experiences.
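A regression from joint-tracking input to texture-coordinate offsets can be illustrated with the simplest possible model: a linear fit per UV coordinate, trained on calibration pairs of joint angle and observed offset. The abstract does not specify the regressor, so the linear least-squares form below is an assumption chosen for clarity, and all names are illustrative.

```python
import numpy as np

def fit_uv_compensation(joint_angles, uv_offsets):
    """Fit uv_offset ~ a * angle + b per coordinate by least squares,
    a toy stand-in for the paper's regression-based compensation.
    joint_angles: (N,); uv_offsets: (N, 2).  Returns (2, 2): row 0 is
    the slope per coordinate, row 1 the intercept."""
    A = np.stack([joint_angles, np.ones_like(joint_angles)], axis=1)
    coef, *_ = np.linalg.lstsq(A, uv_offsets, rcond=None)
    return coef

def compensate(uv, angle, coef):
    """Shift a texture coordinate by the offset the model predicts
    for the current joint angle."""
    return uv + coef[0] * angle + coef[1]

# Synthetic calibration data: offset grows linearly with elbow angle.
angles = np.array([0.0, 0.5, 1.0])
offsets = np.array([[0.00, 0.00], [0.05, 0.01], [0.10, 0.02]])
coef = fit_uv_compensation(angles, offsets)
uv = compensate(np.array([0.3, 0.7]), 1.0, coef)
```

Evaluating such a model is a handful of multiply-adds per vertex, which is consistent with the sub-10-ms motion-to-projection budget the paper targets.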
Citations: 3