
SIGGRAPH Asia 2019 Technical Briefs: Latest Publications

Embedded Concave Micromirror Array-based See-through Light Field Near-eye Display
Pub Date : 2019-11-17 DOI: 10.1145/3355088.3365141
Hongbae S. Park, R. Hoskinson, B. Stoeber
We propose a direct-view see-through light field near-eye display (NED) using a semi-reflective embedded concave micromirror array (ECMMA) that can generate virtual images at different depths of focus. The ECMMA is a planar optical element having a thin, semi-reflective embedded metallic film for the mirror array. Being a flat element, the ECMMA has zero net refractive power, and the light rays originating from the background scene do not change their original direction of propagation when they pass the ECMMA. Therefore, the see-through view of the proposed ECMMA-NED looks clear with negligible disturbance to the quality of the background scene.
Citations: 1
Augmented Reality Guided Respiratory Liver Tumors Punctures: A Preliminary Feasibility Study
Pub Date : 2019-11-17 DOI: 10.1145/3355088.3365166
Ruotong Li, Tianpei Yang, Weixin Si, Xiangyun Liao, Qiong Wang, R. Klein, P. Heng
CT-guided radiofrequency ablation (RFA) has evolved rapidly over the past decade and has become a widely accepted treatment option for patients with liver tumors. However, it is hard for doctors to locate tumors precisely while avoiding damage to the surrounding at-risk structures using 2D CT images, which provide only limited static information, especially in the presence of respiratory motion. This paper presents a novel augmented reality guidance modality for improving the precision of liver tumor punctures by providing visual cues of 3D personalized anatomy under respiratory motion. The optical see-through display devices Epson MoveRio BT300 and Microsoft HoloLens are used to mix pre-operative 3D personalized data with the intra-operative physical scene. An augmented reality based surgical navigation pipeline is proposed to transform raw medical data into virtual guidance information and precisely superimpose this information onto the real experimental animal. In addition, to alleviate the difficulty of needle placement induced by respiratory motion, we propose a correlation model that predicts the tumor position in real time via regression-based respiration state estimation and a statistical tumor motion model. We experimentally validated the proposed system on in vivo beagle dogs with artificial lesions; it effectively improves puncture efficiency and precision. The proposed augmented reality modality is a general strategy for guiding doctors in performing precise percutaneous punctures under respiratory conditions and has the potential to be used for other surgical navigation tasks.
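The correlation model described in the abstract can be illustrated with a minimal sketch: a linear regressor fitted from a surrogate respiration signal to tumor displacement. The data values, the linear form, and the helper name `predict_tumor_position` are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

# Hypothetical training data: a surrogate respiration signal (e.g. chest
# marker height) paired with observed tumor displacement along one axis.
resp_signal = np.array([0.0, 0.2, 0.5, 0.8, 1.0, 0.8, 0.5, 0.2])
tumor_disp  = np.array([0.0, 0.3, 0.9, 1.5, 1.9, 1.5, 0.9, 0.3])

# Fit a linear correlation model disp ~ a * resp + b by least squares.
A = np.stack([resp_signal, np.ones_like(resp_signal)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, tumor_disp, rcond=None)

def predict_tumor_position(resp_now: float) -> float:
    """Predict tumor displacement from the current respiration state."""
    return a * resp_now + b
```

In practice the paper combines regression-based respiration state estimation with a statistical tumor motion model; a single linear fit is only the simplest possible stand-in.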
Citations: 6
The Power of Box Filters: Real-time Approximation to Large Convolution Kernel by Box-filtered Image Pyramid
Pub Date : 2019-11-17 DOI: 10.1145/3355088.3365143
Tianchen Xu, Xiaohua Ren, E. Wu
This paper presents a novel solution for approximating large convolution kernels by leveraging a weighted box-filtered image pyramid. Convolution filters are widely used but remain compute-intensive for real-time rendering when the kernel size is large. Our algorithm approximates convolution kernels such as Gaussian and cosine filters with two phases of downsampling and upsampling on a GPU. The computational complexity depends only on the input image resolution and is independent of the kernel size. Therefore, our method can be applied to nonuniform blurs, irradiance probe generation, and ray-traced glossy global illumination in real time, with effective and efficient performance.
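The underlying principle, that box filters can be evaluated in time independent of kernel radius and compose into good approximations of smooth kernels, can be sketched on the CPU. This is a simplified 1D illustration via the central limit theorem, not the paper's GPU pyramid implementation:

```python
import numpy as np

def box_blur_1d(x: np.ndarray, radius: int) -> np.ndarray:
    """Box filter via running sums: O(n) cost regardless of radius."""
    pad = np.pad(x, radius, mode='edge')
    c = np.concatenate([[0.0], np.cumsum(pad)])
    w = 2 * radius + 1
    # Each output sample is the mean over a (2*radius + 1) window.
    return (c[w:] - c[:-w]) / w

def approx_gaussian_1d(x: np.ndarray, radius: int, passes: int = 3) -> np.ndarray:
    """Approximate a Gaussian blur by iterating a box filter; by the
    central limit theorem the iterated kernel tends toward a Gaussian."""
    for _ in range(passes):
        x = box_blur_1d(x, radius)
    return x
```

Each box pass costs O(n) thanks to the running-sum trick, mirroring the paper's key property that the cost is independent of the kernel size.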
Citations: 0
Unpaired Sketch-to-Line Translation via Synthesis of Sketches
Pub Date : 2019-11-17 DOI: 10.1145/3355088.3365163
Gayoung Lee, Dohyun Kim, Y. Yoo, Dongyoon Han, Jung-Woo Ha, Jaehyuk Chang
Converting hand-drawn sketches into clean line drawings is a crucial step for diverse artistic works such as comics and product designs. Recent data-driven methods using deep learning have shown great ability to automatically simplify sketches on raster images. Since it is difficult to collect or generate paired sketch and line images, lack of training data is a main obstacle to using these models. In this paper, we propose a training scheme that requires only unpaired sketch and line images for learning sketch-to-line translation. To do this, we first generate realistic paired sketch and line images from unpaired ones using rule-based line augmentation and unsupervised texture conversion. Next, with our synthetic paired data, we train a model for sketch-to-line translation using supervised learning. Compared to unsupervised methods that use cycle-consistency losses, our model shows better performance at removing noisy strokes. We also show that our model simplifies complicated sketches better than models trained on a limited amount of handcrafted paired data.
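The rule-based synthesis step can be caricatured in a few lines: degrade a clean line drawing into a sketch-like image by overlaying jittered ghost strokes and speckle noise, yielding a synthetic (sketch, line) pair. The function name and the specific rules below are illustrative assumptions, far simpler than the paper's augmentation:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_sketch(line_img: np.ndarray, n_jitter: int = 2,
                      noise_prob: float = 0.01) -> np.ndarray:
    """Turn a clean line drawing (values in [0, 1], 1 = ink) into a rough
    'sketch' by overlaying randomly shifted fainter copies plus speckle.
    A toy stand-in for the paper's rule-based line augmentation."""
    sketch = line_img.astype(float).copy()
    for _ in range(n_jitter):
        dy, dx = rng.integers(-2, 3, size=2)
        shifted = np.roll(np.roll(line_img, dy, axis=0), dx, axis=1)
        sketch = np.maximum(sketch, 0.6 * shifted)   # fainter ghost strokes
    speckle = rng.random(line_img.shape) < noise_prob
    return np.clip(np.maximum(sketch, speckle * 0.5), 0.0, 1.0)
```

Pairs of (`synthesize_sketch(line)`, `line`) would then serve as supervised training data for the sketch-to-line model.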
Citations: 1
ChinaStyle: A Mask-Aware Generative Adversarial Network for Chinese Traditional Image Translation
Pub Date : 2019-11-17 DOI: 10.1145/3355088.3365148
Yuan Wang, W. Zhang, Peng Chen
GANs make it possible to generate artworks effectively given appropriate training collections. However, most training datasets contain paintings from only one artist or only one category, and few training datasets exist for traditional Chinese figure paintings. This paper presents a new high-quality dataset named the ChinaStyle Dataset, comprising six categories and 1913 images in total. We further propose a Mask-Aware Generative Adversarial Network (MA-GAN) to transfer realistic portraiture to different styles of Chinese paintings. Unlike existing methods, MA-GAN trains a single model once on our unpaired dataset. Besides, a mask-aware strategy is used to generate the freehand style of Chinese paintings. In addition, a color-preserving loss is proposed to alleviate the color-loss problem. Experimental results and a user study demonstrate that MA-GAN achieves natural and competitive performance compared with existing methods.
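The abstract does not specify the form of the color-preserving loss. One plausible reading, sketched below purely as an assumption, penalizes the L1 distance between low-pass-filtered color fields of the input photograph and the stylized output, so local color is kept while fine strokes are free to change:

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 4) -> np.ndarray:
    """Cheap low-pass: average over k x k blocks (H and W divisible by k)."""
    h, w, c = img.shape
    return img.reshape(h // k, k, w // k, k, c).mean(axis=(1, 3))

def color_preserving_loss(src: np.ndarray, out: np.ndarray, k: int = 4) -> float:
    """L1 distance between blurred color fields of source and output;
    a hypothetical stand-in for the paper's color-preserving loss."""
    return float(np.abs(box_blur(src, k) - box_blur(out, k)).mean())
```

In a training loop this term would be added, with some weight, to the usual adversarial loss.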
Citations: 4
Bezalel - Towards low-cost pin-based shape displays
Pub Date : 2019-11-17 DOI: 10.1145/3355088.3365144
Pedro de Almeida Sacramento, R. Ferreira, M. Andrade
The use of shape-changing interfaces is widely discussed in the HCI field as a promising strategy for the physical representation of digital data. Such interfaces are expected to greatly impact a wide range of applications, such as virtual reality, architectural design, and education of blind people. Unfortunately, the widespread use of pin-based shape displays is currently limited by their typically high manufacturing costs, mainly due to the number of actuators, which tends to grow quadratically with display resolution. Given that, we present Bezalel: a solution that allows 2 actuators to efficiently actuate pin-based shape displays with n pins. Our solution is able to fully render any 2.5D shape within a time proportional to , which outperforms the 2014 Lemelson-MIT "Use it!" graduate winner solution using half as many actuators. Additionally, results show that, for specific shapes, our approach can perform as well as the most efficient and much more expensive technologies currently used. We expect that our solution will make it possible to create low-cost actuated surfaces of different sizes, from small tactile objects to large structures such as shape-changing floors.
Citations: 2
Accelerated Volume Rendering with Chebyshev Distance Maps
Pub Date : 2019-11-17 DOI: 10.1145/3355088.3365164
Lachlan Deakin, M. Knackstedt
Volume rendering has useful applications in emerging technologies such as virtual and augmented reality. The high frame rate targets of these technologies pose a problem for volume rendering because of its very high computational complexity compared with conventional surface rendering. We developed an efficient empty space skipping algorithm for accelerating volume rendering. A distance map is generated that indicates the Chebyshev distance to the nearest occupied region (one with non-transparent voxels) within the volume. The distance map is used to efficiently skip empty regions during volume ray casting. We show improved performance over state-of-the-art empty space skipping techniques.
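The idea can be sketched directly: precompute, for every voxel, the chessboard (Chebyshev) distance to the nearest occupied voxel; a ray can then advance that many cells at once through guaranteed-empty space. Below is a brute-force 2D sketch (assuming at least one occupied cell and grid-aligned unit steps; a production version would use a fast distance transform on the GPU):

```python
import numpy as np

def chebyshev_distance_map(occupied: np.ndarray) -> np.ndarray:
    """Chebyshev distance from every cell to the nearest occupied cell,
    by brute force over a small grid (clarity over speed)."""
    coords = np.argwhere(occupied)          # assumes occupied.any()
    dist = np.zeros(occupied.shape, dtype=np.int64)
    for idx in np.ndindex(occupied.shape):
        dist[idx] = np.abs(coords - idx).max(axis=1).min()
    return dist

def march(occupied, dist, start, step):
    """March a grid-aligned ray, skipping dist[p] cells when possible;
    returns the first occupied cell hit, or None if the ray exits."""
    p = np.array(start)
    while 0 <= p[0] < occupied.shape[0] and 0 <= p[1] < occupied.shape[1]:
        if occupied[tuple(p)]:
            return tuple(p)
        p = p + step * max(int(dist[tuple(p)]), 1)
    return None
```

Stepping `dist[p]` cells along any unit step is safe because every cell within Chebyshev distance `dist[p] - 1` of `p` is known to be empty.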
Citations: 4
Effect of Attention Adaptive Personal Audio Deliverable System on Digital Signage
Pub Date : 2019-11-17 DOI: 10.1145/3355088.3365171
N. Kuratomo, Kazuki Yamada, Soh Masuko, T. Yamanaka, K. Zempo
The purpose of this study is to improve the comfort of spaces that use digital signage and the effectiveness of the displayed advertisements. To that end, we developed a system that delivers sound only to the people who need it, such as those watching the screen. The system detects head direction and position with a camera and produces highly directional sound with parametric speakers. The sound volume increases only when the head faces the camera, and the speaker to use is selected automatically according to the person's position. Because the sound is highly directional, it is not delivered to anyone who does not need it. An experiment using the system was conducted. Compared to a conventional loudspeaker, the system improves the comfort of the space while maintaining the advertising effect, suggesting that it can create a more comfortable environment for customers.
Citations: 2
A Decomposition Method of Object Transfiguration
Pub Date : 2019-11-17 DOI: 10.1145/3355088.3365151
Seung Joon Lee, Keon-Woo Kang, Suk-ju Kang, Siyeong Lee
Existing deep learning-based object transfiguration methods are based on unsupervised image-to-image translation, which shows reasonable performance. However, previous methods often fail in tasks where the shape of an object changes significantly, and the shape and texture of the original object remain in the converted image. To address these issues, we propose a novel method that decomposes an object transfiguration task into two subtasks: object removal and object synthesis. This prevents the original object from affecting the generated object and makes the generated object better suited to the background. We then explicitly formulate each task, distinguishing background from object using instance information (e.g., object segmentation masks). Unlike other methods, our model is unconstrained by the position, shape, and size of the original object. We show qualitative and quantitative comparisons with other methods, demonstrating the effectiveness of the proposed method.
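The two-stage decomposition can be summarized as a compositing pipeline. In this toy numpy sketch the removal network is replaced by a constant background fill and the synthesis network by a masked paste; both substitutions are assumptions for illustration only:

```python
import numpy as np

def remove_object(img: np.ndarray, mask: np.ndarray, fill) -> np.ndarray:
    """Stage 1: erase the object region and fill with a background value
    (a real system would use an inpainting/removal network)."""
    out = img.copy()
    out[mask] = fill
    return out

def synthesize_object(bg: np.ndarray, obj: np.ndarray,
                      obj_mask: np.ndarray) -> np.ndarray:
    """Stage 2: composite a generated object onto the cleaned background
    (a real system would use a mask-conditioned synthesis network)."""
    out = bg.copy()
    out[obj_mask] = obj[obj_mask]
    return out

def transfigure(img, src_mask, obj, obj_mask, fill):
    """Removal then synthesis, so the original object cannot leak into
    the generated one."""
    return synthesize_object(remove_object(img, src_mask, fill), obj, obj_mask)
```

The point of the decomposition is visible in the data flow: the synthesis stage only ever sees the cleaned background, never the original object's shape or texture.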
Citations: 0
Ray Guiding for Production Lightmap Baking
Pub Date : 2019-11-17 DOI: 10.1145/3355088.3365167
Ari Silvennoinen, Peter-Pike J. Sloan
We present a ray guiding technique for improving computation times in production lightmap baking. Compared to the state of the art, our method has better scalability and lower variance.
Citations: 6