
Latest publications from ACM SIGGRAPH 2015 Posters

Tracking water droplets under descent and deformation
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787651
Caleb Brose, M. Thuo, J. Sheaffer
We present a system for tracking the movement and deformation of water drops in free fall and collision. Our data comes from a high-speed camera recording 60,000 frames per second. The data is noisy, and is compromised by an unfortunate camera angle and poor lighting that contribute caustics, reflections, and shadows to the image. Given an input video, we apply techniques from image processing, computer vision, and computational geometry to track the droplet's position and shape. While our tool could monitor the movement of transparent fluids in a more general setting, our data specifically depicts water colliding with hydrophobic materials. The output of our processing is used by materials scientists to improve our understanding of the interactions between water and hydrophobic surfaces. These interactions have direct application in the materials engineering of next-generation printing technologies.
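The abstract does not detail the tracking pipeline. As a hedged illustration only, one elementary building block of such tracking — segmenting a dark droplet from a bright background by thresholding, then reporting its centroid and pixel area — might look like the sketch below; the `track_droplet` function and threshold value are hypothetical, not from the paper.

```python
def track_droplet(frame, threshold=128):
    """Toy segmentation step: mark pixels darker than `threshold`
    as droplet, then return the droplet centroid (x, y) and the
    droplet pixel area, or None if no droplet pixels are found."""
    pts = [(x, y) for y, row in enumerate(frame)
                  for x, v in enumerate(row) if v < threshold]
    if not pts:
        return None
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    return (cx, cy), len(pts)

# Synthetic 8x8 frame: bright background (255) with a dark 2x2 "droplet".
frame = [[255] * 8 for _ in range(8)]
for y in (3, 4):
    for x in (2, 3):
        frame[y][x] = 10
(cx, cy), area = track_droplet(frame)  # centroid (2.5, 3.5), area 4
```

Real frames with caustics and reflections would of course need far more than a global threshold; this only fixes the vocabulary (mask, centroid, area) used above.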
Citations: 0
Retargeting 3D objects and scenes
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787655
Chun-Kai Huang, Yi-Ling Chen, I-Chao Shen, Bing-Yu Chen
We introduce an interactive method suitable for retargeting both 3D objects and scenes under a general framework. Initially, an input object or scene is decomposed into a collection of constituent components, each enclosed by a control bounding volume that captures the internal structure of the object or the semantic groupings of objects in the scene. The overall retargeting is accomplished through a constrained optimization that manipulates the control bounding volumes. Without inferring the intricate dependencies between the components, we define a minimal set of constraints that maintains the spatial arrangement and connectivity between components, regularizing the optimization toward valid retargeting results. The default retargeting behavior can then be easily altered by additional semantic constraints imposed by users.
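The optimization itself is not given in the abstract. As a loose sketch of the connectivity constraint alone, a 1-D analogue — scaling a row of adjacent bounding intervals and re-packing them so neighbors stay in contact — could be written as follows; `retarget_1d` is an illustrative stand-in, not the authors' formulation.

```python
def retarget_1d(boxes, scale):
    """1-D toy of bounding-volume retargeting: scale each box's
    width, then lay the boxes out left-to-right so that adjacent
    boxes remain connected (no gaps, no overlaps).
    `boxes` is a list of (start, width) pairs assumed adjacent."""
    cursor = boxes[0][0] * scale
    out = []
    for _, width in boxes:
        out.append((cursor, width * scale))
        cursor += width * scale
    return out

# Two touching boxes stretched by 2x: they stay touching.
print(retarget_1d([(0, 2), (2, 3)], 2.0))  # [(0.0, 4.0), (4.0, 6.0)]
```

The paper's setting is 3-D and allows per-volume, non-uniform manipulation, but the invariant sketched here — neighbors stay connected after editing — is the same.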
Citations: 0
Decomposition of 32 bpp into 16 bpp textures with alpha
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792610
Nobuki Yoda, T. Igarashi
In 2D game graphics, textures are packed into a single texture called a sprite sheet in order to achieve efficient rendering. The sprite sheet can be compressed to save memory by using various compression methods such as block-based compressions and 16 bpp (bits per pixel) tone reduction. These methods are not without some problems, though. Block-based compressions are GPU-dependent, and high-quality compressions such as ASTC [Nystad et al. 2012] are often unavailable on mobile devices. 16 bpp tone reduction--often used with dithering--can create undesirable noise when it is scaled up (Figure 1c).
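To make the 16 bpp tone reduction concrete, here is a minimal sketch of quantizing an 8-bit RGBA pixel down to RGBA4444 (4 bits per channel, 16 bpp total); the helper name and rescaling convention are illustrative, not from the paper.

```python
def to_rgba4444(r, g, b, a):
    """Quantize 8-bit channels to 4 bits each (RGBA4444) and map
    back to 0..255 for display: keep the high nibble and multiply
    by 17, since 15 * 17 == 255."""
    quantize = lambda v: (v >> 4) * 17
    return tuple(quantize(v) for v in (r, g, b, a))

# 256 tones per channel collapse to 16 -- the source of the banding
# that dithering then trades for the noise mentioned above.
print(to_rgba4444(255, 128, 37, 255))  # (255, 136, 34, 255)
```

The collapse from 256 to 16 tones per channel is what produces visible banding, which dithering hides at native resolution but which re-emerges as noise under magnification.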
Citations: 0
Reconfigurable three-dimensional prototype system using digital materials
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792657
Keita Sekijima, Hiroya Tanaka
Digital materials are discrete elements, such as LEGO blocks, that can form a kind of reconfigurable 3D matter. Using a digital material rather than a continuous one has two advantages. First, the form is easy to change after shaping, simply by assembling and disassembling the elements. Second, an error in a single part never affects the overall form during shaping, because the joint system connects the elements exactly. Much existing research on digital materials focuses on modular connections made by press fitting or bonding; such materials cannot be assembled and disassembled smoothly once shaped. In our research, we designed a digital material called the "Kelvin Block" (figure 1a), specialized for smooth reconfiguration, and developed a machine, the "3D Assembler" (figure 1b), to arrange Kelvin Blocks automatically. A Kelvin Block measures 40mm x 40mm x 40mm, a size optimized for the volume of the joint system.
Citations: 5
Photometric compensation for practical and complex textures
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787647
N. Hashimoto, K. Kosaka
We propose a photometric compensation method for projecting arbitrary images onto the practical surfaces of everyday life. Although many previous proposals achieved fine compensation in their experimental environments [Nayar et al. 2003], they cannot support practical targets that include high-contrast texture. To adapt to such situations, a time-consuming iterative process with camera feedback is needed. Even when iterative processing is applied, fine compensation cannot be obtained, because no camera pixel of a projector-camera system (procam) corresponds perfectly to a pixel of the projector [Mihara et al. 2014].
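The abstract leaves the compensation model implicit. The idealized per-pixel version that this line of work builds on can be sketched as below — divide the desired intensity by the surface reflectance and clip to the projector's range. The function and its arguments are a hedged toy, not the proposed method; indeed, the point of the paper is that this ideal breaks down on high-contrast textures.

```python
def compensate(desired, reflectance, eps=1e-3):
    """Idealized radiometric compensation: to make a textured
    surface appear as `desired`, project desired / reflectance,
    clipped to the projector's [0, 1] output range. Dark texture
    regions (low reflectance) saturate, which is exactly why
    high-contrast textures are hard to compensate."""
    return [min(1.0, d / max(r, eps))
            for d, r in zip(desired, reflectance)]

# A mid-gray target over a white patch vs. a dark-gray patch:
print(compensate([0.5, 0.5], [1.0, 0.25]))  # [0.5, 1.0] -- second patch clips
```

The clipped second value shows the failure mode: the projector cannot output the 2.0 that full compensation of the dark patch would require.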
Citations: 4
Sketch dance stage
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792646
S. Mizuno, Marino Isoda, Rei Ito, Mei Okamoto, Momoko Kondo, Saya Sugiura, Yuki Nakatani, M. Hirose
Drawing in a sketchbook is one of the most familiar art forms, and people of all ages can enjoy it. Many CG applications have therefore been developed that let a user create 2D and 3D CG images through drawing operations [Kondo et al. 2013]. Dancing, likewise, is familiar to many people. Digital content that mixes drawing and dancing could therefore be very attractive.
Citations: 1
Phygital field: integrated field with visible images and robot swarm controlled by invisible images
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792604
T. Hiraki, Issei Takahashi, Shotaro Goto, S. Fukushima, T. Naemura
Forming images with a swarm of mobile robots has emerged as a new platform for computer entertainment. Each robot carries colored lighting, and the swarm renders various abstract patterns by combining lighting and locomotion.
Citations: 4
Hands-free gesture operation for maintenance work using finger-mounted acceleration sensor
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792609
Toshiaki Nakasu, T. Ike, Kazunori Imoto, Yasunobu Yamauchi
In the maintenance of electric power control panels, a worker must do a great deal of manual work, such as pushing buttons and turning selector switches on and off, so a hands-free gesture operating system is needed. Tsukada [Tsukada et al. 2002] proposed a gesture operating system using an acceleration sensor and switches. Although controlling a home appliance by gesture is a simple task, users must combine gestures with switching on and off to perform more complicated tasks, such as controlling and recording documents during maintenance work, which makes the system complicated. We propose a novel switch-less assist system for maintenance work with a simple structure that recognizes gestures using only an acceleration sensor. Ike [Ike et al. 2014] proposed a hand-gesture operating system that lets users control a TV remotely by adopting "Tapping" as a click signal; it recognizes a tap by detecting the pulse-like acceleration pattern of the micro collision the tap generates. However, recognizing taps is difficult here, because maintenance work involves many micro collisions generated by touching things. We instead adopt "Tapping & Finger up" (tapping the fingers, then turning up a finger), gestures that rarely occur in maintenance work, and design a gesture system that lets users perform maintenance tasks and gesture operations seamlessly. Our system helps users do maintenance work easily and intuitively without interrupting the work.
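As a rough illustration of the pulse-detection idea borrowed from Ike et al., a local-peak threshold test on an acceleration-magnitude sequence might look like this; the threshold and data are made up, and the actual recognizer must also reject the many maintenance-work collisions the abstract mentions.

```python
def detect_taps(accel, pulse_threshold=2.5):
    """Flag samples that exceed `pulse_threshold` and are a local
    peak relative to both neighbours -- a toy stand-in for the
    pulse-like acceleration pattern a finger tap produces."""
    return [i for i in range(1, len(accel) - 1)
            if accel[i] > pulse_threshold
            and accel[i] > accel[i - 1]
            and accel[i] >= accel[i + 1]]

# Two sharp pulses embedded in low-level motion noise.
signal = [0.1, 0.2, 3.0, 0.3, 0.1, 2.8, 0.2]
print(detect_taps(signal))  # [2, 5]
```

Any bump in the signal above the threshold fires this detector, which is precisely why the authors pair tapping with a second, rarer gesture ("Finger up") instead of relying on taps alone.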
Citations: 0
VISTouch
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787636
Masasuke Yasumoto, Takehiro Teraoka
Various studies have examined the combined use of mobile devices; Ohta's Pinch [Ohta and Tanaka 2012] and Leigh's THAW [Leigh et al. 2014] are representative. However, they have certain limitations: Pinch cannot dynamically track the positional relations of the devices, and THAW cannot recognize the devices' spatial positional relations. We constructed VISTouch so that it requires no particular external sensor and enables multiple mobile devices to dynamically obtain each other's relative positions in real time. We summarize VISTouch in this paper.
Citations: 0
Texture preserving garment transfer
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792622
Fumiya Narita, Shunsuke Saito, Takuya Kato, Tsukasa Fukusato, S. Morishima
Dressing virtual characters is necessary for many applications, yet modeling clothing is a significant bottleneck. The idea of Garment Transfer, transferring a clothing model from one character to another, has therefore been proposed [Brouet et al. 2012]. In recent years, this idea has been extended to apply between characters of various poses and shapes [Narita et al. 2014]. However, that method does not preserve the texture design of the clothing, since it deforms the source clothing model to fit the target body (see Figure 1(a)(c)).
Citations: 2