
Latest publications from the ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia

Human head modeling based on fast-automatic mesh completion
Akinobu Maejima, S. Morishima
The need to rapidly create 3D human head models remains an important issue in game and film production. Blanz et al. have developed a morphable model which can semi-automatically reconstruct the facial appearance (3D shape and texture) and simulated hairstyles of "new" faces (faces not yet scanned into an existing database) from photographs taken from the front or other angles [Blanz et al. 2004]. However, this method still requires manual marker specification and approximately 4 minutes of computation time. Moreover, the facial reconstruction produced by this system is not accurate unless a database containing a large variety of facial models is available. We have developed a system that can rapidly generate human head models using only frontal facial range scan data. Where the 3D geometry cannot be measured accurately (as with hair regions), the missing data is completed using the 3D geometry of a template mesh (TM). Our main contribution is fast mesh completion for head modeling based on "Automatic Marker Setting" and an "Optimized Local Affine Transform (OLAT)". The proposed system generates a head model in approximately 8 seconds. Therefore, if users employ a range scanner that can quickly produce range data, a complete 3D head model can be generated in one minute with our system on a PC.
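The abstract does not detail the OLAT formulation, but the general idea of template-based completion can be sketched: fit an affine transform per local region from template markers to the corresponding scanned markers, then transplant the template geometry (for example the hair region) through that transform wherever the scan has no data. The sketch below is a minimal illustration under these assumptions; the function names, marker sets, and single-region treatment are hypothetical, not the authors' implementation.

```python
import numpy as np

def fit_local_affine(template_pts, scan_pts):
    """Least-squares affine transform mapping template marker positions
    to the corresponding scanned marker positions (one local region)."""
    # Homogeneous coordinates: [x y z 1] @ M approximates the scan point.
    P = np.hstack([template_pts, np.ones((len(template_pts), 1))])  # (n, 4)
    M, *_ = np.linalg.lstsq(P, scan_pts, rcond=None)                # (4, 3)
    return M

def complete_region(template_vertices, M):
    """Transplant template geometry (e.g. a hair patch) into the scan
    by applying the locally fitted affine transform."""
    V = np.hstack([template_vertices, np.ones((len(template_vertices), 1))])
    return V @ M

# Illustrative usage with hypothetical marker sets for one local region.
template_markers = np.random.rand(10, 3)
scan_markers = template_markers @ np.diag([1.05, 0.98, 1.02]) + 0.01
M = fit_local_affine(template_markers, scan_markers)
hair_patch = complete_region(np.random.rand(200, 3), M)
```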
{"title":"Human head modeling based on fast-automatic mesh completion","authors":"Akinobu Maejima, S. Morishima","doi":"10.1145/1666778.1666831","DOIUrl":"https://doi.org/10.1145/1666778.1666831","url":null,"abstract":"The need to rapidly create 3D human head models is still an important issue in game and film production. Blanz et al have developed a morphable model which can semi-automatically reconstruct the facial appearance (3D shape and texture) and simulated hairstyles of \"new\" faces (faces not yet scanned into an existing database) using photographs taken from the front or other angles [Blanz et al. 2004]. However, this method still requires manual marker specification and approximately 4 minutes of computational time. Moreover, the facial reconstruction produced by this system is not accurate unless a database containing a large variety of facial models is available. We have developed a system that can rapidly generate human head models using only frontal facial range scan data. Where it is impossible to measure the 3D geometry accurately (as with hair regions) the missing data is complemented using the 3D geometry of the template mesh (TM). Our main contribution is to achieve the fast mesh completion for the head modeling based on the \"Automatic Marker Setting\" and the \"Optimized Local Affine Transform (OLAT)\". The proposed system generates a head model in approximately 8 seconds. Therefore, if users utilize a range scanner which can quickly produce range data, it is possible to generate a complete 3D head model in one minute using our system on a PC.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"193 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131322640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Hybrid cursor control for precise and fast positioning without clutching
M. Schlattmann, R. Klein
In virtual environments, selection is typically performed by moving a cursor over a virtual item/object and issuing a selection command. With hand tracking, the cursor movement is controlled by a mapping from the hand pose to the virtual cursor position, allowing the cursor to reach any place in the virtual working space. If the virtual working space is bounded, a linear mapping can be used; this is called proportional control.
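As a minimal illustration of the proportional control described above, the sketch below maps a bounded hand workspace linearly onto screen coordinates. The workspace bounds and screen size are assumed values; the paper's hybrid scheme presumably layers additional behaviour on top of this basic mapping.

```python
import numpy as np

# Hypothetical bounds of the tracked hand workspace (metres) and display size (pixels).
HAND_MIN, HAND_MAX = np.array([-0.2, -0.15]), np.array([0.2, 0.15])
SCREEN = np.array([1920, 1080])

def proportional_cursor(hand_pos):
    """Linear (proportional) mapping: each hand position corresponds to
    exactly one cursor position, so no clutching is needed."""
    u = (np.asarray(hand_pos) - HAND_MIN) / (HAND_MAX - HAND_MIN)  # normalise to [0, 1]
    u = np.clip(u, 0.0, 1.0)                                       # keep the cursor on screen
    return u * SCREEN

print(proportional_cursor([0.0, 0.0]))  # centre of the workspace -> centre of the screen
```

Because the mapping is fixed and absolute, precision is limited by tracking noise and workspace size, which is presumably what motivates combining it with a second, finer-grained control mode.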
{"title":"Hybrid cursor control for precise and fast positioning without clutching","authors":"M. Schlattmann, R. Klein","doi":"10.1145/1667146.1667161","DOIUrl":"https://doi.org/10.1145/1667146.1667161","url":null,"abstract":"In virtual environments, selection is typically solved by moving a cursor above a virtual item/object and issuing a selection command. In the context of hand-tracking, the cursor movement is controlled by a certain mapping of the hand pose to the virtual cursor position, allowing the cursor to reach any place in the virtual working space. If the virtual working space is bounded, a linear mapping can be used. This is called a proportional control.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128848209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Happy wear
Camille Scherrer, Julien Pilet
Look at yourself in our mirror, and you might see a paper fox behind you. Strange hands might open your stomach, or you could find a cat asleep in your bag.
{"title":"Happy wear","authors":"Camille Scherrer, Julien Pilet","doi":"10.1145/1665137.1665170","DOIUrl":"https://doi.org/10.1145/1665137.1665170","url":null,"abstract":"Look at yourself in our mirror, and you might see a paper fox behind you. Strange hands might open your stomach, or you could find a cat asleep in your bag.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126198672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Efficient multi-pass welding training with haptic guide
Yongwan Kim, Ungyeon Yang, Dongsik Jo, Gun A. Lee, J. Choi, Jinah Park
Recent progress in computer graphics and interaction technologies has brought virtual training to many applications. Virtual training is very effective for dangerous or costly work; a representative example is welding training in the automobile, shipbuilding, and construction equipment industries. Welding is defined as a joining process that produces coalescence of metallic materials by heating them. Key factors for effective welding training are realistic welding modeling and a training method that accounts for the user's torch motions. Several weld training systems, such as CS WAVE, ARC+ by 123Certification, and SimWelder by VRSim, support only single-pass or inaccurate multi-pass simulation, since multi-pass welding involves considerable complexity or enormous bead DB sets. In addition, these welding simulators rely only on graphical metaphors to teach welding motions. However, training with graphical metaphors alone is insufficient for learning precise welding motions, because users cannot fully perceive graphical guide information in 3D space, even in a stereoscopic environment.
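The abstract motivates a haptic guide without specifying its force model. A common, generic way to render such a guide is a spring-damper force that pulls the torch tip toward the taught weld path; the sketch below shows only that generic model, and the stiffness, damping, and straight reference pass are assumptions, not the authors' method.

```python
import numpy as np

K_SPRING = 300.0   # N/m, assumed guide stiffness
K_DAMP = 5.0       # N*s/m, assumed damping

def guide_force(torch_pos, torch_vel, path_points):
    """Spring-damper force pulling the torch tip toward the closest
    point on the reference weld path (a generic haptic-guide model)."""
    path = np.asarray(path_points)
    d = np.linalg.norm(path - torch_pos, axis=1)
    target = path[np.argmin(d)]                # closest point on the taught path
    return K_SPRING * (target - torch_pos) - K_DAMP * np.asarray(torch_vel)

# Illustrative call with a straight, single reference pass.
path = np.stack([np.linspace(0.0, 0.3, 50), np.zeros(50), np.zeros(50)], axis=1)
f = guide_force(np.array([0.1, 0.01, 0.0]), np.zeros(3), path)
```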
{"title":"Efficient multi-pass welding training with haptic guide","authors":"Yongwan Kim, Ungyeon Yang, Dongsik Jo, Gun A. Lee, J. Choi, Jinah Park","doi":"10.1145/1666778.1666810","DOIUrl":"https://doi.org/10.1145/1666778.1666810","url":null,"abstract":"Recent progress in computer graphics and interaction technologies has brought virtual training in many applications. Virtual training is very effective at dangerous or costly works. A represetative example is a welding training in automobile, shipbuilding, and construction equipment. Welding is define as a joining process that produces coalescence of metallic materials by heating them. Key factors for effective welding training are realistic welding modeling and trainig method with respect to users' torch motions. Several weld training systems, such as CS WAVE, ARC+ of 123Certification, and SimWelder of VRSim, support either only single-pass or inaccurate multi-pass simulation, since multi-pass welding process requires complicate complexity or enormous bead DB sets. In addition, these welding simulators utilize only some graphical metaphors to teach welding motions. However, welding training using graphical metaphors is still insufficient for training precise welding motions, because users can not fully perceive graphical guide information in 3D space under even stereoscopic environment.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125920086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
A tone reproduction operator accounting for mesopic vision
M. Mikamo, M. Slomp, Toru Tamaki, K. Kaneda
High dynamic range (HDR) imaging provides more physically accurate measurements of pixel intensities, but displaying them may require tone mapping, as the dynamic ranges of the image and the display device can differ. Most tone-mapping operators (TMOs) focus on luminance compression and ignore chromatic aspects. The human visual system (HVS), however, alters color perception according to the level of luminosity. Under photopic conditions color perception is accurate, and as conditions shift toward scotopic, color perception degrades. Mesopic vision is the range in between, where colors are perceived but in a distorted way: the response to red intensities fades faster, producing a blue-shift effect known as the Purkinje effect.
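A mesopic operator therefore has to blend cone- and rod-dominated responses per pixel. The sketch below shows one illustrative way to do this: the blend weight follows log luminance, and the rod response is biased toward blue to mimic the Purkinje shift. The weights and luminance range are assumptions for illustration, not the operator proposed in the paper.

```python
import numpy as np

def mesopic_blend(rgb, photopic_range=(0.01, 3.0)):
    """Blend photopic (cone) and scotopic (rod) responses per pixel.
    rgb: float array (..., 3) of linear HDR values (assumed cd/m^2)."""
    rgb = np.asarray(rgb, dtype=float)
    # Photopic luminance (Rec. 709 weights) and a rod response that is
    # deliberately biased toward blue to mimic the Purkinje shift.
    y_photopic = rgb @ np.array([0.2126, 0.7152, 0.0722])
    y_scotopic = rgb @ np.array([0.05, 0.35, 0.60])   # assumed rod weights

    lo, hi = photopic_range
    # 0 -> fully scotopic, 1 -> fully photopic, interpolated in log luminance.
    m = np.clip((np.log10(y_photopic + 1e-6) - np.log10(lo)) /
                (np.log10(hi) - np.log10(lo)), 0.0, 1.0)[..., None]

    # Desaturate toward a bluish grey as vision becomes scotopic.
    blue_grey = y_scotopic[..., None] * np.array([0.6, 0.7, 1.0])
    return m * rgb + (1.0 - m) * blue_grey

out = mesopic_blend(np.random.rand(4, 4, 3) * 2.0)  # illustrative call
```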
{"title":"A tone reproduction operator accounting for mesopic vision","authors":"M. Mikamo, M. Slomp, Toru Tamaki, K. Kaneda","doi":"10.1145/1666778.1666819","DOIUrl":"https://doi.org/10.1145/1666778.1666819","url":null,"abstract":"High dynamic range (HDR) imaging provides more physically accurate measurements of pixel intensities, but displaying them may require tone mapping as the dynamic range between image and display device can differ. Most tone-mapping operators (TMO) focus on luminance compression ignoring chromatic assets. The human visual system (HVS), however, alters color perception according to the level of luminosity. At photopic conditions color perception is accurate and as conditions shift to scotopic, color perception decreases. Mesopic vision is a range in between where colors are perceived but in a distorted way: red intensities' responses fade faster producing a blue-shift effect known as Purkinje effect.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114103485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
A smart agent for taking pictures
Hyunsang Ahn, Manjai Lee, I. Jeong, Jihwan Park
This research presents a novel photo-taking system that can interact with people. The goal is to make the system act like a human photographer. The system recognizes when people wave their hands, moves toward them, and takes pictures with designated compositions and user-chosen tastes. The user can also adjust the composition arbitrarily according to personal preference by looking through the screen attached to the system, and can then select the resulting picture they want.
{"title":"A smart agent for taking pictures","authors":"Hyunsang Ahn, Manjai Lee, I. Jeong, Jihwan Park","doi":"10.1145/1666778.1666800","DOIUrl":"https://doi.org/10.1145/1666778.1666800","url":null,"abstract":"This research suggests a novel photo taking system that can interact with people. The goal is to make a system act like a human photographer. This system can recognize when people wave their hands, moves toward them, and takes pictures with designated compositions and user-chosen tastes. For the image composition, the user can also adjust the composition arbitrary depends on personal choice by looking through the screen attached to the system. For the resulting shot, user can select the picture he wants.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121020787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Interactive work for feeling time by compositing multi-vision and generating sounds
Yi-Hsiu Chen, W. Chou
With the progress of computer technology, interaction not only breaks the conventional relationship between the audience and the artwork but also reveals a new digital aesthetic view. The proposed interactive work uses multiple webcams to capture multi-view images of the user and generates sounds by synthesizing sonic tones according to the dynamic moments of the images. The main concept is to use the interactive installation to let users experience and feel the flow of time by catching glimpses of vision in the interstices of time, as if reproducing Nude Descending a Staircase, No. 2 by the Dada artist Marcel Duchamp. The work flattens people's dynamic movements into a brief and condensed audio-visual field. In addition, users can create a composite memory of themselves with the sound and video.
{"title":"Interactive work for feeling time by compositing multi-vision and generating sounds","authors":"Yi-Hsiu Chen, W. Chou","doi":"10.1145/1666778.1666781","DOIUrl":"https://doi.org/10.1145/1666778.1666781","url":null,"abstract":"With the progress of computer technology, interaction does not only break the relationship between the audience and the art work but also reveals a new digital aesthetic view. The proposed, interactive work uses multi-webcam to capture the multi-view images of the user and generate sounds by synthesizing sonic tones according to the dynamic moments of images. The main concept is to utilize the interactive installation to allow users to experience and feel the flowing of time by catching sight of vision in the interstices of time, as if reproducing Nude Descending a Staircase, No. 2 drawn by Dada artist Marcel Duchamp. The work flattens people's dynamic movements into a brief and condensed audio-visual field. Besides, the users can create a composite memory of themselves with the sound and video.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116551458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An esthetics rule-based ranking system for amateur photos
C. Yeh, Wai-Seng Ng, B. Barsky, M. Ouhyoung
With the current widespread use of digital cameras, the process of selecting and maintaining personal photos is becoming an onerous task. To our knowledge, there has been little research on photo evaluation based on computational esthetics. Photographers around the world have established some general rules for taking good photos. Building directly on artistic theories and human visual perception is difficult, since the results tend to be subjective. Although automatically ranking award-winning professional photos may not be a sensible pursuit, such an approach may be reasonable for photos taken by amateurs. In the next section, we introduce the rules for such a system.
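One general rule such a system can encode is the rule of thirds. As an illustration of a rule-based score (not the paper's actual rule set), the sketch below rewards photos whose main subject lies near a thirds intersection; how the subject is located and how individual rule scores are weighted into a final ranking are assumptions here.

```python
import numpy as np

def rule_of_thirds_score(subject_xy, image_wh):
    """Score in [0, 1]: 1 when the subject sits exactly on a thirds
    intersection, falling off with distance (a single example rule)."""
    w, h = image_wh
    intersections = np.array([(w * i / 3, h * j / 3)
                              for i in (1, 2) for j in (1, 2)])
    d = np.linalg.norm(intersections - np.asarray(subject_xy), axis=1)
    # Normalise by the image diagonal so the score is resolution independent.
    return float(np.exp(-4.0 * d.min() / np.hypot(w, h)))

# A photo whose subject sits near the upper-left thirds point scores highly.
print(rule_of_thirds_score((640, 360), (1920, 1080)))
```

A complete ranking would combine several such rule scores; which rules are used and how they are weighted is exactly what the paper's following section defines.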
{"title":"An esthetics rule-based ranking system for amateur photos","authors":"C. Yeh, Wai-Seng Ng, B. Barsky, M. Ouhyoung","doi":"10.1145/1667146.1667177","DOIUrl":"https://doi.org/10.1145/1667146.1667177","url":null,"abstract":"With the current widespread use of digital cameras, the process of selecting and maintaining personal photos is becoming an onerous task. To our knowledge, there has been little research on photo evaluation based on computational esthetics. Photographers around the world have established some general rules for taking good photos. Building upon artistic theories and human visual perception is difficult since the results tend to be subjective. Although automatically ranking award-wining professional photos may not be a sensible pursuit, such an approach may be reasonable for photos taken by amateurs. In the next section, we introduce rules for such a system.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"34 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113975899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Simulation-based in-between creation for CACAni system
Eiji Sugisaki, S. H. Soon, Fumihito Kyota, M. Nakajima
In-between creation in traditional cel animation, based on hand-drawn key-frames, is a fundamental element of the actual production and plays a symbolic role in the artistic interpretation of a scene. To create impressive in-betweens, however, animators must be skilled at hair animation. In traditional cel animation, hair motion is generally used to express a character's affective change or to show environmental conditions. Despite this usefulness and importance, hair motion is drawn relatively simply or not animated at all, owing to the lack of skilled animators and the time constraints of cel animation production. To assist this production process, P. Noble and W. Tang [Noble and Tang 2004] and Sugisaki et al. [Sugisaki et al. 2006] introduced ways to create hair motion for cartoon animations. Both created hair motion based on a 3D simulation applied to a prepared 3D character model. In this paper, we introduce an in-between creation method, specialized for hair and based on dynamic simulation, which does not need any 3D character model. Animators can create in-between frames for hair motion by setting a few parameters, and our method then automatically selects the best in-between frames according to the frame count specified by the animator. The advantage of our method is that it creates in-between frames for hair motion by applying a simulation model to the key-frames. Obviously, the key-frame images do not have any depth. In fact, our method can directly utilize the hand-drawn key-frames drawn by animators in the CACAni (Computer-Assisted Cel Animation) system [CACAni Website].
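The abstract leaves the simulation model unspecified; the sketch below only illustrates the overall flow: run a damped-spring simulation of 2D strand points from one hand-drawn key pose toward the next, then sample the animator-specified number of in-between frames from the simulated trajectory. The spring model, parameters, and sampling rule are assumptions, not the CACAni implementation.

```python
import numpy as np

def simulate_inbetweens(key_a, key_b, n_inbetweens, steps=120,
                        stiffness=40.0, damping=6.0, dt=1.0 / 60.0):
    """Damped-spring motion of 2D strand points from pose key_a toward
    key_b; returns n_inbetweens frames sampled evenly from the run."""
    pos = np.asarray(key_a, dtype=float)
    vel = np.zeros_like(pos)
    frames = []
    for _ in range(steps):
        acc = stiffness * (np.asarray(key_b) - pos) - damping * vel
        vel += acc * dt
        pos = pos + vel * dt
        frames.append(pos.copy())
    # Pick evenly spaced frames, excluding the two key poses themselves.
    picks = np.linspace(0, steps - 1, n_inbetweens + 2).astype(int)[1:-1]
    return [frames[i] for i in picks]

# Two hand-drawn key poses of a 5-point hair strand (hypothetical coordinates).
key_a = np.array([[0, 0], [0, -1], [0, -2], [0, -3], [0, -4]], float)
key_b = key_a + np.array([1.5, 0.0])          # strand swung to the right
inbetweens = simulate_inbetweens(key_a, key_b, n_inbetweens=3)
```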
{"title":"Simulation-based in-between creation for CACAni system","authors":"Eiji Sugisaki, S. H. Soon, Fumihito Kyota, M. Nakajima","doi":"10.1145/1667146.1667156","DOIUrl":"https://doi.org/10.1145/1667146.1667156","url":null,"abstract":"In-between creation in traditional cel animation based on the hand-drawn key-frames is a fundamental element for the actual production and plays a symbolic role in an artistic interpretation of the scene. To create impressive in-betweens, however, animators are required to be skilled for hair animation creation. In the traditional cel animation, hair motions are generally used to express a character's affective change or showing environment condition. Despite this usability and importance, the hair motion is drawn relatively simply or is not animated at all because of the lack of skilled animators and time constraints in cel animation production. To assist this production process, P. Noble and W. Tang [Noble and Tang. 2004], and Sugisaki et al. [Sugisaki et al. 2006] introduced certain ways to create hair motion for cartoon animations. Both of them created the hair motion based on 3D simulation that is applied to the prepared 3D character model. In this paper, we introduce an in-between creation method, specialized for hair based on dynamic simulation, which does not need any 3D character model. Animators can create in-between frames for hair motion by setting a few parameters, and then our method automatically select the best in-between frames based on the specified frame number by animator. The advantage of our method is to create in-between frames for hair motion by applying simulation model to key-frames. Obviously, the key-frame images do not have any depth. In fact, our method can directly utilize the hand-drawn key-frames which are drawn by animators in CACAni (Computer-Assisted Cel Animation) system [CACAni Website].","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128014077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Direct 3D manipulation for volume segmentation using mixed reality
Takehiro Tawara, K. Ono
We propose a novel two-handed direct manipulation system for complex volume segmentation of CT/MRI data in real 3D space, using a remote controller with a motion-tracking cube attached. At the same time, the segmented data is displayed by direct volume rendering on a programmable GPU. Our system visualizes real-time modifications of the volume data with complex shading, including transparency control by changing transfer functions, display of arbitrary cross sections, and rendering of multiple materials with a local illumination model.
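The interactive behaviour described above hinges on re-evaluating a transfer function while the volume is rendered. The sketch below is a minimal CPU illustration of that idea for a single ray: a scalar-to-RGBA transfer function (with assumed density thresholds) is composited front to back, whereas the actual system performs this per fragment on a programmable GPU.

```python
import numpy as np

def transfer_function(density):
    """Map a scalar CT/MRI sample to RGBA; editing this mapping is what
    changes material colours and transparency interactively."""
    rgba = np.zeros(4)
    if density > 0.7:        # bone-like: nearly opaque (assumed thresholds)
        rgba[:] = (1.0, 1.0, 0.9, 0.9)
    elif density > 0.3:      # soft tissue: translucent red
        rgba[:] = (0.8, 0.2, 0.2, 0.05)
    return rgba

def composite_ray(samples):
    """Front-to-back compositing of density samples along one ray."""
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        c = transfer_function(s)
        color += (1.0 - alpha) * c[3] * c[:3]
        alpha += (1.0 - alpha) * c[3]
        if alpha > 0.99:     # early ray termination
            break
    return color, alpha

color, alpha = composite_ray(np.random.rand(128))  # illustrative ray
```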
{"title":"Direct 3D manipulation for volume segmentation using mixed reality","authors":"Takehiro Tawara, K. Ono","doi":"10.1145/1666778.1666811","DOIUrl":"https://doi.org/10.1145/1666778.1666811","url":null,"abstract":"We propose a novel two-handed direct manipulation system to achieve complex volume segmentation of CT/MRI data in the real 3D space with a remote controller attached a motion tracking cube. At the same time segmented data is displayed by direct volume rendering using a programmable GPU. Our system achieves visualization of real time modification of volume data with complex shadings including transparency control by changing transfer functions, displaying any cross section and rendering multi materials using a local illumination model.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"100 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132708210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2