The need to rapidly create 3D human head models remains an important issue in game and film production. Blanz et al. have developed a morphable model that can semi-automatically reconstruct the facial appearance (3D shape and texture) and simulated hairstyles of "new" faces (faces not yet scanned into an existing database) from photographs taken from the front or other angles [Blanz et al. 2004]. However, this method still requires manual marker specification and approximately 4 minutes of computation time. Moreover, the facial reconstruction produced by this system is not accurate unless a database containing a large variety of facial models is available. We have developed a system that can rapidly generate human head models using only frontal facial range scan data. Where the 3D geometry cannot be measured accurately (as with hair regions), the missing data is complemented using the 3D geometry of the template mesh (TM). Our main contribution is fast mesh completion for head modeling based on "Automatic Marker Setting" and the "Optimized Local Affine Transform (OLAT)". The proposed system generates a head model in approximately 8 seconds. Therefore, with a range scanner that can quickly produce range data, a complete 3D head model can be generated in one minute using our system on a PC.
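As a rough illustration of the template-fitting idea, the sketch below fits a single least-squares affine transform that carries template-mesh markers onto the corresponding range-scan markers and then deforms template vertices in unmeasured regions with it. This is a minimal interpretation, not the authors' OLAT implementation; the marker arrays and the completion step are illustrative assumptions.

```python
# Minimal sketch: fit one 3x4 affine transform A so that dst ~ A @ [src; 1]
# in the least-squares sense, then use it to carry template geometry
# (e.g. hair regions the scanner could not measure) onto the scan.
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points onto dst points."""
    n = src.shape[0]
    src_h = np.hstack([src, np.ones((n, 1))])      # homogeneous (n, 4)
    At, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return At.T                                    # (3, 4)

def apply_affine(A, pts):
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return pts_h @ A.T

# Hypothetical markers: template positions and their scanned counterparts.
template_markers = np.random.rand(10, 3)
scan_markers = template_markers @ np.diag([1.1, 0.9, 1.0]) + 0.05

A = fit_affine(template_markers, scan_markers)
# Vertices without reliable range data are carried over from the template
# after being deformed by the fitted transform.
completed = apply_affine(A, np.random.rand(100, 3))
```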
{"title":"Human head modeling based on fast-automatic mesh completion","authors":"Akinobu Maejima, S. Morishima","doi":"10.1145/1666778.1666831","DOIUrl":"https://doi.org/10.1145/1666778.1666831","url":null,"abstract":"The need to rapidly create 3D human head models is still an important issue in game and film production. Blanz et al have developed a morphable model which can semi-automatically reconstruct the facial appearance (3D shape and texture) and simulated hairstyles of \"new\" faces (faces not yet scanned into an existing database) using photographs taken from the front or other angles [Blanz et al. 2004]. However, this method still requires manual marker specification and approximately 4 minutes of computational time. Moreover, the facial reconstruction produced by this system is not accurate unless a database containing a large variety of facial models is available. We have developed a system that can rapidly generate human head models using only frontal facial range scan data. Where it is impossible to measure the 3D geometry accurately (as with hair regions) the missing data is complemented using the 3D geometry of the template mesh (TM). Our main contribution is to achieve the fast mesh completion for the head modeling based on the \"Automatic Marker Setting\" and the \"Optimized Local Affine Transform (OLAT)\". The proposed system generates a head model in approximately 8 seconds. Therefore, if users utilize a range scanner which can quickly produce range data, it is possible to generate a complete 3D head model in one minute using our system on a PC.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"193 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131322640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In virtual environments, selection is typically accomplished by moving a cursor over a virtual item or object and issuing a selection command. In the context of hand tracking, cursor movement is controlled by some mapping from the hand pose to the virtual cursor position, allowing the cursor to reach any place in the virtual working space. If the virtual working space is bounded, a linear mapping can be used; this is called proportional control.
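A minimal sketch of the proportional control the abstract describes: a linear mapping from the tracked hand position, inside a bounded physical working volume, to the virtual cursor position. The bounds and coordinate conventions are illustrative assumptions.

```python
# Proportional (position-to-position) control: normalize the hand position
# within its tracked bounds and scale it linearly into the virtual space.
import numpy as np

def proportional_cursor(hand_pos, hand_min, hand_max, virt_min, virt_max):
    """Linearly map a hand position in [hand_min, hand_max] into the
    bounded virtual working space [virt_min, virt_max]."""
    t = (hand_pos - hand_min) / (hand_max - hand_min)   # normalize to [0, 1]
    t = np.clip(t, 0.0, 1.0)                            # stay inside bounds
    return virt_min + t * (virt_max - virt_min)

hand = np.array([0.12, 0.30, 0.45])                     # metres, tracked
cursor = proportional_cursor(hand,
                             hand_min=np.array([0.0, 0.0, 0.0]),
                             hand_max=np.array([0.5, 0.5, 0.5]),
                             virt_min=np.array([0.0, 0.0, 0.0]),
                             virt_max=np.array([1.0, 1.0, 1.0]))
```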
{"title":"Hybrid cursor control for precise and fast positioning without clutching","authors":"M. Schlattmann, R. Klein","doi":"10.1145/1667146.1667161","DOIUrl":"https://doi.org/10.1145/1667146.1667161","url":null,"abstract":"In virtual environments, selection is typically solved by moving a cursor above a virtual item/object and issuing a selection command. In the context of hand-tracking, the cursor movement is controlled by a certain mapping of the hand pose to the virtual cursor position, allowing the cursor to reach any place in the virtual working space. If the virtual working space is bounded, a linear mapping can be used. This is called a proportional control.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128848209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Look at yourself in our mirror, and you might see a paper fox behind you. Strange hands might open your stomach, or you could find a cat asleep in your bag.
{"title":"Happy wear","authors":"Camille Scherrer, Julien Pilet","doi":"10.1145/1665137.1665170","DOIUrl":"https://doi.org/10.1145/1665137.1665170","url":null,"abstract":"Look at yourself in our mirror, and you might see a paper fox behind you. Strange hands might open your stomach, or you could find a cat asleep in your bag.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126198672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yongwan Kim, Ungyeon Yang, Dongsik Jo, Gun A. Lee, J. Choi, Jinah Park
Recent progress in computer graphics and interaction technologies has brought virtual training to many applications. Virtual training is very effective for dangerous or costly work; a representative example is welding training in the automobile, shipbuilding, and construction-equipment industries. Welding is defined as a joining process that produces coalescence of metallic materials by heating them. Key factors for effective welding training are realistic welding modeling and a training method that responds to users' torch motions. Several welding training systems, such as CS WAVE, ARC+ of 123Certification, and SimWelder of VRSim, support either only single-pass or inaccurate multi-pass simulation, since the multi-pass welding process entails considerable complexity or enormous bead DB sets. In addition, these welding simulators use only graphical metaphors to teach welding motions. However, welding training using graphical metaphors is insufficient for training precise welding motions, because users cannot fully perceive graphical guide information in 3D space, even in a stereoscopic environment.
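As background for what a haptic guide adds beyond graphical metaphors, the sketch below shows one common way such guides are realized: a spring-damper force that pulls the trainee's torch tip back toward a reference weld path. This is a generic illustration under assumed gains and path data, not the authors' system.

```python
# Spring-damper (PD) guidance force toward the nearest point on a
# reference weld path; a haptic device would render this force each frame.
import numpy as np

K_SPRING = 200.0   # N/m, stiffness toward the reference path (assumed)
K_DAMP = 5.0       # N*s/m, damping on tool velocity (assumed)

def guide_force(tip_pos, tip_vel, path_points):
    """Force pulling the torch tip toward the closest reference point."""
    d = np.linalg.norm(path_points - tip_pos, axis=1)
    target = path_points[np.argmin(d)]          # nearest point on the path
    return K_SPRING * (target - tip_pos) - K_DAMP * tip_vel

# Straight reference bead along x, sampled every millimetre (toy data).
path = np.stack([np.linspace(0.0, 0.2, 201),
                 np.zeros(201), np.zeros(201)], axis=1)
force = guide_force(np.array([0.05, 0.004, -0.002]), np.zeros(3), path)
```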
{"title":"Efficient multi-pass welding training with haptic guide","authors":"Yongwan Kim, Ungyeon Yang, Dongsik Jo, Gun A. Lee, J. Choi, Jinah Park","doi":"10.1145/1666778.1666810","DOIUrl":"https://doi.org/10.1145/1666778.1666810","url":null,"abstract":"Recent progress in computer graphics and interaction technologies has brought virtual training in many applications. Virtual training is very effective at dangerous or costly works. A represetative example is a welding training in automobile, shipbuilding, and construction equipment. Welding is define as a joining process that produces coalescence of metallic materials by heating them. Key factors for effective welding training are realistic welding modeling and trainig method with respect to users' torch motions. Several weld training systems, such as CS WAVE, ARC+ of 123Certification, and SimWelder of VRSim, support either only single-pass or inaccurate multi-pass simulation, since multi-pass welding process requires complicate complexity or enormous bead DB sets. In addition, these welding simulators utilize only some graphical metaphors to teach welding motions. However, welding training using graphical metaphors is still insufficient for training precise welding motions, because users can not fully perceive graphical guide information in 3D space under even stereoscopic environment.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125920086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High dynamic range (HDR) imaging provides more physically accurate measurements of pixel intensities, but displaying them may require tone mapping, since the dynamic range of the image and that of the display device can differ. Most tone-mapping operators (TMOs) focus on luminance compression, ignoring chromatic attributes. The human visual system (HVS), however, alters color perception according to the level of luminosity. Under photopic conditions color perception is accurate, and as conditions shift toward scotopic, color perception decreases. Mesopic vision is the range in between, where colors are perceived but in a distorted way: the response to red intensities fades faster, producing a blue shift known as the Purkinje effect.
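The sketch below illustrates the mesopic behaviour described above, not the paper's operator: as luminance falls from photopic toward scotopic levels, the output colour is blended from the full cone response toward a desaturated, blue-shifted rod response. The thresholds and the blue-shift target colour are illustrative assumptions.

```python
# Blend linear-RGB toward a blue-shifted gray as luminance enters the
# mesopic range, approximating the Purkinje effect.
import numpy as np

PHOTOPIC = 10.0    # cd/m^2, above this colours are left untouched (assumed)
SCOTOPIC = 0.01    # cd/m^2, below this vision is rod-only (assumed)
BLUE_SHIFT = np.array([0.25, 0.35, 1.0])   # assumed rod-response tint

def mesopic(rgb, luminance):
    """Blend toward the rod response; k is 1 at photopic, 0 at scotopic."""
    k = np.log10(luminance / SCOTOPIC) / np.log10(PHOTOPIC / SCOTOPIC)
    k = np.clip(k, 0.0, 1.0)                # log-linear in between
    rod = luminance * BLUE_SHIFT            # desaturated, blue-shifted
    return k * rgb + (1.0 - k) * rod

print(mesopic(np.array([0.8, 0.4, 0.2]), luminance=0.5))  # mesopic level
```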
{"title":"A tone reproduction operator accounting for mesopic vision","authors":"M. Mikamo, M. Slomp, Toru Tamaki, K. Kaneda","doi":"10.1145/1666778.1666819","DOIUrl":"https://doi.org/10.1145/1666778.1666819","url":null,"abstract":"High dynamic range (HDR) imaging provides more physically accurate measurements of pixel intensities, but displaying them may require tone mapping as the dynamic range between image and display device can differ. Most tone-mapping operators (TMO) focus on luminance compression ignoring chromatic assets. The human visual system (HVS), however, alters color perception according to the level of luminosity. At photopic conditions color perception is accurate and as conditions shift to scotopic, color perception decreases. Mesopic vision is a range in between where colors are perceived but in a distorted way: red intensities' responses fade faster producing a blue-shift effect known as Purkinje effect.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114103485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This research presents a novel photo-taking system that can interact with people. The goal is to make the system act like a human photographer. The system recognizes when people wave their hands, moves toward them, and takes pictures with designated compositions and user-chosen tastes. The user can also adjust the composition arbitrarily, according to personal preference, by looking through the screen attached to the system, and can then select the desired picture from the resulting shots.
{"title":"A smart agent for taking pictures","authors":"Hyunsang Ahn, Manjai Lee, I. Jeong, Jihwan Park","doi":"10.1145/1666778.1666800","DOIUrl":"https://doi.org/10.1145/1666778.1666800","url":null,"abstract":"This research suggests a novel photo taking system that can interact with people. The goal is to make a system act like a human photographer. This system can recognize when people wave their hands, moves toward them, and takes pictures with designated compositions and user-chosen tastes. For the image composition, the user can also adjust the composition arbitrary depends on personal choice by looking through the screen attached to the system. For the resulting shot, user can select the picture he wants.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121020787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the progress of computer technology, interaction not only breaks down the separation between the audience and the artwork but also reveals a new digital aesthetic. The proposed interactive work uses multiple webcams to capture multi-view images of the user and generates sounds by synthesizing sonic tones according to the dynamic moments of the images. The main concept is to use the interactive installation to let users experience and feel the flow of time by catching sight of vision in the interstices of time, as if reproducing Nude Descending a Staircase, No. 2 by the Dada artist Marcel Duchamp. The work flattens people's dynamic movements into a brief and condensed audio-visual field. In addition, users can create a composite memory of themselves with the sound and video.
{"title":"Interactive work for feeling time by compositing multi-vision and generating sounds","authors":"Yi-Hsiu Chen, W. Chou","doi":"10.1145/1666778.1666781","DOIUrl":"https://doi.org/10.1145/1666778.1666781","url":null,"abstract":"With the progress of computer technology, interaction does not only break the relationship between the audience and the art work but also reveals a new digital aesthetic view. The proposed, interactive work uses multi-webcam to capture the multi-view images of the user and generate sounds by synthesizing sonic tones according to the dynamic moments of images. The main concept is to utilize the interactive installation to allow users to experience and feel the flowing of time by catching sight of vision in the interstices of time, as if reproducing Nude Descending a Staircase, No. 2 drawn by Dada artist Marcel Duchamp. The work flattens people's dynamic movements into a brief and condensed audio-visual field. Besides, the users can create a composite memory of themselves with the sound and video.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116551458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the current widespread use of digital cameras, the process of selecting and maintaining personal photos is becoming an onerous task. To our knowledge, there has been little research on photo evaluation based on computational esthetics. Photographers around the world have established some general rules for taking good photos. Building upon artistic theories and human visual perception is difficult, since the results tend to be subjective. Although automatically ranking award-winning professional photos may not be a sensible pursuit, such an approach may be reasonable for photos taken by amateurs. In the next section, we introduce rules for such a system.
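One widely known example of such a rule is the rule of thirds. The sketch below scores how close a detected subject sits to one of the four intersections of the third lines; the subject position and the falloff constant are illustrative assumptions, not the paper's actual scoring function.

```python
# Rule-of-thirds score: 1.0 when the subject lies exactly on a third-line
# intersection, decaying exponentially with normalized distance.
import math

def rule_of_thirds_score(subject_xy, image_wh, falloff=0.1):
    """Return a score in (0, 1] for subject placement (assumed metric)."""
    w, h = image_wh
    x, y = subject_xy[0] / w, subject_xy[1] / h          # normalize to [0, 1]
    points = [(i / 3, j / 3) for i in (1, 2) for j in (1, 2)]
    d = min(math.hypot(x - px, y - py) for px, py in points)
    return math.exp(-d / falloff)

print(rule_of_thirds_score((640, 360), (1920, 1080)))    # on a power point
```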
{"title":"An esthetics rule-based ranking system for amateur photos","authors":"C. Yeh, Wai-Seng Ng, B. Barsky, M. Ouhyoung","doi":"10.1145/1667146.1667177","DOIUrl":"https://doi.org/10.1145/1667146.1667177","url":null,"abstract":"With the current widespread use of digital cameras, the process of selecting and maintaining personal photos is becoming an onerous task. To our knowledge, there has been little research on photo evaluation based on computational esthetics. Photographers around the world have established some general rules for taking good photos. Building upon artistic theories and human visual perception is difficult since the results tend to be subjective. Although automatically ranking award-wining professional photos may not be a sensible pursuit, such an approach may be reasonable for photos taken by amateurs. In the next section, we introduce rules for such a system.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"34 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113975899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Eiji Sugisaki, S. H. Soon, Fumihito Kyota, M. Nakajima
In-between creation in traditional cel animation, based on hand-drawn key frames, is a fundamental element of actual production and plays a symbolic role in the artistic interpretation of a scene. To create impressive in-betweens, however, animators must be skilled at hair animation. In traditional cel animation, hair motion is generally used to express a character's affective change or to show environmental conditions. Despite this usefulness and importance, hair motion is drawn relatively simply, or is not animated at all, because of the lack of skilled animators and the time constraints of cel animation production. To assist this production process, P. Noble and W. Tang [Noble and Tang 2004] and Sugisaki et al. [Sugisaki et al. 2006] introduced ways to create hair motion for cartoon animation; both created hair motion through 3D simulation applied to a prepared 3D character model. In this paper, we introduce an in-between creation method, specialized for hair and based on dynamic simulation, which does not need any 3D character model. Animators can create in-between frames for hair motion by setting a few parameters; our method then automatically selects the best in-between frames based on the frame count specified by the animator. The advantage of our method is that it creates in-between frames for hair motion by applying a simulation model directly to the key frames, even though the key-frame images have no depth. In fact, our method can directly use the hand-drawn key frames drawn by animators in the CACAni (Computer-Assisted Cel Animation) system [CACAni Website].
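One plausible reading of "selecting the best in-between frames for a given frame count" is sketched below: from a densely simulated sequence, pick frames at equal steps of accumulated strand displacement rather than equal time, so that the visible hair motion is distributed evenly. This is an interpretation under assumed data layout, not the paper's algorithm.

```python
# Select n in-between frames from a denser simulated hair sequence,
# spaced evenly by accumulated 2D strand motion.
import numpy as np

def select_inbetweens(frames, n):
    """frames: (T, P, 2) array of 2D strand control points per sim step.
    Returns indices of n frames spaced evenly by accumulated motion."""
    motion = np.linalg.norm(np.diff(frames, axis=0), axis=2).sum(axis=1)
    arc = np.concatenate([[0.0], np.cumsum(motion)])    # accumulated motion
    targets = np.linspace(0.0, arc[-1], n + 2)[1:-1]    # interior samples
    return [int(np.argmin(np.abs(arc - t))) for t in targets]

sim = np.cumsum(np.random.rand(120, 8, 2) * 0.01, axis=0)  # toy simulation
print(select_inbetweens(sim, n=5))
```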
{"title":"Simulation-based in-between creation for CACAni system","authors":"Eiji Sugisaki, S. H. Soon, Fumihito Kyota, M. Nakajima","doi":"10.1145/1667146.1667156","DOIUrl":"https://doi.org/10.1145/1667146.1667156","url":null,"abstract":"In-between creation in traditional cel animation based on the hand-drawn key-frames is a fundamental element for the actual production and plays a symbolic role in an artistic interpretation of the scene. To create impressive in-betweens, however, animators are required to be skilled for hair animation creation. In the traditional cel animation, hair motions are generally used to express a character's affective change or showing environment condition. Despite this usability and importance, the hair motion is drawn relatively simply or is not animated at all because of the lack of skilled animators and time constraints in cel animation production. To assist this production process, P. Noble and W. Tang [Noble and Tang. 2004], and Sugisaki et al. [Sugisaki et al. 2006] introduced certain ways to create hair motion for cartoon animations. Both of them created the hair motion based on 3D simulation that is applied to the prepared 3D character model. In this paper, we introduce an in-between creation method, specialized for hair based on dynamic simulation, which does not need any 3D character model. Animators can create in-between frames for hair motion by setting a few parameters, and then our method automatically select the best in-between frames based on the specified frame number by animator. The advantage of our method is to create in-between frames for hair motion by applying simulation model to key-frames. Obviously, the key-frame images do not have any depth. In fact, our method can directly utilize the hand-drawn key-frames which are drawn by animators in CACAni (Computer-Assisted Cel Animation) system [CACAni Website].","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128014077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a novel two-handed direct manipulation system that achieves complex volume segmentation of CT/MRI data in real 3D space using a remote controller with an attached motion-tracking cube. At the same time, the segmented data is displayed by direct volume rendering on a programmable GPU. Our system visualizes real-time modification of volume data with complex shading, including transparency control by changing transfer functions, display of arbitrary cross sections, and rendering of multiple materials using a local illumination model.
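The transfer-function step this relies on can be illustrated as follows: raw scalar values are mapped through an editable lookup table to colour and opacity, so changing the table interactively changes which materials are transparent or exposed. The control points below are illustrative assumptions, not the authors' settings.

```python
# Build a 256-entry RGBA transfer function from (value, rgba) control
# points; a volume renderer samples it per voxel during ray marching.
import numpy as np

def make_transfer_function(control_points):
    """Piecewise-linear RGBA lookup table over 8-bit scalar values."""
    xs = [v for v, _ in control_points]
    lut = np.empty((256, 4), dtype=np.float32)
    for c in range(4):
        ys = [rgba[c] for _, rgba in control_points]
        lut[:, c] = np.interp(np.arange(256), xs, ys)
    return lut

# Assumed mapping: air fully transparent, soft tissue reddish and
# translucent, bone opaque white; editing these pairs re-renders the volume.
tf = make_transfer_function([
    (0,   (0.0, 0.0, 0.0, 0.0)),
    (80,  (0.8, 0.3, 0.2, 0.1)),
    (255, (1.0, 1.0, 1.0, 1.0)),
])
voxels = np.random.randint(0, 256, size=(4, 4, 4))       # toy CT block
rgba = tf[voxels]                                        # colour + opacity
```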
{"title":"Direct 3D manipulation for volume segmentation using mixed reality","authors":"Takehiro Tawara, K. Ono","doi":"10.1145/1666778.1666811","DOIUrl":"https://doi.org/10.1145/1666778.1666811","url":null,"abstract":"We propose a novel two-handed direct manipulation system to achieve complex volume segmentation of CT/MRI data in the real 3D space with a remote controller attached a motion tracking cube. At the same time segmented data is displayed by direct volume rendering using a programmable GPU. Our system achieves visualization of real time modification of volume data with complex shadings including transparency control by changing transfer functions, displaying any cross section and rendering multi materials using a local illumination model.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"100 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132708210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}