
ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia: Latest Publications

Collision detection for high-resolution deformable object using particle-based approach
Thiti Rungcharoenpaisal, P. Kanongchaiyos
The computational cost of collision detection can become excessive when too many non-colliding primitives are tested in particular areas of high-resolution deformable objects. The problem is usually addressed with best-fit bounding volume hierarchies (BVHs), which require considerably more memory and time to update the bounding volumes when the objects deform. Hence, a particle-based collision detection method is enhanced to reduce the number of non-colliding primitive tests by attaching movable particles to the object vertices of each such area. The distances between corresponding particles are computed to select the closest vertices between each pair of objects. The experimental results show that the proposed method requires less collision-checking time than BVHs when used with deformable objects. Moreover, the proposed primitive-checking method can be processed in parallel on the GPU, increasing speed while preserving accuracy when the results are compared to the previous BVH method.
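The abstract gives no implementation details, so the following is only a minimal NumPy sketch of the stated idea: treat the vertices of two deformable objects as particles, compute the distances between the two particle sets, and keep only the closest pairs as candidates for exact primitive tests. The function name, the `radius` threshold, and the brute-force distance computation are illustrative assumptions, not the authors' method.

```python
import numpy as np

def candidate_pairs(verts_a, verts_b, radius):
    """Return index pairs (i, j) whose particle distance is below `radius`.

    verts_a, verts_b: (N, 3) and (M, 3) arrays of particle (vertex) positions.
    Only these candidate pairs would be passed on to exact primitive tests,
    so most non-colliding primitives are skipped early.
    """
    # Pairwise squared distances between the two particle sets.
    diff = verts_a[:, None, :] - verts_b[None, :, :]      # (N, M, 3)
    dist2 = np.einsum('nmk,nmk->nm', diff, diff)          # (N, M)
    i, j = np.nonzero(dist2 < radius * radius)
    return list(zip(i.tolist(), j.tolist()))

# Tiny usage example with two deforming point sets.
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.05, 0.0, 0.0], [5.0, 5.0, 5.0]])
print(candidate_pairs(a, b, radius=0.1))   # [(0, 0)]
```

The paper reports running the primitive checking in parallel on the GPU; the dense CPU formulation here is meant only to show what is being computed.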
{"title":"Collision detection for high-resolution deformable object using particle-based approach","authors":"Thiti Rungcharoenpaisal, P. Kanongchaiyos","doi":"10.1145/1666778.1666801","DOIUrl":"https://doi.org/10.1145/1666778.1666801","url":null,"abstract":"Computational time of collision detection can be exceeded when there are too many checking for non-colliding primitives on some particular areas of high-resolution deformable objects. The problem is usually solved with best-fit bounding volume hierarchies (BVHs) which require much more memory and time for updating the bounding volumes when the objects deform. Hence, a particle-based collision detection method is enhanced to reduce the checking for non-colliding primitives by adding movable particles on the object vertices corresponding to each particular area. The distance of corresponding particles are computed for selecting the closest vertices between each pair of objects. The experimental results show that the proposed method has less colliding checking time than using BVHs when using with the deformable objects. Moreover, the proposed primitive-checking method can be parallel processed on GPU increasing speed performance while accuracy is still preserved when the results are compared to the previous BVH method.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133478853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Entire topography of lunar surface
H. Nakayama
The Japanese spacecraft Kaguya (Selene) was launched on 14 September 2007 by the Japan Aerospace Exploration Agency. Its objectives are "to obtain scientific data of the lunar origin and evolution and to develop the technology for future lunar exploration."
{"title":"Entire topography of lunar surface","authors":"H. Nakayama","doi":"10.1145/1665208.1665243","DOIUrl":"https://doi.org/10.1145/1665208.1665243","url":null,"abstract":"The Japanese spacedraft Kaguya (Selene) was launched on 14 September 2007 by the Japan Aerospace Exploration Agency. Its objectives are \"to obtain scientific data of the lunar origin and evolution and to develop the technology for future lunar exploration.\"","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114338338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CUDA renderer: a programmable graphics pipeline
Fang Liu, Meng-Cheng Huang, Xuehui Liu, E. Wu
Over the past decade, modern GPUs have provided gradually increasing programmability in the vertex, geometry, and fragment shaders. However, many classical problems such as order-independent transparency (OIT) and occlusion culling have not yet been solved efficiently with the traditional graphics pipeline. The main reason is that the behavior of the current pipeline stage is hard to determine because future data are unpredictable. Since the rasterization and blending stages are still largely fixed functions on chip, previous improvements to these problems have always required hardware modifications and thus remain at the theoretical level. In this paper we propose CUDA Renderer, a fully programmable graphics pipeline using the compute unified device architecture (CUDA) [NVIDIA 2008] that can run entirely on current graphics hardware. Our experimental results demonstrate a significant speedup over the traditional graphics pipeline, especially on OIT. We believe many other problems can also benefit from this flexible architecture.
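The paper itself targets CUDA on the GPU; purely to show why programmable blending makes order-independent transparency straightforward, here is a small CPU-side sketch (assumed names, not the authors' code) that collects a per-pixel list of fragments and resolves it by sorting on depth and blending back to front.

```python
from collections import defaultdict

def resolve_oit(fragments, background=(0.0, 0.0, 0.0)):
    """Blend transparent fragments per pixel, independent of submission order.

    fragments: iterable of (pixel, depth, rgb, alpha) where pixel is (x, y).
    Returns a dict pixel -> blended rgb.
    """
    per_pixel = defaultdict(list)
    for pixel, depth, rgb, alpha in fragments:
        per_pixel[pixel].append((depth, rgb, alpha))

    image = {}
    for pixel, frags in per_pixel.items():
        color = list(background)
        # Sort far-to-near, then blend back-to-front (the "over" operator).
        for depth, rgb, alpha in sorted(frags, key=lambda f: -f[0]):
            color = [alpha * c + (1.0 - alpha) * dst for c, dst in zip(rgb, color)]
        image[pixel] = tuple(color)
    return image

# Two overlapping transparent fragments submitted in arbitrary order.
frags = [((0, 0), 0.3, (1.0, 0.0, 0.0), 0.5),   # near, red
         ((0, 0), 0.9, (0.0, 0.0, 1.0), 0.5)]   # far, blue
print(resolve_oit(frags))   # {(0, 0): (0.5, 0.0, 0.25)}
```

A fixed-function blender must process fragments in submission order, which is why this per-pixel sort-and-blend requires the kind of programmable pipeline the paper proposes.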
{"title":"CUDA renderer: a programmable graphics pipeline","authors":"Fang Liu, Meng-Cheng Huang, Xuehui Liu, E. Wu","doi":"10.1145/1667146.1667189","DOIUrl":"https://doi.org/10.1145/1667146.1667189","url":null,"abstract":"Modern GPUs provide gradually increasing programmability on vertex shader, geometry shader and fragment shader in the past decade. However, many classical problems such as order-independent transparency (OIT), occlusion culling have not yet been efficiently solved using the traditional graphics pipeline. The main reason is that the behavior of the current stage of the pipeline is hard to be determined due to the unpredictable future data. Since the rasterization and blending stage are still largely fixed functions on chip, previous improvements on these problems always require hardware modifications thus remain on the theoretical level. In this paper we propose CUDA Renderer, a fully programmable graphics pipeline using compute unified device architecture (CUDA) [NVIDIA 2008] which can completely run on current graphics hardware. Our experimental results have demonstrated significant speedup to traditional graphics pipeline especially on OIT. We believe many other problems can also benefit from this flexible architecture.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117144774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Contour-driven brush stroke synthesis
Xie Ning, Hamid Laga, S. Saito, M. Nakajima
We propose in this paper an interactive sketch-based system for simulating oriental brush strokes on complex shapes. We introduce a contour-driven approach in which the user inputs contours to represent complex shapes, the system automatically estimates the optimal trajectory of the brush, and the strokes are then rendered as an oriental ink painting. Unlike previous work, where the brush trajectory is explicitly specified as input, we automatically estimate this trajectory from the outline of the shape to paint. Existing methods can be classified into: (1) methods that explicitly model a virtual 3D brush and mimic its effect on paper [Wang and Wang 2007], and (2) methods that simulate the rendering effect on a 2D canvas without an explicit 3D brush model [Okabe et al. 2007]. Our approach falls into the second category. Figure 1 shows four results generated by our algorithm.
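The abstract does not publish the trajectory-estimation algorithm, so the sketch below is only a crude stand-in under a strong assumption: the input contour outlines a single elongated stroke and is ordered so that its two halves trace the two sides of that stroke. Pairing the sides and taking midpoints gives a rough brush trajectory, with half the pair distance as a local brush width. All names are hypothetical.

```python
import numpy as np

def trajectory_from_contour(contour):
    """Crude trajectory estimate for an elongated stroke outline.

    contour: (2N, 2) array of points running once around a closed outline,
             ordered so the first N points trace one side and the last N
             points trace the other side back.
    Returns (N, 2) centerline points and (N,) local half-widths.
    """
    pts = np.asarray(contour, dtype=float)
    n = len(pts) // 2
    side_a = pts[:n]
    side_b = pts[n:2 * n][::-1]           # reverse so the sides run in parallel
    center = 0.5 * (side_a + side_b)       # midpoint of each cross-section
    half_width = 0.5 * np.linalg.norm(side_a - side_b, axis=1)
    return center, half_width

# Rectangle-ish outline: bottom edge left-to-right, then top edge right-to-left.
outline = [(x, 0.0) for x in range(5)] + [(x, 2.0) for x in reversed(range(5))]
center, width = trajectory_from_contour(outline)
print(center)   # points along y = 1.0
print(width)    # all 1.0
```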
{"title":"Contour-driven brush stroke synthesis","authors":"Xie Ning, Hamid Laga, S. Saito, M. Nakajima","doi":"10.1145/1667146.1667154","DOIUrl":"https://doi.org/10.1145/1667146.1667154","url":null,"abstract":"We propose in this paper an interactive sketch-based system for simulating oriental brush strokes on complex shapes. We introduce a contour-driven approach where the user inputs contours to represent complex shapes, the system estimates automatically the optimal trajectory of the brush, and then renders them into oriental ink painting. Unlike previous work where the brush trajectory is explicitly specified as input, we automatically estimate this trajectory given the outline of the shape to paint. Existing methods can be classified into: (1) methods that explicitly model a virtual 3D brush and mimic its effect on a paper [Wang and Wang 2007], and (2) methods that simulate the rendering effect on a 2D canvas without an explicit 3D brush model [Okabe et al. 2007]. Our approach falls into the second category. Figure 1 shows four results generated by our algorithm.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115759430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Volumetric texture for fissure in 2012
H. Duiker, Tadao Mihashi
For the feature film 2012, Digital Domain was asked to destroy Los Angeles with an earthquake of a scale humankind has never experienced, entirely in computer graphics. The earthquake levels the city and causes huge canyons and fissures to form before our eyes, revealing underground structures and formations. As part of the task of destroying Los Angeles, we implemented a 2D texture-based 3D volumetric shader for RenderMan to create photorealistic fissure walls.
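The talk does not describe the shader internals; as a rough, hypothetical illustration of the general idea of driving a 3D volume from 2D texture data, the sketch below marches a ray through a unit slab, projects each 3D sample to planar (u, v) coordinates, looks up a 2D density texture, and accumulates opacity. The planar mapping, the names, and the constants are all assumptions, not the production shader.

```python
import numpy as np

def march_slab(texture, origin, direction, steps=64, step_len=0.05, density_scale=4.0):
    """Accumulate opacity along a ray through a unit slab textured by a 2D map.

    texture: (H, W) array of densities in [0, 1].
    origin, direction: 3D ray start and (unit) direction.
    Each 3D sample (x, y, z) is projected to (u, v) = (x, y), i.e. the 2D
    texture is simply extruded along z -- an assumed, simplistic mapping.
    """
    h, w = texture.shape
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    transmittance = 1.0
    for s in range(steps):
        p = origin + (s + 0.5) * step_len * direction
        if not (0.0 <= p[0] < 1.0 and 0.0 <= p[1] < 1.0 and 0.0 <= p[2] < 1.0):
            continue                        # sample outside the unit slab
        u, v = int(p[0] * (w - 1)), int(p[1] * (h - 1))
        density = texture[v, u]
        alpha = 1.0 - np.exp(-density * density_scale * step_len)
        transmittance *= (1.0 - alpha)
    return 1.0 - transmittance              # accumulated opacity

tex = np.full((8, 8), 0.5)
print(march_slab(tex, origin=(0.5, 0.5, 0.0), direction=(0.0, 0.0, 1.0)))
```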
{"title":"Volumetric texture for fissure in 2012","authors":"H. Duiker, Tadao Mihashi","doi":"10.1145/1666778.1666795","DOIUrl":"https://doi.org/10.1145/1666778.1666795","url":null,"abstract":"For a feature film 2012, Digital Domain was asked to destroy Los Angeles by an earthquake, the scale of which human kind has never experienced, entirely in computer graphics. The earthquake levels the city and causes huge canyons and fissures to form in front of our eyes, revealing underground structures and formations. As part of the task of destroying Los Angeles, we implemented a 2d texture-based 3d volumetric shader for RenderMan to create photorealistic fissure walls.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115801961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Example based skinning with progressively optimized support joints
Kentaro Yamanaka, Akane Yano, S. Morishima
Skeleton-Subspace Deformation (SSD), the most popular method for articulated character animation, often causes artifacts. Animators have to edit the mesh each time, which is tedious and time-consuming, so example-based skinning has been proposed. It uses edited meshes as the target poses and generates plausible animation efficiently. In this technique, the character mesh should be deformed to fit the target poses accurately. Mohr et al. [2003] introduced additional joints; they expect animators to embed the skeleton precisely.
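For readers unfamiliar with SSD, the following is a minimal linear-blend-skinning sketch (the baseline that example-based skinning and support joints correct, not the authors' method; all names are illustrative): each vertex is transformed by a weighted blend of joint transforms, which is exactly what produces the familiar collapsing artifacts near joints.

```python
import numpy as np

def skin_vertices(rest_verts, joint_transforms, weights):
    """Skeleton-subspace (linear blend) skinning.

    rest_verts:       (V, 3) rest-pose vertex positions.
    joint_transforms: (J, 4, 4) matrices mapping rest space to posed space
                      (each is posed_joint @ inverse(rest_joint)).
    weights:          (V, J) skinning weights, rows summing to 1.
    """
    v = len(rest_verts)
    homo = np.hstack([rest_verts, np.ones((v, 1))])               # (V, 4)
    # Transform every vertex by every joint, then blend with the weights.
    per_joint = np.einsum('jab,vb->vja', joint_transforms, homo)  # (V, J, 4)
    blended = np.einsum('vj,vja->va', weights, per_joint)         # (V, 4)
    return blended[:, :3]

# One vertex, two joints: identity and a +1 translation in x, blended 50/50.
rest = np.array([[0.0, 1.0, 0.0]])
identity = np.eye(4)
translate = np.eye(4); translate[0, 3] = 1.0
print(skin_vertices(rest, np.stack([identity, translate]), np.array([[0.5, 0.5]])))
# -> [[0.5, 1.0, 0.0]]
```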
{"title":"Example based skinning with progressively optimized support joints","authors":"Kentaro Yamanaka, Akane Yano, S. Morishima","doi":"10.1145/1666778.1666833","DOIUrl":"https://doi.org/10.1145/1666778.1666833","url":null,"abstract":"Skeleton-Subspace Deformation (SSD), which is the most popular method for articulated character animation, often causes some artifacts. Animators have to edit mesh each time, which is seriously tedious and time-consuming. So example based skinning has been proposed. It employs edited mesh as target poses and generates plausible animation efficiently. In this technique, character mesh should be deformed to accurately fit target poses. Mohr et al. [2003] introduced additional joints. They expect animators to embed skeleton precisely.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115425563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Interaction bar
Chia-Hao Yang, Bo-Fan Jheng
In Interaction Bar, each wine cup represents a different character and emotion. Simulated scenes and the interaction surface react to users with unique visuals in each situation. Just like a real barroom crowd, these interactions can build bridges of friendship and encourage conversations, even among people who have never met.
{"title":"Interaction bar","authors":"Chia-Hao Yang, Bo-Fan Jheng","doi":"10.1145/1665137.1665193","DOIUrl":"https://doi.org/10.1145/1665137.1665193","url":null,"abstract":"In Interaction Bar, each wine cup represents a different character and emotion. Simulated scenes and the interaction surface react to users with unique visuals in each situation. Just like a real barroom crowd, these interactions can build bridges of friendship and encourage conversations, even among people who have never met.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125139055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
On the level
Michael Rutter
All Bean Maxwell wants is for the picture on his foyer wall to hang level. With a scrutinizing eye, and an array of tools, he tirelessly pursues this exercise in perfection. But will his dedication to the little details cause him to lose sight of the bigger picture?
{"title":"On the level","authors":"Michael Rutter","doi":"10.1145/1665208.1665257","DOIUrl":"https://doi.org/10.1145/1665208.1665257","url":null,"abstract":"All Bean Maxwell wants is for the picture on his foyer wall to hang level. With a scrutinizing eye, and an array of tools, he tirelessly pursues this exercise in perfection. But will his dedication to the little details cause him to lose sight of the bigger picture?","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127631931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Analysis and understanding of paintings by Ito Jakuchu
Sangtae Kim
This film describes an attempt to use a high-definition digital cinema system to produce content for a next-generation image system. Paintings by Ito Jakuchu were used as prototypes to produce an immersive virtual environment that allows people to enter the paintings. The result reveals possibilities for new collaborative studies among various fields such as art, psychology, and cognitive science, and a larger-than-life display to analyze and understand cultural properties and art works.
{"title":"Analysis and understanding of paintings by Ito Jakuchu","authors":"Sangtae Kim","doi":"10.1145/1665137.1665155","DOIUrl":"https://doi.org/10.1145/1665137.1665155","url":null,"abstract":"This film describes an attempt to use a high-definition digital cinema system to produce content for a next-generation image system. Paintings by Ito Jakuchu were used as prototypes to produce an immersive virtual environment that allows people to enter the paintings. The result reveals possibilities for new collaborative studies among various fields such as art, psychology, and cognitive science, and a larger-than-life display to analyze and understand cultural properties and art works.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122514101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Interactive image composition through draggable objects
Yuichiro Yamaguchi, Takuya Saito, Yosuke Bando, Bing-Yu Chen, T. Nishita
In traditional image composition methods for cutting out a source object from a source image and pasting it onto a target image, users have to segment a foreground object in a target image when they want to partially hide a source object behind it. While recent image editing tools greatly facilitate segmentation operations, it can be tedious to segment each object if users try to place a source object in various positions in a target image before obtaining a satisfying composition. We propose a method which allows users to drag a source object and slip it behind a target object as shown in Fig. 1, so that users can move a source object around without manually segmenting each part of a target image.
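This entry does not detail its algorithm; the sketch below illustrates only the final compositing step, under the assumption that an occlusion mask for the target object is already available (estimating which target pixels should occlude is the contribution the abstract describes and is not reproduced here). All names are hypothetical.

```python
import numpy as np

def drag_composite(target, source_rgba, offset, occluder_mask):
    """Paste `source_rgba` onto `target` at `offset`, behind masked target pixels.

    target:        (H, W, 3) float image.
    source_rgba:   (h, w, 4) float image with alpha in the last channel.
    offset:        (row, col) top-left placement of the source in the target.
    occluder_mask: (H, W) bool array, True where the target object must stay
                   in front of the dragged source.
    """
    out = target.copy()
    h, w = source_rgba.shape[:2]
    r0, c0 = offset
    region = out[r0:r0 + h, c0:c0 + w]
    alpha = source_rgba[..., 3:4]
    pasted = alpha * source_rgba[..., :3] + (1.0 - alpha) * region
    # The occluding target pixels stay on top of the pasted source.
    occ = occluder_mask[r0:r0 + h, c0:c0 + w, None]
    out[r0:r0 + h, c0:c0 + w] = np.where(occ, region, pasted)
    return out

# 4x4 gray target, a 2x2 opaque white source, and one occluding target pixel.
target = np.full((4, 4, 3), 0.5)
source = np.ones((2, 2, 4))
mask = np.zeros((4, 4), dtype=bool); mask[1, 1] = True
result = drag_composite(target, source, offset=(1, 1), occluder_mask=mask)
print(result[1, 1], result[1, 2])   # occluded pixel stays gray; the next one is white
```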
{"title":"Interactive image composition through draggable objects","authors":"Yuichiro Yamaguchi, Takuya Saito, Yosuke Bando, Bing-Yu Chen, T. Nishita","doi":"10.1145/1667146.1667186","DOIUrl":"https://doi.org/10.1145/1667146.1667186","url":null,"abstract":"In traditional image composition methods for cutting out a source object from a source image and pasting it onto a target image, users have to segment a foreground object in a target image when they want to partially hide a source object behind it. While recent image editing tools greatly facilitate segmentation operations, it can be tedious to segment each object if users try to place a source object in various positions in a target image before obtaining a satisfying composition. We propose a method which allows users to drag a source object and slip it behind a target object as shown in Fig. 1, so that users can move a source object around without manually segmenting each part of a target image.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"168 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122266570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0