
SIGGRAPH Asia 2014 Technical Briefs: Latest Publications

ColorFingers: improved multi-touch color picker
Pub Date : 2014-11-24 DOI: 10.1145/2669024.2669033
A. J. G. Ebbinason, B. R. Kanna
ColorFingers is a WYSIWYG, Location Independent Touch (LIT) based color-picking tool aimed at giving users a unique and swift interaction for choosing colors on touch-based devices. It uses the touch interface and the touch information of two fingers to select from almost 16 million colors. The tool is a model for how touch can be interpreted in different ways to achieve performance improvements in HCI. In this paper, we propose the ColorFingers color picker and briefly discuss how it works. We show how it achieves around a 54% reduction in color selection time and a 53% improvement in accuracy compared to existing models. The proposed model emphasizes multi-touch, quick feedback, and location independency.
Citations: 7
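The brief above does not spell out the exact two-finger mapping, but its core idea, combining the positions of two touches to address the full 24-bit color space (256 levels per channel, 256³ ≈ 16.7 million colors), can be sketched as a toy function. The function name and the specific channel assignment here are illustrative assumptions, not the paper's scheme:

```python
def two_finger_color(x1, y1, y2, width=256, height=256):
    """Toy two-finger color mapping: finger 1 selects the red and green
    channels from its (x, y) position, finger 2 selects blue from its
    y position. 256 levels per channel give 256**3 = 16,777,216 colors."""
    r = min(int(x1 / width * 256), 255)
    g = min(int(y1 / height * 256), 255)
    b = min(int(y2 / height * 256), 255)
    return (r, g, b)
```

Because each finger's position is read relative to the screen rather than to a fixed palette widget, a scheme like this is location independent: the gesture works the same anywhere on the display.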
Visualizing building interiors using virtual windows
Pub Date : 2014-11-24 DOI: 10.1145/2669024.2669029
N. Joseph, Brett Achorn, Sean Jenkins, Hank Driskill
The feature film "Big Hero 6" is set in a fictional city with numerous scenes encompassing hundreds of buildings. The objects visible inside the windows, especially during nighttime, play a vital role in portraying the realism of the scene. Unfortunately, it can be expensive to individually model each room in every building. Thus, the production team needed a way to render building interiors with reasonable parallax effects, without adding geometry in an already large scene. This paper describes a novel building interior visualization system using a Virtual Window Shader (Shader) written for a ray-traced global illumination (GI) multi-bounce renderer [Eisenacher et al. 2013]. The Shader efficiently creates an illusion of geometry and light sources inside building windows using only pre-baked textures.
Citations: 1
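The Shader's illusion of geometry behind a window from pre-baked textures resembles the well-known interior-mapping trick: intersect the view ray with the walls of a virtual room behind the window plane and sample a texture at the hit point. A minimal CPU-side sketch of that ray-room intersection, assuming a unit room occupying [0,1]×[0,1]×[−1,0] behind the window plane z = 0 (the room extents and function name are assumptions, not the paper's code):

```python
def interior_hit(origin, direction):
    """Intersect a view ray entering through the window plane z = 0 with
    the walls of a virtual room [0,1] x [0,1] x [-1,0]. Returns the hit
    point on the nearest wall; a real shader would use it to index a
    pre-baked interior texture, giving parallax without any geometry."""
    best_t = float("inf")
    for axis, (lo, hi) in enumerate([(0.0, 1.0), (0.0, 1.0), (-1.0, 0.0)]):
        d = direction[axis]
        if d == 0.0:
            continue  # ray parallel to this wall pair
        wall = hi if d > 0.0 else lo  # wall the ray is heading towards
        t = (wall - origin[axis]) / d
        if 1e-6 < t < best_t:
            best_t = t  # keep the nearest positive intersection
    return tuple(o + best_t * d for o, d in zip(origin, direction))
```

A ray looking straight in hits the back wall; an oblique ray hits a side wall first, which is exactly the parallax cue the production needed.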
Topology-aware reconstruction of thin tubular structures
Pub Date : 2014-11-24 DOI: 10.1145/2669024.2669035
Tobias Martin, Juan Montes, J. Bazin, T. Popa
This paper addresses the 3D reconstruction of thin tubular structures, such as cables or ropes, from a given image sequence. This is a challenging task, mainly because of the structure's self-occlusions and thin features. We present an approach that combines image-processing tools with physics simulation to faithfully reconstruct jumbled and tangled cables in 3D. Our method estimates the topology of the tubular object in the form of a single 1D path and also computes a topology-aware reconstruction of its geometry. We evaluate our method on both synthetic and real datasets and demonstrate that it compares favourably to state-of-the-art methods.
Citations: 11
Unified skinning of rigid and deformable models for anatomical simulations
Pub Date : 2014-11-24 DOI: 10.1145/2669024.2669031
I. Stavness, C. A. Sánchez, J. Lloyd, A. Ho, Johnty Wang, S. Fels, Danny Huang
We propose a novel geometric skinning approach that unifies geometric blending for rigid-body models with embedded surfaces for finite-element models. The resulting skinning method provides flexibility for modelers and animators to select the desired dynamic degrees-of-freedom through a combination of coupled rigid and deformable structures connected to a single skin mesh that is influenced by all dynamic components. The approach is particularly useful for anatomical models that include a mix of hard structures (bones) and soft tissues (muscles, tendons). We demonstrate our skinning method for an upper airway model and create first-of-its-kind simulations of swallowing and speech acoustics that are generated by muscle-driven biomechanical models of the oral anatomy.
Citations: 22
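The brief above describes a single skin mesh influenced by both rigid bones and finite-element nodes. A toy sketch of that idea, assuming a linear blend over both influence types with weights summing to one; the function name, argument layout, and the displacement-based FEM term are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def unified_skin(rest_pos, rigid_weights, rigid_transforms,
                 fem_weights, fem_nodes_rest, fem_nodes_current):
    """Toy unified skinning: a skin vertex is moved partly by linear
    blend skinning over rigid bone transforms (4x4 matrices) and partly
    by following the displacements of finite-element nodes it is
    embedded in. Weights across both groups are assumed to sum to 1."""
    p = np.append(rest_pos, 1.0)  # homogeneous rest position
    out = np.zeros(3)
    for w, T in zip(rigid_weights, rigid_transforms):
        out += w * (T @ p)[:3]  # rigid (bone) contribution, standard LBS
    for w, rest, cur in zip(fem_weights, fem_nodes_rest, fem_nodes_current):
        out += w * (rest_pos + (cur - rest))  # vertex carried by node displacement
    return out
```

With identity bone transforms and unmoved FEM nodes, the vertex stays at its rest position; mixed weights let hard and soft structures pull on the same skin, which is the flexibility the authors highlight for anatomical models.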
Virtual spherical gaussian lights for real-time glossy indirect illumination
Pub Date : 2014-11-24 DOI: 10.1145/2669024.2669025
Yusuke Tokuyoshi
Virtual point light (VPL) [Keller 1997] based global illumination methods are well established for interactive applications, but they have considerable problems such as spiky artifacts and temporal flickering caused by singularities, high-frequency materials, and discontinuous geometries (Fig. 1). This paper proposes an efficient technique to render one-bounce interreflections for all-frequency materials based on virtual spherical lights (VSLs) [Hašan et al. 2009]. VSLs were proposed to suppress spiky artifacts of VPLs. However, this is unsuitable for real-time applications, since it needs expensive Monte-Carlo (MC) integration and k-nearest neighbor density estimation for each VSL. This paper approximates VSLs using spherical Gaussian (SG) lights without singularities, which take all-frequency materials into account. Instead of k-nearest neighbor density estimation, this paper presents a simple SG lights generation technique using mipmap filtering which alleviates temporal flickering for high-frequency geometries and textures (e.g., normal maps) at real-time frame rates. Since SG lights based approximations are inconsistent estimators, this paper additionally discusses a consistent bias reduction technique. Our technique is simple, easy to integrate in existing reflective shadow map (RSM) based implementations, and completely dynamic for one-bounce indirect illumination including caustics.
Citations: 19
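A spherical Gaussian is the standard lobe G(v) = a · exp(λ(v·p − 1)) with axis p, sharpness λ, and amplitude a. Evaluating it is singularity-free, which is the property the brief above exploits to avoid the spiky artifacts of virtual point lights. A minimal evaluation sketch (the function name is an assumption):

```python
import math

def sg_eval(axis, sharpness, amplitude, direction):
    """Evaluate a spherical Gaussian G(v) = a * exp(lambda * (v . p - 1))
    for unit vectors. Unlike a point light, whose contribution blows up
    as the receiver approaches it, an SG light's radiance peaks smoothly
    at the lobe axis and falls off without any singularity."""
    dot = sum(a * d for a, d in zip(axis, direction))
    return amplitude * math.exp(sharpness * (dot - 1.0))
```

At the axis the lobe returns exactly its amplitude; a larger sharpness λ concentrates the lobe, so a mipmap-filtered set of such lobes can represent anything from diffuse to glossy reflection.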
Real time light field reconstruction for sub-pixel based integral imaging display
Pub Date : 2014-11-24 DOI: 10.1145/2669024.2669041
Shaohui Jiao, Wen Wu, Haitao Wang, Mingcai Zhou, Tao Hong, Xun Sun, E. Wu
Integral imaging (II) display provides a promising 3D display technology that lets users see natural 3D color images with stereo and motion parallax. However, such displays often suffer from both insufficient spatial resolution and a lack of real-time content-generation strategies. In this paper, we advance the traditional II display with an efficient sub-pixel based light field reconstruction scheme to achieve 3D imagery with much higher spatial resolution at real-time speeds.
Citations: 3
Depth of field rendering via adaptive recursive filtering
Pub Date : 2014-11-24 DOI: 10.1145/2669024.2669034
Shibiao Xu, Xing Mei, Weiming Dong, Xun Sun, Xukun Shen, Xiaopeng Zhang
We present a new post-processing method for rendering high-quality depth-of-field effects in real time. Our method is based on a recursive filtering process, which adaptively smooths the image frame with local depth and circle-of-confusion information. Unlike previous post-filtering approaches that rely on various convolution kernels, the behavior of our filter is controlled by a weighting function defined between two neighboring pixels. By properly designing this weighting function, our method produces spatially-varying smoothed results, correctly handles the boundaries between in-focus and out-of-focus objects, and avoids rendering artifacts such as intensity leakage and blurring discontinuity. Additionally, our method works on the full frame without resorting to image pyramids. Our algorithm runs efficiently on graphics hardware. We demonstrate the effectiveness of the proposed method on several complex scenes.
Citations: 6
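The brief above hinges on a neighbor-pair weighting function that steers the recursive filter. A toy 1D analogue, assuming the weight gates on depth discontinuities (to prevent intensity leakage across in-focus/out-of-focus boundaries) and grows with the smaller circle of confusion of the pair; all names, constants, and the forward/backward pass structure are illustrative, not the paper's exact scheme:

```python
def recursive_dof_1d(intensity, coc, depth, depth_eps=0.1):
    """Toy 1D adaptive recursive DoF filter: a left-to-right pass followed
    by a right-to-left pass, each pixel blending with its neighbor using
    a weight derived from the pair's circle of confusion and depth."""
    n = len(intensity)

    def weight(i, j):
        # No smoothing across a large depth discontinuity (leakage guard);
        # otherwise smooth in proportion to the smaller CoC of the pair.
        if abs(depth[i] - depth[j]) > depth_eps:
            return 0.0
        return min(coc[i], coc[j], 0.9)

    fwd = list(intensity)
    for i in range(1, n):  # causal (left-to-right) recursion
        w = weight(i, i - 1)
        fwd[i] = (1 - w) * intensity[i] + w * fwd[i - 1]
    out = list(fwd)
    for i in range(n - 2, -1, -1):  # anti-causal (right-to-left) recursion
        w = weight(i, i + 1)
        out[i] = (1 - w) * fwd[i] + w * out[i + 1]
    return out
```

With zero CoC everywhere (fully in focus) the image passes through unchanged, and a depth jump blocks smoothing even where the CoC is large, which is the boundary behavior the abstract emphasizes.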
Reference-based manga colorization by graph correspondence using quadratic programming
Pub Date : 2014-11-24 DOI: 10.1145/2669024.2669037
Kazuhiro Sato, Yusuke Matsui, T. Yamasaki, K. Aizawa
Manga (Japanese comics) are popular all over the world. However, most existing manga are monochrome. If such monochrome manga could be colorized, readers could enjoy richer representations. In this paper, we propose a semiautomatic colorization method for manga. Given a previously colored reference manga image and target monochrome manga images, we propagate the colors of the reference manga to the target by representing the images as graphs and matching those graphs. The proposed method enables coloring of manga images without time-consuming manual colorization. We show results in which colors were correctly transferred to target characters, even those with complex structures.
Citations: 29
SIGGRAPH Asia 2014 Technical Briefs
Pub Date : 1900-01-01 DOI: 10.1145/2669024
Citations: 0