Latest publications: Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia

Efficient animation of water flow on irregular terrains
M. Maes, T. Fujimoto, Norishige Chiba
We present an optimization of the water-column-based height-field approach to water simulation that reduces memory footprint and promotes parallel implementation. The simulation still provides three-dimensional fluid animation suitable for water flowing on irregular terrains, intended for interactive applications. Our approach avoids the creation and storage of redundant virtual pipes between columns of water, and removes output dependency for the parallel implementation. We show a GPU implementation of the proposed method that runs at near-interactive frame rates with rich lighting effects on the water surface, making it efficient for water animation on natural terrains in computer graphics.
DOI: 10.1145/1174429.1174447 · Published: 2006-11-29
Citations: 28
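The column-based height-field scheme described above can be sketched as follows. This is a minimal, illustrative version of the classic water-column model the paper optimizes, not the authors' implementation; the function name, constants, and the explicit update rule are assumptions. Each cell holds a terrain height and a water depth, and water flows toward 4-neighbours with a lower total surface height.

```python
def step_water(b, d, dt=0.05, g=9.8):
    """One explicit update of an n x n water-column height field.
    b[i][j] is terrain height, d[i][j] is water depth; water flows to
    4-neighbours with lower total surface height b + d. All flows are
    computed from the old state d, Jacobi-style."""
    n = len(b)
    new_d = [row[:] for row in d]
    for i in range(n):
        for j in range(n):
            h = b[i][j] + d[i][j]
            lower = []
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < n:
                    diff = h - (b[ni][nj] + d[ni][nj])
                    if diff > 0.0:
                        lower.append((ni, nj, diff))
            total = sum(diff for _, _, diff in lower)
            if total == 0.0:
                continue
            out = min(g * dt * total, d[i][j])       # never drain below zero
            new_d[i][j] -= out
            for ni, nj, diff in lower:
                new_d[ni][nj] += out * diff / total  # split proportionally
    return new_d
```

The total amount of water is conserved by construction: whatever leaves a cell is distributed, in full, among its lower neighbours.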
Densely sampled light probe sequences for spatially variant image based lighting
J. Unger, S. Gustavson, A. Ynnerman
We present a novel technique for capturing spatially and temporally resolved light probe sequences, and using them for rendering. For this purpose we have designed and built a Real Time Light Probe: a catadioptric imaging system that can capture the full dynamic range of the lighting incident at each point in space at video frame rates, while being moved through a scene. The Real Time Light Probe uses a digital imaging system which we have programmed to capture high-quality, photometrically accurate color images with a dynamic range of 10,000,000:1 at 25 frames per second. By tracking the position and orientation of the light probe, it is possible to transform each light probe into a common frame of reference in world coordinates, and map each point in space along the path of motion to a particular frame in the light probe sequence. We demonstrate our technique by rendering synthetic objects illuminated by complex real-world lighting, using both traditional image based lighting methods with temporally varying light probe illumination and an extension to handle spatially varying lighting conditions across large objects.
DOI: 10.1145/1174429.1174487 · Published: 2006-11-29
Citations: 3
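As a rough illustration of how a high-dynamic-range radiance value can be assembled from multiple exposures of the same pixel (the actual Real Time Light Probe uses a custom-programmed imaging system; the linear-response assumption, weighting, and clipping thresholds below are hypothetical):

```python
def fuse_exposures(samples):
    """Recover a relative radiance value at one pixel from several
    (value, exposure_time) samples, discarding clipped values.
    A linear sensor response is assumed; thresholds are illustrative."""
    total, weight = 0.0, 0.0
    for value, t in samples:
        if 0.05 < value < 0.95:   # skip under- and over-exposed samples
            total += value / t    # divide out exposure time
            weight += 1.0
    return total / weight if weight else 0.0
```

A pixel reading 0.5 at a 1-second exposure and 0.25 at a half-second exposure both imply the same radiance, and the fusion averages the two consistent estimates.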
3D distance transform adaptive filtering for smoothing and denoising triangle meshes
M. Fournier, J. Dischler, D. Bechmann
In this paper we compute the distance transform of a 3D triangle mesh. A volumetric voxel representation is defined over the mesh to evaluate the distance transform. Optimizations are described to efficiently manipulate the volumetric data structure that represents the mesh. A new method for adaptive filtering of the distance transform is introduced to smooth and reduce the noise on meshes reconstructed from data acquired with a 3D scanner. A modified version of the Marching Cubes algorithm is presented to correctly reconstruct the final mesh from the filtered distance transform defined on the voxel representation. The new filtering method is feature-preserving and more versatile than previous algorithms described in the literature. Results show that this method outperforms previous ones in terms of an error-metric comparison. Future work to improve the method and its computational performance is discussed.
DOI: 10.1145/1174429.1174497 · Published: 2006-11-29
Citations: 12
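A brute-force sketch of the two voxel-grid steps the abstract describes: evaluating a distance field over a grid, then filtering it. For simplicity the mesh surface is stood in for by sample points, and the filter is a uniform neighbour blend rather than the paper's adaptive, feature-preserving one; all names and parameters here are illustrative.

```python
def distance_field(points, n, size=1.0):
    """Unsigned distance from each voxel centre of an n^3 grid (spanning a
    cube of edge `size`) to the nearest surface sample point."""
    h = size / n
    return [[[min(((i + 0.5) * h - px) ** 2 +
                  ((j + 0.5) * h - py) ** 2 +
                  ((k + 0.5) * h - pz) ** 2
                  for px, py, pz in points) ** 0.5
              for k in range(n)] for j in range(n)] for i in range(n)]

def smooth(grid, strength=0.5):
    """Blend each voxel toward the mean of its 6-neighbours -- a uniform
    stand-in for the paper's adaptive filtering of the distance transform."""
    n = len(grid)
    out = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                nb = []
                for axis in range(3):
                    for step in (-1, 1):
                        idx = [i, j, k]
                        idx[axis] += step
                        if all(0 <= x < n for x in idx):
                            nb.append(grid[idx[0]][idx[1]][idx[2]])
                mean = sum(nb) / len(nb)
                out[i][j][k] = (1 - strength) * grid[i][j][k] + strength * mean
    return out
```

Smoothing the distance field before surface extraction is what lets the final Marching Cubes pass produce a denoised mesh.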
An accelerating splatting algorithm based on multi-texture mapping for volume rendering
Han Xiao, De-Gui Xiao
Texture-mapping hardware has been successfully exploited for volume rendering. In this paper, we efficiently combine the splatting method with 2D texture mapping and propose a footprint-based volume-rendering algorithm accelerated by multi-texture mapping. First, a regular data set is partitioned into texture slices along the primary viewing direction. The segmented data are then projected onto the texture-slice planes. Finally, the texture slices are blended to compose the final image. With our algorithm, a scaled volume data set can be rendered quickly and effectively without noticeably degrading image quality.
DOI: 10.1145/1174429.1174464 · Published: 2006-11-29
Citations: 1
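The final compositing step, blending the projected texture slices back to front, follows the standard "over" operator. A one-pixel sketch (illustrative only; texture hardware performs the equivalent blend per fragment):

```python
def over_blend(layers):
    """Back-to-front 'over' compositing at one pixel.
    `layers` is ordered far-to-near; each entry is an (intensity, alpha)
    pair contributed by one texture slice."""
    out = 0.0
    for colour, alpha in layers:
        out = colour * alpha + out * (1.0 - alpha)
    return out
```

A fully opaque near slice hides everything behind it, while a half-transparent slice mixes equally with the accumulated background, which is exactly the behaviour fixed-function blending (source alpha, one-minus-source-alpha) provides.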
Directional enhancement in texture-based vector field visualization
Francesca Taponecco, T. Urness, V. Interrante
The use of textures provides a rich and diverse set of possibilities for the visualization of flow data. In this paper, we present methods designed to produce oriented and controlled textures that accurately reflect the complex patterns that occur in vector field visualizations. We offer new insights based on the specification and classification of neighborhood models for synthesizing a texture that accurately depicts a vector field. Secondly, we introduce a computationally efficient method of texture mapping streamlines utilizing outlining textures to depict flow orientation.
DOI: 10.1145/1174429.1174463 · Published: 2006-11-29
Citations: 11
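Texture-mapping a streamline first requires tracing it through the field. A minimal forward-Euler tracer with unit-speed stepping (the paper's actual integration scheme is not given in the abstract, so this is an assumed placeholder):

```python
def trace_streamline(field, p, step=0.1, n_steps=50):
    """Trace a streamline through a 2D vector field by forward Euler.
    `field` maps (x, y) -> (u, v), standing in for sampling the flow data.
    Normalizing the velocity keeps sample spacing even along the path,
    which matters when an outlining texture is mapped onto it."""
    path = [p]
    x, y = p
    for _ in range(n_steps):
        u, v = field(x, y)
        mag = (u * u + v * v) ** 0.5
        if mag < 1e-9:          # stop at critical points of the field
            break
        x += step * u / mag
        y += step * v / mag
        path.append((x, y))
    return path
```

Each returned point becomes a texture coordinate sample along the streamline, so the outlining texture stretches uniformly along the flow direction.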
Implicit curve oriented inbetweening for motion animation
Haiyin Xu, Dan Li, Jian Wang
While parametric curves dominate current character motion-animation practice, implicit curves hold great potential for motion animation, since implicit surfaces have already succeeded in applications such as modeling, deformation, and rendering within computer graphics and animation. In this paper, we advocate the use of implicit curves in motion animation and explore them for task specification. A planar implicit curve represents the motion path, while a speed-profile curve describes the motion timing. We then propose an approach and algorithm for motion inbetweening along implicit curves. Based on the motion path and motion speed, a curve-oriented inbetweening technique generates inbetween position sequences in Cartesian space, from which frame inbetweens in parametric space for animation are obtained by inverse kinematics.
DOI: 10.1145/1174429.1174443 · Published: 2006-11-29
Citations: 3
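Generating inbetween positions along an implicit path requires keeping sample points on the zero set f(x, y) = 0. A common building block for this is Newton-style projection along the gradient (an assumed sketch; the paper's exact inbetweening algorithm may differ):

```python
def project_to_curve(f, grad, p, iters=20):
    """Project a point onto the implicit curve f(x, y) = 0 by repeated
    Newton steps along the gradient direction. `grad` returns (df/dx, df/dy).
    Used to pull a drifting inbetween sample back onto the motion path."""
    x, y = p
    for _ in range(iters):
        fx = f(x, y)
        gx, gy = grad(x, y)
        g2 = gx * gx + gy * gy
        if g2 < 1e-12:          # degenerate gradient: cannot project
            break
        x -= fx * gx / g2
        y -= fx * gy / g2
    return x, y
```

For the unit circle f(x, y) = x² + y² − 1, the point (2, 0) projects to (1, 0), the nearest point on the curve.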
Modeling expressive wrinkle on human face
Nurazlin Zainal Azmi, R. Rahmat, R. Mahmod
Wrinkles are important for realistic facial animation and modeling because they help convey a person's expressions as well as their age. Various techniques have been used to generate wrinkles, whether fine-scale or large-scale. This paper presents a technique for modeling large-scale wrinkles, also known as expressive wrinkles, on a human face using points instead of triangular meshes. Wrinkles are modeled on a drawing basis: once the shape and location of a wrinkle have been specified directly on the 3D face mesh, users see the effect immediately. The data involved in modeling the wrinkle are then retrieved and processed, and a new wrinkle shape function is applied during this process to capture the realism of the generated wrinkle.
DOI: 10.1145/1174429.1174500 · Published: 2006-11-29
Citations: 1
A global hierarchical Z space algorithm for cluster parallel graphics architectures
A. Santilli, Ewa Huebner
In this paper we present a new global hierarchical Z-space sort-last algorithm for cluster parallel graphics architectures that improves upon algorithms used so far for high performance super-graphics. The new algorithm bypasses limitations of sort-last tile based parallelization paradigms, and solves some known Z-space parallelization inefficiencies. The algorithm is implemented as a global hierarchical-Z system which allows GPUs to perform high frequency global intra-frame Z-culling and distributed final frame Z-determination. The new algorithm allows for full one-to-one process-GPU coupling with minimal inter-process and inter-GPU communications. This enables maximal input bandwidth, maximum GPU utilization levels, near optimal load balances and improved efficiency when scaled to larger configurations.
DOI: 10.1145/1174429.1174451 · Published: 2006-11-29
Citations: 0
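The hierarchical-Z idea underlying the algorithm can be sketched with a max-depth pyramid and a conservative occlusion test. This is a simplified CPU illustration, not the paper's distributed system; the convention that larger z means farther is an assumption of the sketch:

```python
def build_hiz(depth):
    """Build a hierarchical-Z (max) pyramid from a square power-of-two
    depth buffer. Level 0 is the full-resolution buffer; each coarser
    level stores the farthest depth of a 2x2 block below it."""
    levels = [depth]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        n = len(prev) // 2
        levels.append([[max(prev[2 * i][2 * j], prev[2 * i][2 * j + 1],
                            prev[2 * i + 1][2 * j], prev[2 * i + 1][2 * j + 1])
                        for j in range(n)] for i in range(n)])
    return levels

def occluded(levels, level, i, j, z_near):
    """Conservative Z-cull: a region is certainly hidden if its nearest
    point is farther than the farthest depth stored in the Hi-Z cell."""
    return z_near > levels[level][i][j]
```

Because the test is conservative, a single coarse-level comparison can reject a whole tile of geometry without touching the fine depth buffer, which is what makes high-frequency intra-frame culling cheap.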
Tracking and video surveillance activity analysis
Michael Cheng, Binh Pham, D. Tjondronegoro
The explosion in the number of cameras surveilling the environment in recent years is generating a need for systems capable of analysing video streams for important events. This paper outlines a system for detecting noteworthy behaviours (from a security or surveillance perspective) which does not involve the enumeration of the event sequences of all possible activities of interest. Instead the focus is on calculating a measure of the abnormality of the action taking place. This raises the need for a low complexity tracking algorithm robust to the noise artefacts present in video surveillance systems. The tracking technique described herein achieves this goal by using a "future history" buffer of images and so delaying the classification and tracking of objects by the time quantum which is the buffer size. This allows disambiguation of noise blobs and facilitates classification in the case of occlusions and disappearance of people due to lighting, failures in the background model etc.
DOI: 10.1145/1174429.1174491 · Published: 2006-11-29
Citations: 2
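The "future history" buffer described above amounts to delaying classification by a fixed number of frames so that later frames can disambiguate noise blobs. A minimal sketch (the class and method names are hypothetical, not from the paper):

```python
from collections import deque

class FutureHistoryTracker:
    """Hold frames in a fixed-size buffer and release each one for
    classification only after `delay` newer frames have arrived, so the
    classifier can consult the frame's 'future' when resolving occlusions
    and transient noise."""

    def __init__(self, delay):
        self.delay = delay
        self.buffer = deque()

    def push(self, frame):
        """Add the newest frame; return the frame that is now old enough
        to classify, or None while the buffer is still filling."""
        self.buffer.append(frame)
        if len(self.buffer) > self.delay:
            return self.buffer.popleft()
        return None
```

With a delay of two frames, the first frame is only released once the third arrives, so two frames of "future" context are always available for the frame being classified.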
Learned deformable skeletons for motion capture based animation
Alyssa Lees
This paper presents a novel approach for automatically adding expressive attributes to motion capture based animations by utilizing learned behavior from the natural deformations of the human skeleton. The author envisions this system as part of a larger toolbox that enables animators to quickly modify the emotional qualities of motion capture data.
DOI: 10.1145/1174429.1174440 · Published: 2006-11-29
Citations: 0