
Proceedings of the 30th Spring Conference on Computer Graphics: Latest Publications

Fast and Furious: How the web got turbo charged just in time…
Pub Date : 2022-03-17 DOI: 10.1145/2643188.3527459
Citations: 0
Fast and Furious: How the web got turbo charged just in time?
Pub Date : 2022-03-17 DOI: 10.1145/2643188.3527460
M. Franz
Citations: 0
Cheap rendering vs. costly annotation: rendered omnidirectional dataset of vehicles
Pub Date : 2014-05-28 DOI: 10.1145/2643188.2643191
Peter Slosár, Roman Juránek, A. Herout
Detection of vehicles in traffic surveillance needs good and large training datasets in order to achieve competitive detection rates. We show an approach to automatic synthesis of custom datasets, simulating various major influences: viewpoint, camera parameters, sunlight, surrounding environment, etc. Our goal is to create a competitive vehicle detector which "has not seen a real car before." We use Blender as the modeling and rendering engine. A suitable scene graph, accompanied by a set of scripts, was created that allows simple configuration of the synthesized dataset. The generator is also capable of storing a rich set of metadata that is used as annotations of the synthesized images. We synthesized several experimental datasets and evaluated their statistical properties as compared to real-life datasets. Most importantly, we trained a detector on the synthetic data. Its detection performance is comparable to a detector trained on a state-of-the-art real-life dataset. Synthesis of a dataset of 10,000 images takes only several hours, which is much more efficient than manual annotation, to say nothing of the possibility of human error in annotation.
Citations: 4
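The abstract's key point is that rendered images come with free, exact annotations: every viewpoint and lighting parameter is chosen by the generator, so nothing needs manual labelling. The paper's Blender scene graph is not reproduced here; the sketch below only illustrates the parameter-sampling side of such a generator, with all field names and value ranges invented for illustration.

```python
import json
import random

def synth_annotations(n_images, seed=0):
    """Sample rendering parameters for a synthetic vehicle dataset.

    Each record doubles as the image's annotation: because the scene is
    generated, viewpoint and lighting are known exactly, with no manual
    labelling and no annotation error.
    """
    rng = random.Random(seed)  # seeded, so a dataset can be regenerated exactly
    records = []
    for i in range(n_images):
        records.append({
            "image": f"car_{i:05d}.png",
            "azimuth_deg": rng.uniform(0.0, 360.0),    # camera orbit around the car
            "elevation_deg": rng.uniform(5.0, 60.0),   # camera height angle
            "focal_mm": rng.choice([24, 35, 50, 85]),  # camera intrinsics
            "sun_elevation_deg": rng.uniform(10.0, 80.0),
        })
    return records

records = synth_annotations(4)
print(json.dumps(records[0], indent=2))
```

A real generator would hand each record to the renderer and write the metadata next to the image; the point of the sketch is only that the annotation is a by-product of the sampling, not a separate labelling step.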
Adaptive BVH: an evaluation of an efficient shared data structure for interactive simulation
Pub Date : 2014-05-28 DOI: 10.1145/2643188.2643192
Colin Fowler, Michael J. Doyle, M. Manzke
The drive towards realistic simulations at interactive speeds has spurred research in both rendering and physical simulation. This heightened realism involves larger data sets and data structures, and comes at a high computational cost. We investigate simulations involving collision detection and real-time ray tracing and note similarities in the data structures used to accelerate them. Our investigation demonstrates that it is possible to utilize a single Acceleration Data Structure (ADS) for both subsystems of an interactive simulation, even though they benefit from different characteristics. Typically, the collision detection and ray-tracing systems build ADSs that satisfy their specific needs. We argue for a shared adaptive ADS that can be optimized for both collision detection and ray tracing. The collision detection system builds this adaptive ADS, and the ray-tracing algorithm uses the same ADS after the collision detection system has resolved potential collisions, thereby saving memory, execution time and power. The results show that compromises need not be made on build heuristics. Furthermore, the ADS may be optimized for primary and secondary rays and consequently save more memory, execution time and large amounts of power.
Citations: 1
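The core observation of the abstract is that collision detection and ray tracing can both traverse one bounding volume hierarchy, just with different queries. The paper's adaptive 3-D BVH is not reproduced here; as a rough illustration of the sharing idea, the following sketch builds one interval BVH (1-D intervals standing in for 3-D AABBs, all names hypothetical) and runs both an overlap query and a ray query against it.

```python
class Node:
    """One BVH node over a 1-D interval [lo, hi]."""
    def __init__(self, lo, hi, left=None, right=None, prim=None):
        self.lo, self.hi = lo, hi            # bounding interval
        self.left, self.right = left, right  # children (internal nodes)
        self.prim = prim                     # primitive id (leaf nodes)

def build(prims):
    """prims: list of (id, lo, hi); median split on interval centers."""
    if len(prims) == 1:
        pid, lo, hi = prims[0]
        return Node(lo, hi, prim=pid)
    prims = sorted(prims, key=lambda p: (p[1] + p[2]) / 2)
    mid = len(prims) // 2
    l, r = build(prims[:mid]), build(prims[mid:])
    return Node(min(l.lo, r.lo), max(l.hi, r.hi), l, r)

def overlap(node, lo, hi, out):
    """Collision-style query: collect primitives overlapping [lo, hi]."""
    if node.hi < lo or node.lo > hi:
        return  # prune: bounds disjoint from the query interval
    if node.prim is not None:
        out.append(node.prim)
        return
    overlap(node.left, lo, hi, out)
    overlap(node.right, lo, hi, out)

def ray_hit(node, origin):
    """Ray-style query: nearest primitive hit by a ray from origin toward +inf."""
    if node.hi < origin:
        return None  # whole subtree behind the ray origin
    if node.prim is not None:
        return (max(node.lo, origin), node.prim)  # (entry point, id)
    hits = [h for h in (ray_hit(node.left, origin),
                        ray_hit(node.right, origin)) if h]
    return min(hits, default=None)

# One tree answers both query kinds:
root = build([("a", 0, 1), ("b", 2, 3), ("c", 5, 6)])
hits = []
overlap(root, 0.5, 2.5, hits)  # collision-detection view of the tree
first = ray_hit(root, 4.0)     # ray-tracing view of the same tree
```

The sketch shows why sharing is cheap: both queries need only the pruning property of the hierarchy, so once the collision system has built (or refitted) the tree for a frame, the ray queries reuse it without a second build.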
Computation and perception: building better displays
Pub Date : 2014-05-28 DOI: 10.1145/2643188.2699750
D. Gutierrez
Computational displays have recently emerged as a fascinating new research area. By combining smart processing with novel optics and electronics, their ultimate goal is to provide a better viewing experience. This may be achieved by means of an extended dynamic range, better color reproduction, or even glasses-free stereoscopic techniques. However, no matter what the improvements are, these will always be bounded by the limitations imposed by current technology. We argue that by adding perceptual models of human vision to the design of the displays, some of these hard limitations can be circumvented, providing an enhanced viewing experience beyond what should be physically and technically possible. In this paper we show examples of how such a perceptually-based strategy is currently being applied in different prototype implementations.
Citations: 0
Live ultrasound-based particle visualization of blood flow in the heart
Pub Date : 2014-05-28 DOI: 10.1145/2643188.2643200
Paolo Angelelli, S. Snare, H. Hauser, S. Nyrnes, L. Løvstakken, S. Bruckner
We introduce an integrated method for the acquisition, processing and visualization of live, in-vivo blood flow in the heart. The method is based on ultrasound imaging, using a plane-wave acquisition protocol, which produces high-frame-rate ensemble data that are efficiently processed to extract directional flow information not previously available from conventional Doppler imaging. These data are then visualized using a tailored pathlet-based visualization approach to convey the slice-contained dynamic movement of the blood in the heart. This is especially important when imaging patients with possible congenital heart diseases, who typically exhibit complex flow patterns that are challenging to interpret. With this approach, it is now possible for the first time to achieve a real-time, integration-based visualization of 2D blood flow based on ultrasonic imaging. We demonstrate our solution on selected cases of congenital heart disease in neonates, showing how our technique allows for a more accurate and intuitive visualization of shunt flow and vortices.
Citations: 10
History of SCCG
Pub Date : 2014-05-28 DOI: 10.1145/2643188.2700584
E. Ruzický, A. Ferko
We present the past: conferencing activities over three decades, celebrating the 30th anniversary of SCCG. By chance, this coincides with 95 years of research and education at Comenius University Bratislava. The highlights include the global timeline milestones, the oldest regular graphics conference in Central European history, the collocated and world-unique international student seminar CESCG, and an inevitably open conclusion.
Citations: 0
Multiple instances object detection
Pub Date : 2014-05-28 DOI: 10.1145/2643188.2643190
Z. Haladová, E. Sikudová
Since the beginning of the new century, the growing popularity of markerless augmented reality (AR) applications has inspired research in the area of object instance detection, registration and tracking. The usage of common daily objects or specially developed fliers or magazines (e.g. IKEA's) as AR markers has become more popular than traditional ARToolKit-like black/white patterns. Although many different methods for object instance detection emerge every year, very little attention is paid to the case where multiple instances of the same object are present in the scene and need to be augmented (e.g. a table full of fliers, several exemplars of historical coins in a museum, etc.). In this paper we review existing methods of multiple instance detection and propose a new method for grayscale images that overcomes the limitations of previous methods.
Citations: 2
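What separates multiple-instance detection from the single-instance case, as the abstract notes, is that the detector cannot simply take the single best response: it must keep every sufficiently strong, mutually distinct peak. The paper's method is not reproduced here; a minimal, generic sketch of that selection step (greedy non-maximum suppression over a score map, with all thresholds and data invented) looks like this:

```python
def detect_instances(scores, threshold, radius):
    """Greedy non-maximum suppression over a detector score map.

    scores: {(x, y): score} from any instance detector (e.g. template
    matching on a grayscale image). Keeps every peak above `threshold`
    that is not within `radius` of an already-kept, stronger peak, so
    several instances of the same object survive, unlike a single
    argmax pick.
    """
    kept = []
    for (x, y), s in sorted(scores.items(), key=lambda kv: -kv[1]):
        if s < threshold:
            break  # remaining responses are weaker still
        if all((x - kx) ** 2 + (y - ky) ** 2 > radius ** 2
               for kx, ky, _ in kept):
            kept.append((x, y, s))
    return kept

# Two true instances plus noisy neighbours of the stronger one:
scores = {(10, 10): 0.9, (11, 10): 0.8, (40, 40): 0.7,
          (12, 11): 0.6, (80, 5): 0.2}
peaks = detect_instances(scores, threshold=0.5, radius=5)
```

Here the neighbours of the (10, 10) peak are suppressed while the genuinely separate instance at (40, 40) is kept, which is the behaviour any multiple-instance pipeline needs after scoring.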
Kinect-supported dataset creation for human pose estimation
Pub Date : 2014-05-28 DOI: 10.1145/2643188.2643195
Kamil Behún, A. Herout, A. Páldy
Training and evaluation datasets for specific tasks of human pose estimation are hard to find. This paper presents an approach for rapid construction of a precisely annotated training dataset for human pose estimation of a sitting subject, intended especially for the aeronautic cockpit. We propose to use Kinect as a tool for collecting ground truth for a purely visual dataset (for reasons defined by the application, use of Kinect or similar structured-light-based approaches is impossible). Since Kinect annotation of individual joints might be imprecise at certain moments, manual post-processing of the acquired data is necessary, and we propose a scheme for efficient and reliable manual post-annotation. We produced a dataset of 6,322 annotated frames, involving 11 human subjects recorded in various lighting conditions, different clothing, and varying backgrounds. Each frame contains one seated person in frontal view with annotation of pose and optical flow data. We used detectors of body parts based on Random Forests on the produced dataset in order to verify its usability. These preliminary results show that the detector can be trained successfully on the developed dataset and that the optical flow contributes considerably to the detection accuracy. The dataset and the intermediary data used during its creation are made publicly available. By this, we intend to support further research and evaluation in the specific topic of human pose estimation focused on a sitting subject in a cockpit scenario.
Citations: 4
Evaluating the covariance matrix constraints for data-driven statistical human motion reconstruction
Pub Date : 2014-05-28 DOI: 10.1145/2643188.2643199
Christos Mousas, Paul F. Newbury, C. Anagnostopoulos
This paper presents the evaluation of a character's motion reconstruction while constraints are applied to the covariance matrix of the motion-prior learning process. For the evaluation, a maximum a posteriori (MAP) framework is first generated, which receives input trajectories and reconstructs the motion of the character. Then, using various methods to constrain the covariance matrix, information that reflects certain assumptions about the motion reconstruction process is retrieved. Each of the covariance matrix constraints is evaluated by its ability to reconstruct the desired motion sequences, either using a large amount of motion data or using a small dataset that contains only specific motions.
Citations: 20
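The paper's full MAP framework over motion priors is well beyond an index entry, but why the covariance constraint matters can be seen already in a two-variable Gaussian sketch (all numbers invented): under a Gaussian prior, the MAP estimate of a hidden degree of freedom given an observed one is the conditional mean, and it is exactly the off-diagonal covariance entries that couple the two.

```python
def map_hidden(mu_o, mu_h, var_o, cov_oh, y):
    """MAP estimate (= posterior mean in the Gaussian case) of a hidden
    joint value given one observed trajectory value y, under a bivariate
    Gaussian motion prior. cov_oh is the learned coupling between the
    observed and hidden degrees of freedom."""
    return mu_h + cov_oh / var_o * (y - mu_o)

# Learned full prior: observed and hidden angles move together (cov = 0.8),
# so an observation of 2.0 pulls the hidden estimate toward 1.6.
full = map_hidden(mu_o=0.0, mu_h=0.0, var_o=1.0, cov_oh=0.8, y=2.0)

# Diagonal-constrained prior: the correlation is zeroed out, so the
# reconstruction ignores the observation and falls back to the prior mean.
diag = map_hidden(mu_o=0.0, mu_h=0.0, var_o=1.0, cov_oh=0.0, y=2.0)
```

This is the trade-off the paper evaluates at scale: a constrained covariance needs less data to estimate reliably, but discards exactly the cross-correlations that let observed trajectories inform the unobserved ones.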