
Graphics and Visual Computing: Latest Publications

A point selection strategy with edge and line detection for Direct Sparse Visual Odometry
Pub Date : 2022-06-01 DOI: 10.1016/j.gvc.2022.200051
Yinming Miao, Masahiro Yamaguchi

In most feature-based Visual Simultaneous Localization and Mapping systems, pixels in the current image are matched with corresponding pixels in previous images, and the difference in pixel coordinates reveals the camera's motion. Unlike feature-based systems, direct methods operate on image intensity directly: every pixel in the image, or a subset of pixels with sufficient intensity gradient, can be used. However, image noise may degrade these algorithms when pixels are not selected carefully. In this work, we propose a new pixel selection method for a direct visual odometry system that focuses on edge pixels, which are usually more stable and repeatable than ordinary pixels. We apply traditional edge detection with adaptive parameters to obtain rough edge results, then separate the edges by gradient and shape, and use straightness, smoothness, length, and gradient magnitude to select the meaningful edges. We replace the pixel selection step of Direct Sparse Odometry and of Direct Sparse Odometry with Loop Closure, and evaluate on open datasets. The experimental results indicate that our method improves the performance of existing direct visual odometry systems in man-made scenes but is not suitable for purely natural scenes.
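The selection criteria named above (straightness, smoothness, length, and gradient magnitude) can be illustrated with a minimal sketch. This is not the authors' implementation: the adaptive threshold rule, the scoring formulas, and all function names below are illustrative assumptions, and angle wrap-around is ignored for simplicity.

```python
import numpy as np

def gradient_magnitude(img):
    # Per-pixel intensity gradient magnitude via central differences.
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def adaptive_edge_mask(img, k=1.5):
    # Threshold relative to the image's own gradient statistics, so the
    # detector adapts to scene contrast instead of using fixed parameters.
    mag = gradient_magnitude(img)
    mask = mag > np.median(mag) + k * mag.std()
    return mask, mag

def segment_score(pixels, mag):
    # pixels: (N, 2) sequence of (row, col) coordinates along one edge chain.
    pixels = np.asarray(pixels, dtype=float)
    steps = np.diff(pixels, axis=0)
    length = np.linalg.norm(steps, axis=1).sum()
    chord = np.linalg.norm(pixels[-1] - pixels[0])
    straightness = chord / max(length, 1e-9)      # 1.0 for a straight line
    angles = np.arctan2(steps[:, 0], steps[:, 1])
    # Smoothness: small direction changes between successive steps score high.
    smoothness = 1.0 - np.abs(np.diff(angles)).mean() / np.pi if len(angles) > 1 else 1.0
    grad = mag[pixels[:, 0].astype(int), pixels[:, 1].astype(int)].mean()
    return straightness, smoothness, length, grad

def keep_segment(pixels, mag, min_len=8, min_straight=0.8, min_smooth=0.7, min_grad=0.0):
    # An edge chain survives only if it is long, straight, smooth and strong.
    s, sm, length, g = segment_score(pixels, mag)
    return length >= min_len and s >= min_straight and sm >= min_smooth and g >= min_grad
```

On a synthetic step edge, a straight vertical chain along the step passes all four tests, while a short zigzag chain is rejected by the straightness criterion.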

Citations: 1
Overcoming challenges when teaching hands-on courses about Virtual Reality and Augmented Reality: Methods, techniques and best practice
Pub Date : 2022-06-01 DOI: 10.1016/j.gvc.2021.200037
Ralf Doerner, Robin Horst

This paper presents methods and techniques for teaching Virtual Reality (VR) and Augmented Reality (AR) that were conceived and refined during more than 20 years of our teaching experience on these subjects in higher education. We cover a broad spectrum, from acquainting learners with VR and AR as just one aspect of a more general course to an in-depth, semester-long course on VR and AR. The focus of the paper is on methods and techniques that allow learners not only to study VR and AR at a theoretical level but also to have their own VR and AR experiences with all senses, fostering hands-on learning. We show why this is challenging (e.g., the high workload involved in preparing hands-on experiences and the large amount of course time that must be devoted) and how these challenges can be met (e.g., using our Circuit Parcours Technique). Moreover, we discuss learning goals beyond hands-on experience that can be addressed in VR and AR courses using our methods and techniques. Finally, we provide best-practice examples that can serve as blueprints for parts of a VR and AR course.

Citations: 9
GRSI Best Paper Award
Pub Date : 2021-12-01 DOI: 10.1016/S2666-6294(21)00020-6
Mashhuda Glencross, Daniele Panozzo, Joaquim Jorge
Citations: 0
Robust marker-based projector–camera synchronization
Pub Date : 2021-12-01 DOI: 10.1016/j.gvc.2021.200034
Vanessa Klein , Martin Edel , Marc Stamminger , Frank Bauer

Recording clean pictures of projected images requires the projector and camera to be synchronized. This usually demands additional hardware, or software-based approaches that impose major restrictions on the devices, e.g., a specific camera frame rate. We present a novel software-based synchronization technique that supports projectors and cameras with different frame rates while tolerating dropped camera frames. We focus on the special needs of LCD projectors and the effect of their liquid-crystal response time on the projected image. By relying on visible marker detection, we avoid time measurements entirely, allowing for robust and fast synchronization.
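The core idea, deciding from image content alone whether the projected image has settled, can be sketched as follows. The paper uses visible marker detection; this simplified simulation (the function names, the bounding-box marker region, and the stability test are my assumptions, not the authors' algorithm) treats a capture as clean once the marker region stops changing between consecutive captures, i.e. once the LCD response has settled.

```python
import numpy as np

def marker_settled(prev_capture, capture, marker_region, tol=2.0):
    # marker_region: (row0, row1, col0, col1) bounding box of the marker.
    # During an LCD transition, consecutive captures of the marker differ;
    # once the liquid crystals settle, they match within sensor noise.
    r0, r1, c0, c1 = marker_region
    a = prev_capture[r0:r1, c0:c1].astype(float)
    b = capture[r0:r1, c0:c1].astype(float)
    return float(np.abs(a - b).mean()) < tol

def select_clean_frames(captures, marker_region, tol=2.0):
    # Keep indices of captures taken after the projected image settled.
    return [i for i in range(1, len(captures))
            if marker_settled(captures[i - 1], captures[i], marker_region, tol)]
```

Feeding in a simulated transition (black frame, half-intensity mid-transition frame, then two identical white frames) keeps only the final, settled capture.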

Citations: 0
A comprehensive evaluation of deep models and optimizers for Indian sign language recognition
Pub Date : 2021-12-01 DOI: 10.1016/j.gvc.2021.200032
Prachi Sharma, Radhey Shyam Anand

Deep learning has been popular among researchers for a long time, and new deep convolutional neural networks still appear frequently. However, selecting the best among such networks is challenging because of their dependence on the tuning of optimization hyperparameters, which is itself a non-trivial task. This situation motivates the current study, in which we perform a systematic evaluation and statistical analysis of pre-trained deep models. It is the first comprehensive analysis of pre-trained deep models, gradient-based optimizers and optimization hyperparameters for static Indian sign language (ISL) recognition. A three-layered CNN model is also proposed and trained from scratch; it attained the best recognition accuracy, 99.0% on numerals and 97.6% on alphabets of a public ISL dataset. Among pre-trained models, ResNet152V2 performed best, with a recognition accuracy of 96.2% on numerals and 90.8% on alphabets of the ISL dataset. Our results reinforce the hypothesis that an adequately tuned pre-trained deep network can outperform state-of-the-art machine learning techniques for ISL recognition without training the whole model, only a few top layers. The effect of hyperparameters such as learning rate, batch size and momentum is also analyzed and presented in the paper.
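The sensitivity to optimizer hyperparameters that motivates the study can be demonstrated on a toy problem. This sketch is not from the paper: it compares plain SGD against SGD with heavy-ball momentum on an ill-conditioned quadratic, where changing a single hyperparameter changes the result substantially.

```python
import numpy as np

def sgd(grad, x0, lr=0.02, momentum=0.0, steps=100):
    # Gradient descent with classical (heavy-ball) momentum.
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v = momentum * v - lr * grad(x)
        x = x + v
    return x

# Ill-conditioned quadratic f(x) = 0.5 * (10*x0**2 + x1**2); diagonal gradient.
grad = lambda x: np.array([10.0, 1.0]) * x

plain = sgd(grad, [1.0, 1.0], lr=0.02, momentum=0.0)
with_momentum = sgd(grad, [1.0, 1.0], lr=0.02, momentum=0.9)
```

With the same learning rate and step budget, the momentum run ends much closer to the minimum along the poorly scaled direction, which is why optimizers and their hyperparameters have to be evaluated jointly rather than in isolation.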

Citations: 12
Erratum to “Foreword to the Special Section on CAD/Graphics 2021” [Graph. Vis. Comput. 4 (2021) 200027]
Pub Date : 2021-12-01 DOI: 10.1016/j.gvc.2021.200031
Juyong Zhang, Rui Wang, Giuseppe Patanè
Citations: 0
Volumetric procedural models for shape representation
Pub Date : 2021-06-01 DOI: 10.1016/j.gvc.2021.200018
Andrew R. Willis , Prashant Ganesh , Kyle Volle , Jincheng Zhang , Kevin Brink

This article describes a volumetric approach for procedural shape modeling and a new Procedural Shape Modeling Language (PSML) that facilitates the specification of these models. PSML provides programmers the ability to describe shapes in terms of their 3D elements where each element may be a semantic group of 3D objects, e.g., a brick wall, or an indivisible object, e.g., an individual brick. Modeling shapes in this manner facilitates the creation of models that more closely approximate the organization and structure of their real-world counterparts. As such, users may query these models for volumetric information such as the number, position, orientation and volume of 3D elements which cannot be provided using surface based model-building techniques. PSML also provides a number of new language-specific capabilities that allow for a rich variety of context-sensitive behaviors and post-processing functions. These capabilities include an object-oriented approach for model design, methods for querying the model for component-based information and the ability to access model elements and components to perform Boolean operations on the model parts. PSML is open-source and includes freely available tutorial videos, demonstration code and an integrated development environment to support writing PSML programs.
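The kind of volumetric query PSML enables, counting and measuring semantic elements rather than surfaces, can be mimicked in a few lines. This is a Python analogy of the idea, not PSML syntax; all class and method names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Box:
    # An indivisible volumetric element, e.g. an individual brick.
    name: str
    size: tuple  # (width, height, depth)

    def volume(self):
        w, h, d = self.size
        return w * h * d

@dataclass
class Group:
    # A semantic group of elements, e.g. a brick wall.
    name: str
    children: list = field(default_factory=list)

    def volume(self):
        # Total volume of all contained elements.
        return sum(c.volume() for c in self.children)

    def count(self, element_name):
        # Recursively count named elements; this is the sort of query a
        # purely surface-based model cannot answer.
        total = 0
        for c in self.children:
            if isinstance(c, Group):
                total += c.count(element_name)
            elif c.name == element_name:
                total += 1
        return total
```

A "brick wall" group built from 100 bricks of 0.2 × 0.1 × 0.1 units then reports 100 bricks and a total volume of 0.2 cubic units directly from the model structure.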

Citations: 3
Computer Graphics teaching challenges: Guidelines for balancing depth, complexity and mentoring in a confinement context
Pub Date : 2021-06-01 DOI: 10.1016/j.gvc.2021.200021
Rui Rodrigues , Teresa Matos , Alexandre Valle de Carvalho , Jorge G. Barbosa , Rodrigo Assaf , Rui Nóbrega , António Coelho , A. Augusto de Sousa

We discuss challenges, methodologies, and approaches for teaching Computer Graphics (CG) courses in a confinement context, together with an assessment of the experience and a proposal of guidelines. Our approach balances the depth of CG topics with creating relevant and attractive content for a CG course, while coping with communication, support, and assessment issues. These are especially important in a pandemic context, where online classes may reduce students' engagement and hinder communication with educators. We refined the model used over recent years, based on a two-stage approach (first tutorial-based, then project-based) relying on an in-house WebGL-based educational library, WebCGF, that simplifies onboarding while keeping connections to the underlying concepts and technologies. The confinement constraints led us to complement that model with additional collaborative tools and mentoring strategies. Apart from the standard synchronous remote classes, these included a group communication tool for structured community engagement and video presentation, and a Git-based code management system specifically configured for classes and groups, which allowed following each student's development process more closely. Results show that performance and student engagement were similar to those of recent years, leading us to a set of guidelines to consider in these contexts.

Citations: 3
Single trunk multi-scale network for micro-expression recognition
Pub Date : 2021-06-01 DOI: 10.1016/j.gvc.2021.200026
Jie Wang , Xiao Pan , Xinyu Li , Guangshun Wei , Yuanfeng Zhou

Micro-expressions are external manifestations of human psychological activity. Micro-expression recognition therefore has important research and application value in many fields, such as public services, criminal investigation, and clinical diagnosis. However, the particular characteristics of micro-expressions (e.g., short duration and subtle changes) pose great challenges to their recognition. In this paper, we exploit the differences in the direction of facial muscle movement across expressions to recognize micro-expressions. We first use optical flow to capture the subtle changes in facial movement when a micro-expression occurs. Next, we extract the facial movement information into an aniso-weighted optical flow image by anisotropically weighting the horizontal and vertical components of the optical flow. Finally, we feed the aniso-weighted optical flow image into the proposed Single Trunk Multi-scale Network for micro-expression recognition. In particular, the multi-scale feature catcher designed into the network can capture micro-expression features of different intensities. We conduct extensive experiments on four spontaneous micro-expression datasets, and the results show that our proposed method is competitive and effective.
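The anisotropic weighting step can be sketched directly. The abstract does not give the paper's actual weights, so the weight values and function name below are assumptions, used only to illustrate the idea of emphasising one flow direction over the other.

```python
import numpy as np

def aniso_weighted_flow_image(u, v, wx=1.0, wy=2.0):
    # u, v: horizontal and vertical optical-flow components (same shape).
    # Weighting the components anisotropically (here wy > wx) makes
    # vertical facial motion, e.g. brow or lip movement, more salient.
    mag = np.sqrt((wx * u) ** 2 + (wy * v) ** 2)
    peak = mag.max()
    if peak > 0:
        mag = mag / peak          # normalize to [0, 1]
    return (255 * mag).astype(np.uint8)   # 8-bit image for the network input
```

With wy twice wx, a purely vertical flow vector maps to full intensity while a horizontal vector of the same magnitude maps to half intensity, so the resulting image encodes movement direction, not just movement strength.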

Citations: 5
Foreword to the special section on Computer Graphics education in the time of Covid
Pub Date : 2021-06-01 DOI: 10.1016/j.gvc.2021.200028
Beatriz Sousa Santos, Gitta Domik, Eike Anderson
Citations: 1