Proceedings. Graphics Interface (Conference): Latest Publications

Learning Multiple Mappings: an Evaluation of Interference, Transfer, and Retention with Chorded Shortcut Buttons
Pub Date: 2019-12-21, DOI: 10.20380/GI2020.21
C. Gutwin, Carl-Eike Hofmeister, David Ledo, Alix Goguey
Touch interactions with current mobile devices have limited expressiveness. Augmenting devices with additional degrees of freedom can add power to the interaction, and several augmentations have been proposed and tested. However, there is still little known about the effects of learning multiple sets of augmented interactions that are mapped to different applications. To better understand whether multiple command mappings can interfere with one another, or affect transfer and retention, we developed a prototype with three pushbuttons on a smartphone case that can be used to provide augmented input to the system. The buttons can be chorded to provide seven possible shortcuts or transient mode switches. We mapped these buttons to three different sets of actions, and carried out a study to see if multiple mappings affect learning and performance, transfer, and retention. Our results show that all of the mappings were quickly learned and there was no reduction in performance with multiple mappings. Transfer to a more realistic task was successful, although with a slight reduction in accuracy. Retention after one week was initially poor, but expert performance was quickly restored. Our work provides new information about the design and use of chorded buttons for augmenting input in mobile interactions.
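With three buttons there are 2^3 - 1 = 7 non-empty press combinations, which is where the seven shortcuts come from. A minimal sketch of chord decoding in Python; the button layout and the command assignments are hypothetical illustrations, not the mappings used in the study:

```python
from typing import Optional

# Hypothetical chord table: each non-empty combination of the three
# case-mounted buttons (b1, b2, b3) is bound to one command.
CHORD_MAP = {
    (True,  False, False): "copy",
    (False, True,  False): "paste",
    (False, False, True):  "cut",
    (True,  True,  False): "undo",
    (True,  False, True):  "redo",
    (False, True,  True):  "select-all",
    (True,  True,  True):  "mode-switch",  # held chord acts as a transient mode
}

def decode_chord(b1: bool, b2: bool, b3: bool) -> Optional[str]:
    """Return the command bound to the current button state, or None when idle."""
    return CHORD_MAP.get((b1, b2, b3))
```

Swapping in a different dictionary per application is all it takes to realize the three command mappings the study compares.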
Citations: 1
Gedit: Keyboard Gestures for Mobile Text Editing
Pub Date: 2019-12-21, DOI: 10.20380/GI2020.47
M. Zhang, J. Wobbrock
Text editing on mobile devices can be a tedious process. To perform various editing operations, a user must repeatedly move his or her fingers between the text input area and the keyboard, making multiple round trips and breaking the flow of typing. In this work, we present Gedit, a system of on-keyboard gestures for convenient mobile text editing. Our design includes a ring gesture and flicks for cursor control, bezel gestures for mode switching, and four gesture shortcuts for copy, paste, cut, and undo. Variations of our gestures exist for one and two hands. We conducted an experiment to compare Gedit with the de facto touch+widget-based editing interactions. Our results showed that Gedit's gestures were easy to learn, 24% and 17% faster than the de facto interactions for one- and two-handed use, respectively, and preferred by participants.
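A gesture set like this ultimately reduces to routing recognizer events to editor commands. A minimal sketch of such a dispatcher; the gesture vocabulary follows the abstract, but the identifiers and editor API are assumptions made for illustration, not Gedit's actual implementation:

```python
# Illustrative map from recognized on-keyboard gestures to editing operations.
GESTURE_COMMANDS = {
    "flick_left":    "cursor_left",
    "flick_right":   "cursor_right",
    "ring_cw":       "cursor_forward",    # ring gesture scrubs the cursor
    "ring_ccw":      "cursor_backward",
    "bezel_swipe":   "toggle_selection",  # bezel gesture switches editing mode
    "gesture_copy":  "copy",
    "gesture_paste": "paste",
    "gesture_cut":   "cut",
    "gesture_undo":  "undo",
}

def handle_gesture(gesture: str, editor) -> bool:
    """Route a gesture to the editor; return False to fall through to typing."""
    command = GESTURE_COMMANDS.get(gesture)
    if command is None:
        return False
    getattr(editor, command)()
    return True
```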
Citations: 8
Exploring Video Conferencing for Doctor Appointments in the Home: A Scenario-Based Approach from Patients' Perspectives
Pub Date: 2019-12-21, DOI: 10.20380/GI2020.04
Dongqi Han, Yasamin Heshmat, Carman Neustaedter
We are beginning to see changes to health care systems where patients are now able to visit their doctor using video conferencing appointments. Yet we know little of how such systems should be designed to meet patients' needs. We used a scenario-based design method with video prototyping and conducted patient-centered contextual interviews with people to learn about their reactions to futuristic video-based appointments. Results show that video-based appointments differ from face-to-face consultations in terms of accessibility, relationship building, camera work, and privacy issues. These results illustrate design challenges for video calling systems that can support video-based appointments between doctors and patients, with an emphasis on providing adequate camera control, support for showing empathy, and mitigating privacy concerns.
Citations: 2
Interactive Shape Based Brushing Technique for Trail Sets
Pub Date: 2019-12-21, DOI: 10.20380/GI2020.25
Almoctar Hassoumi, M. Lobo, Gabriel Jarry, Vsevolod Peysakhovich, C. Hurter
Brushing techniques have a long history with the first interactive selection tools appearing in the 1990s. Since then, many additional techniques have been developed to address selection accuracy, scalability and flexibility issues. Selection is especially difficult in large datasets where many visual items tangle and create overlapping. Existing techniques rely on trial and error combined with many view modifications such as panning, zooming, and selection refinements. For moving object analysis, recorded positions are connected into line segments forming trajectories and thus creating more occlusions and overplotting. As a solution for selection in cluttered views, this paper investigates a novel brushing technique which not only relies on the actual brushing location but also on the shape of the brushed area. The process can be described as follows. Firstly, the user brushes the region where trajectories of interest are visible (standard brushing technique). Secondly, the shape of the brushed area is used to select similar items. Thirdly, the user can adjust the degree of similarity to filter out the requested trajectories. This brushing technique encompasses two types of comparison metrics, the piecewise Pearson correlation and the similarity measurement based on information geometry. To show the efficiency of this novel brushing method, we apply it to concrete scenarios with datasets from air traffic control, eye tracking, and GPS trajectories.
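The core of the technique, scoring brushed trajectories by shape similarity, can be sketched directly: resample the brush path and each candidate trail to a common length, compare their coordinate profiles with a Pearson correlation, and keep trails above the user's similarity threshold. A NumPy sketch under those assumptions (comparing x/y profiles of arc-length-resampled polylines is one plausible reading of the piecewise correlation, not the paper's exact formulation):

```python
import numpy as np

def resample(points, n=32):
    """Arc-length resampling of a polyline to n evenly spaced points."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]
    u = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(u, t, pts[:, k]) for k in range(pts.shape[1])])

def pearson(a, b):
    """Pearson correlation of two equal-length profiles."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def shape_similarity(brush, trail, n=32):
    """Mean Pearson correlation of the x and y profiles of the two shapes."""
    b, t = resample(brush, n), resample(trail, n)
    return 0.5 * (pearson(b[:, 0], t[:, 0]) + pearson(b[:, 1], t[:, 1]))

def select_trails(brush, trails, threshold=0.8):
    """Third step of the process: filter by the adjustable similarity degree."""
    return [tr for tr in trails if shape_similarity(brush, tr) >= threshold]
```

Lowering `threshold` widens the selection, which corresponds to the user adjusting the degree of similarity described in the abstract.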
Citations: 2
Exploring the Design of Patient-Generated Data Visualizations
Pub Date: 2019-12-21, DOI: 10.20380/GI2020.36
F. Rajabiyazdi, Charles Perin, L. Oehlberg, Sheelagh Carpendale
We were approached by a group of healthcare providers who are involved in the care of chronic patients looking for potential technologies to facilitate the process of reviewing patient-generated data during clinical visits. Aiming at understanding the healthcare providers' attitudes towards reviewing patient-generated data, we (1) conducted a focus group with a mixed group of healthcare providers. Next, to gain the patients' perspectives, we (2) interviewed eight chronic patients, collected a sample of their data and designed a series of visualizations representing patient data we collected. Last, we (3) sought feedback on the visualization designs from healthcare providers who requested this exploration. We found four factors shaping patient-generated data: data & context, patient's motivation, patient's time commitment, and patient's support circle. Informed by the results of our studies, we discussed the importance of designing patient-generated visualizations for individuals by considering both the patient and the healthcare provider, rather than designing with the purpose of generalization, and provided guidelines for designing future patient-generated data visualizations.
Citations: 8
Fine Feature Reconstruction in Point Clouds by Adversarial Domain Translation
Pub Date: 2019-12-21, DOI: 10.20380/GI2020.35
Prashant Raina, T. Popa, S. Mudur
Point cloud neighborhoods are unstructured and often lacking in fine details, particularly when the original surface is sparsely sampled. This has motivated the development of methods for reconstructing these fine geometric features before the point cloud is converted into a mesh, usually by some form of upsampling of the point cloud. We present a novel data-driven approach to reconstructing fine details of the underlying surfaces of point clouds at the local neighborhood level, along with normals and locations of edges. This is achieved by an innovative application of recent advances in domain translation using GANs. We “translate” local neighborhoods between two domains: point cloud neighborhoods and triangular mesh neighborhoods. This allows us to obtain some of the benefits of meshes at training time, while still dealing with point clouds at the time of evaluation. By resampling the translated neighborhood, we can obtain a denser point cloud equipped with normals that allows the underlying surface to be easily reconstructed as a mesh. Our reconstructed meshes preserve fine details of the original surface better than the state of the art in point cloud upsampling techniques, even at different input resolutions. In addition, the trained GAN can generalize to operate on low resolution point clouds even without being explicitly trained on low-resolution data. We also give an example demonstrating that the same domain translation approach we use for reconstructing local neighborhood geometry can also be used to estimate a scalar field at the newly generated points, thus reducing the need for expensive recomputation of the scalar field on the dense point cloud.
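At inference time the pipeline is essentially: cut out a local neighborhood around each point, run it through the learned translator, and stitch the densified patches back together. A skeletal NumPy version of that outer loop; `translate_patch` is a placeholder for the trained generator, and its interface is an assumption, since the abstract does not specify it:

```python
import numpy as np

def centered_knn_patches(points: np.ndarray, k: int = 32) -> np.ndarray:
    """Brute-force k-NN neighborhoods, re-centered on each query point."""
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    idx = np.argsort(dist, axis=1)[:, :k]     # nearest neighbors, self included
    return points[idx] - points[:, None, :]   # shape (N, k, 3)

def upsample_cloud(points, translate_patch, k: int = 32) -> np.ndarray:
    """Translate each local neighborhood to a denser patch and merge them."""
    dense = [translate_patch(patch) + center
             for center, patch in zip(points, centered_knn_patches(points, k))]
    return np.concatenate(dense, axis=0)
```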
Citations: 0
Part-Based 3D Face Morphable Model with Anthropometric Local Control
Pub Date: 2019-12-21, DOI: 10.20380/GI2020.03
Donya Ghafourzadeh, Cyrus Rahgoshay, Sahel Fallahdoust, A. Beauchamp, Adeline Aubame, T. Popa, Eric Paquette
We propose an approach to construct realistic 3D facial morphable models (3DMM) that allows an intuitive facial attribute editing workflow. Current face modeling methods using 3DMM suffer from a lack of local control. We thus create a 3DMM by combining local part-based 3DMM for the eyes, nose, mouth, ears, and facial mask regions. Our local PCA-based approach uses a novel method to select the best eigenvectors from the local 3DMM to ensure that the combined 3DMM is expressive, while allowing accurate reconstruction. The editing controls we provide to the user are intuitive as they are extracted from anthropometric measurements found in the literature. Out of a large set of possible anthropometric measurements, we filter those that have meaningful generative power given the face data set. We bind the measurements to the part-based 3DMM through mapping matrices derived from our data set of facial scans. Our part-based 3DMM is compact, yet accurate, and compared to other 3DMM methods, it provides a new trade-off between local and global control. We tested our approach on a data set of 135 scans used to derive the 3DMM, plus 19 scans that served for validation. The results show that our part-based 3DMM approach has excellent generative properties and allows the user intuitive local control.
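The generative side of a part-based PCA model is compact enough to sketch: each part reconstructs locally as mean plus basis times coefficients, parts are blended with per-vertex masks, and anthropometric measurements drive the coefficients through a linear mapping. All names and the exact blending scheme below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

class PartBasedMorphableFace:
    """Sketch of a part-based PCA morphable model over V face vertices."""

    def __init__(self, parts):
        # parts: {name: (mean (3V,), basis (3V, d), mask (3V,), mapping (d, m))}
        self.parts = parts

    def coeffs_from_measurements(self, measurements):
        """Bind anthropometric measurements to per-part PCA coefficients."""
        return {name: mapping @ measurements
                for name, (_, _, _, mapping) in self.parts.items()}

    def synthesize(self, coeffs):
        """Reconstruct each part locally, then mask-blend into one face."""
        num, den = 0.0, 1e-9
        for name, (mean, basis, mask, _) in self.parts.items():
            local = mean + basis @ coeffs[name]   # local PCA reconstruction
            num = num + mask * local
            den = den + mask
        return num / den
```

Editing one measurement, say nose width, then perturbs only the coefficients of the affected part, which is the local control the abstract describes.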
Citations: 12
AuthAR: Concurrent Authoring of Tutorials for AR Assembly Guidance
Pub Date: 2019-12-21, DOI: 10.20380/GI2020.43
Matt Whitlock, G. Fitzmaurice, Tovi Grossman, Justin Matejka
Augmented Reality (AR) can assist with physical tasks such as object assembly through the use of situated instructions. These instructions can be in the form of videos, pictures, text or guiding animations, where the most helpful media among these is highly dependent on both the user and the nature of the task. Our work supports the authoring of AR tutorials for assembly tasks with little overhead beyond simply performing the task itself. The presented system, AuthAR, reduces the time and effort required to build interactive AR tutorials by automatically generating key components of the AR tutorial while the author is assembling the physical pieces. Further, the system guides authors through the process of adding videos, pictures, text and animations to the tutorial. This concurrent assembly and tutorial generation approach allows for authoring of portable tutorials that fit the preferences of different end users.
Citations: 20
Biologically-Inspired Gameplay: Movement Algorithms for Artificially Intelligent (AI) Non-Player Characters (NPC)
Pub Date: 2019-06-01, DOI: 10.20380/GI2019.28
Rina R. Wehbe, G. Riberio, Kin Pon Fung, L. Nacke, E. Lank
In computer games, designers frequently leverage biologically-inspired movement algorithms such as flocking, particle swarm optimization, and firefly algorithms to give players the perception of intelligent behaviour of groups of enemy non-player characters (NPCs). While extensive effort has been expended designing these algorithms, a comparison between biologically inspired algorithms and naive directional algorithms (travel towards the opponent) has yet to be completed. In this paper, we compare the biological algorithms listed above against a naive control algorithm to assess the effect that these algorithms have on various measures of player experience. The results reveal that the Swarming algorithm, followed closely by Flocking, provides the best gaming experience. However, players noted that the firefly algorithm was most salient. An understanding of the strengths of different behavioural algorithms for NPCs will contribute to the design of algorithms that depict more intelligent crowd behaviour in gaming and computer simulations.
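For concreteness, the flocking condition can be expressed with the classic three steering rules plus a seek term toward the player; the weights, radius, and integration step below are illustrative tuning values, not those used in the study:

```python
import numpy as np

def flocking_step(pos, vel, player, radius=2.0,
                  w_sep=1.5, w_ali=1.0, w_coh=1.0, w_seek=0.5, dt=0.1):
    """One update of an NPC flock: separation, alignment, and cohesion
    rules, plus a pull toward the player's position."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d > 0) & (d < radius)                   # neighbors in range
        if nbr.any():
            sep = np.mean(pos[i] - pos[nbr], axis=0)   # steer apart
            ali = np.mean(vel[nbr], axis=0) - vel[i]   # match heading
            coh = np.mean(pos[nbr], axis=0) - pos[i]   # pull together
            acc[i] = w_sep * sep + w_ali * ali + w_coh * coh
        acc[i] += w_seek * (player - pos[i])           # chase the player
    vel = vel + dt * acc
    return pos + dt * vel, vel
```

The naive directional control condition is the degenerate case with only the seek term, which is what makes the comparison clean.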
Citations: 0
A Frequency Analysis and Dual Hierarchy for Efficient Rendering of Subsurface Scattering
Pub Date: 2019-06-01, DOI: 10.20380/GI2019.03
David Milaenen, Laurent Belcour, Jean-Philippe Guertin, T. Hachisuka, D. Nowrouzezahrai
BSSRDFs are commonly used to model subsurface light transport in highly scattering media such as skin and marble. Rendering with BSSRDFs requires an additional spatial integration, which can be significantly more expensive than surface-only rendering with BRDFs. We introduce a novel hierarchical rendering method that can mitigate this additional spatial integration cost. Our method has two key components: a novel frequency analysis of subsurface light transport, and a dual hierarchy over shading and illumination samples. Our frequency analysis predicts the spatial and angular variation of outgoing radiance due to a BSSRDF. We use this analysis to drive adaptive spatial BSSRDF integration with sparse image and illumination samples. We propose the use of a dual-tree structure that allows us to simultaneously traverse a tree of shade points (i.e., pixels) and a tree of object-space illumination samples. Our dual-tree approach generalizes existing single-tree accelerations. Both our frequency analysis and the dual-tree structure are compatible with most existing BSSRDF models, and we show that our method improves rendering times compared to the state-of-the-art method of Jensen and Buhler.
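For context, the Jensen and Buhler baseline that this method accelerates evaluates the classical dipole diffusion profile against every irradiance sample; the dual hierarchy's job is to prune this per-shade-point gather. A single-channel sketch of the brute-force version, with parameters following the standard dipole derivation (the gather function and its arguments are illustrative):

```python
import numpy as np

def dipole_Rd(r, sigma_a, sigma_s_p, eta=1.3):
    """Classical dipole diffuse reflectance R_d(r) (Jensen et al. 2001)."""
    sigma_t_p = sigma_a + sigma_s_p              # reduced extinction
    alpha_p = sigma_s_p / sigma_t_p              # reduced albedo
    sigma_tr = np.sqrt(3.0 * sigma_a * sigma_t_p)
    F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + F_dr) / (1.0 - F_dr)
    z_r = 1.0 / sigma_t_p                        # real source depth
    z_v = z_r * (1.0 + 4.0 * A / 3.0)            # mirrored virtual source
    d_r = np.sqrt(r * r + z_r * z_r)
    d_v = np.sqrt(r * r + z_v * z_v)

    def term(z, d):
        return z * (sigma_tr * d + 1.0) * np.exp(-sigma_tr * d) / d**3

    return alpha_p / (4.0 * np.pi) * (term(z_r, d_r) + term(z_v, d_v))

def diffuse_exitance(x, sample_pos, irradiance, area, **medium):
    """Brute-force O(n) gather over irradiance samples at shade point x."""
    r = np.linalg.norm(sample_pos - x, axis=1)
    return np.sum(dipole_Rd(r, **medium) * irradiance * area)
```

A dual hierarchy replaces this flat loop with a simultaneous traversal of a shade-point tree and a sample tree, descending only where the predicted frequency content demands it.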
Citations: 2