
Proceedings of the 44th Graphics Interface Conference: Latest Publications

gMotion: A Spatio-Temporal Grammar for the Procedural Generation of Motion Graphics
Pub Date : 2018-06-01 DOI: 10.20380/GI2018.14
Edoardo Carra, Christian Santoni, F. Pellacini
Creating compelling 2D animations by hand, choreographing several groups of shapes, requires a large number of manual edits. We present a method to procedurally generate motion graphics with timeslice grammars. Timeslice grammars are to time what split grammars are to space. We use this grammar to formally model motion graphics, manipulating them in both their temporal and spatial components. We combine both aspects by representing animations as sets of affine transformations sampled uniformly in both space and time. Rules and operators in the grammar manipulate all spatio-temporal matrices as a whole, allowing us to expressively construct animations with few rules. The grammar animates shapes, represented as highly tessellated polygons, by applying the affine transforms to each shape vertex given the vertex position and the animation time. We introduce a small set of operators and show how we can produce 2D animations of geometric objects by combining the expressive power of the grammar model, the composability of the operators, and the capabilities that derive from using a unified spatio-temporal representation for animation data. Throughout the paper, we show how timeslice grammars can produce a wide variety of animations that would otherwise take artists hours of tedious, time-consuming work. In particular, in cases where shape changes are very common, our grammar can add motion detail to large collections of shapes, with greater control over per-shape animations and a compact rule structure.
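The core representation described above, affine transforms applied per vertex as a function of vertex position and animation time, can be pictured with a toy sketch. This is purely illustrative: the function names and the particular rotation-plus-translation motion are assumptions, not the paper's grammar or operators.

```python
import math

def animate_vertex(x, y, t):
    """Toy time-varying 2D affine transform: rotate a vertex about the
    origin while translating it along x, both parameterized by time t."""
    angle = 2 * math.pi * t          # one full turn over t in [0, 1]
    c, s = math.cos(angle), math.sin(angle)
    # rotation followed by translation (3 units along x over the clip)
    xr = c * x - s * y + 3.0 * t
    yr = s * x + c * y
    return xr, yr

def animate_polygon(vertices, t):
    """Apply the per-vertex transform to every vertex of a tessellated shape."""
    return [animate_vertex(x, y, t) for (x, y) in vertices]

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

Sampling `animate_polygon` at uniform time steps yields the spatio-temporal set of transformed shapes that grammar rules would then manipulate as a whole.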
Citations: 4
Mouse Cursor Movements towards Targets on the Same Screen Edge
Pub Date : 2018-06-01 DOI: 10.20380/GI2018.16
Shota Yamanaka
Buttons and icons on screen edges can be selected more quickly than those in the central area because the mouse cursor stops at the impenetrable screen border. However, such edge targets raise a concern: when pointing from one edge target to another on the same edge, users tend to move the mouse toward the outside of the screen, so the virtual travel distance of the cursor, including off-screen movement, becomes longer. In this study, we empirically confirmed that users exhibit such “pushing-edge” behavior, and that 3% of cursor movement is wasted off-screen. We also report how well current user-performance models (variations of Fitts' law) capture such pointing motions between targets on the same edge. The results show that the baseline model (the Shannon formulation) achieves a reasonably high fit (R2 = 0.959), and bivariate pointing models fit better still (R2 up to 0.966).
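The Shannon formulation of Fitts' law referenced above is straightforward to sketch. The intercept and slope defaults below are illustrative placeholders, not coefficients fitted in the study:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits:
    ID = log2(D / W + 1)."""
    return math.log2(distance / width + 1.0)

def predicted_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts' law MT = a + b * ID. The intercept a (seconds) and slope
    b (seconds per bit) are placeholder values for illustration."""
    return a + b * index_of_difficulty(distance, width)

def throughput(distance, width, movement_time):
    """Throughput in bits/s: index of difficulty over observed movement time."""
    return index_of_difficulty(distance, width) / movement_time
```

For example, a target 3 units away and 1 unit wide has ID = log2(4) = 2 bits; reaching it in one second gives a throughput of 2 bits/s.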
Citations: 4
EZCursorVR: 2D Selection with Virtual Reality Head-Mounted Displays
Pub Date : 2018-06-01 DOI: 10.20380/GI2018.17
Adrian Ramcharitar, Robert J. Teather
We evaluate a new selection technique for virtual reality (VR) systems presented on head-mounted displays. The technique, dubbed EZCursorVR, presents a 2D cursor that moves in a head-fixed plane, simulating 2D desktop-like cursor control in VR. The cursor can be controlled by any 2DOF input device, but also works with 3/6DOF devices through appropriate mappings. We conducted an experiment based on ISO 9241-9 comparing the effectiveness of EZCursorVR using a mouse, a joystick in both velocity-control and position-control mappings, a 2D-constrained ray-based technique, a standard 3D ray, and selection via head motion. Results indicate that the mouse offered the highest performance in terms of throughput, movement time, and error rate, while the position-control joystick performed worst. The 2D-constrained ray-casting technique proved an effective alternative to the mouse when performing selections with EZCursorVR, offering better performance than standard ray-based selection.
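The head-fixed 2D cursor can be pictured as a point moved across a bounded plane by any 2DOF input delta. A minimal sketch follows; the function name and the plane extents are assumptions for illustration, not the EZCursorVR implementation:

```python
def update_cursor(cursor, dx, dy, half_w=1.0, half_h=0.6):
    """Move a 2D cursor on a head-fixed plane by a 2DOF input delta,
    clamping it to the plane's extents. Because the plane is fixed
    relative to the head, the cursor stays in view as the user looks
    around; a 3/6DOF device would first be mapped down to (dx, dy)."""
    x = max(-half_w, min(half_w, cursor[0] + dx))
    y = max(-half_h, min(half_h, cursor[1] + dy))
    return (x, y)
```

Selection then amounts to casting a ray from the eye through the cursor's point on the plane into the scene.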
Citations: 30
Supporting Chinese Character Educational Interfaces with Richer Assessment Feedback through Sketch Recognition
Pub Date : 2018-06-01 DOI: 10.20380/GI2018.08
Tianshu Chu, Paul Taele, T. Hammond
Students of Chinese as a Second Language (CSL) with primarily English fluency often struggle with the language's complex character set. Conventional classroom pedagogy and relevant educational applications have focused on providing valuable assessment feedback to address their challenges, but rely on direct instructor observation and provide constrained assessment, respectively. We propose improved sketch recognition techniques to better support Chinese character educational interfaces' real-time assessment of novice CSL students' character writing. Based on successful assessment feedback approaches from existing educational resources, we developed techniques for supporting richer automated assessment, so that students may be better informed of their writing performance outside the classroom. In our evaluations, our techniques achieved recognition rates of 91% and 85% on expert and novice Chinese character handwriting data, respectively, a greater than 90% recognition rate on written technique mistakes, and an 80.4% f-measure on distinguishing between expert and novice handwriting samples, without sacrificing students' natural writing input of Chinese characters.
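The f-measure reported for expert/novice discrimination is the standard harmonic mean of precision and recall. As a quick reference (the counts in the usage note are made up, not the paper's data):

```python
def f_measure(tp, fp, fn):
    """Balanced F1 score from true positives, false positives, and
    false negatives of a binary classifier."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For instance, 8 true positives with 2 false positives and 2 false negatives gives precision = recall = 0.8, hence an f-measure of 0.8.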
Citations: 6
Viewpoint Snapping to Reduce Cybersickness in Virtual Reality
Pub Date : 2018-06-01 DOI: 10.20380/GI2018.23
Yasin Farmani, Robert J. Teather
Cybersickness in virtual reality (VR) is an ongoing problem, despite recent advances in technology. In this paper, we propose a method for reducing the likelihood of cybersickness onset when using stationary (e.g., seated) VR setups. Our approach relies on reducing optic flow via inconsistent displacement: the viewpoint is “snapped” during fast movement that would otherwise induce cybersickness. We compared our technique, which we call viewpoint snapping, to a control condition without viewpoint snapping, in a custom-developed VR first-person shooter game. We measured participant cybersickness levels via the Simulator Sickness Questionnaire (SSQ), and user-reported levels of nausea, presence, and objective error rate. Overall, our results indicate that viewpoint snapping significantly reduced SSQ-reported cybersickness levels by about 40% and reduced participant nausea levels, especially with longer VR exposure. Presence levels and error rate were not significantly different between viewpoint snapping and the control condition.
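The snapping idea, replacing smooth fast rotation with discrete jumps so the eye never sees the intermediate optic flow, can be sketched as follows. The speed threshold and snap increment below are illustrative values, not the paper's parameters:

```python
def snapped_yaw(current_yaw, target_yaw, angular_speed,
                speed_threshold=90.0, snap_step=22.5):
    """Sketch of viewpoint snapping for yaw (degrees). Slow rotation
    passes through unchanged; fast rotation is quantized into discrete
    jumps of snap_step degrees, suppressing smooth optic flow."""
    if angular_speed < speed_threshold:
        return target_yaw                      # smooth motion unchanged
    steps = round((target_yaw - current_yaw) / snap_step)
    return current_yaw + steps * snap_step     # jump in discrete increments
```

Called once per frame with the head's commanded orientation, this leaves normal looking around intact while turning rapid sweeps into a few instantaneous jumps.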
Citations: 51
It's the Gesture That (re)Counts: Annotating While Running to Recall Affective Experience
Pub Date : 2018-06-01 DOI: 10.20380/GI2018.12
Felwah Alqahtani, Derek F. Reilly
We present results from a study exploring whether gestural annotations of felt emotion, presented on a map-based visualization, can support recall of affective experience during recreational runs. We compare gestural annotations with audio and video notes and a “mental note” baseline. In our study, 20 runners were asked to record their emotional state at regular intervals while running a familiar route. Each runner used one of the four methods to capture emotion over four separate runs. Five days after the last run, runners used an interactive map-based visualization to review and recall their running experiences. Results indicate that gestural annotation promoted recall of affective experience more effectively than the baseline condition, as measured by confidence in recall and detail provided. Gestural annotation was also comparable to video and audio annotation in terms of recollection confidence and detail. Audio annotation supported recall primarily through the runner's spoken annotation, though background sound was sometimes used. Video annotation yielded the most detail, much of it directly related to visual cues in the video; however, using video annotations required runners to stop during their runs. Given these results, we propose that background logging of ambient sounds and video may supplement gestural annotation.
Citations: 0
A conversation with CHCCS 2018 achievement award winner Alla Sheffer
Pub Date : 2018-06-01 DOI: 10.20380/GI2018.02
A. Sheffer
A 2018 CHCCS Achievement Award from the Canadian Human-Computer Communications Society is presented to Dr. Alla Sheffer for her numerous highly impactful contributions to the field of computer graphics research. Her diverse research addresses geometric modeling and processing problems both in traditional computer graphics settings and in multiple other application domains, including product design, mechanical and civil engineering, and fashion design. CHCCS invites a publication by the award winner to be included in the proceedings, and this year we continue the tradition of an interview format rather than a formal paper. This permits a casual discussion of the research areas, insights, and contributions of the award winner. What follows is an edited transcript of a conversation between Alla Sheffer and Paul Kry that took place on 13 March, 2018, via Skype.
Citations: 0
Adding Motion Blur to Still Images
Pub Date : 2018-06-01 DOI: 10.20380/GI2018.15
Xuejiao Luo, Nestor Z. Salamon, E. Eisemann
Motion blur appears in images as a visible trail along the motion path of the recorded object. It plays an important role in photography, conveying a sense of motion, but can be difficult to capture as the photographer intends. One solution is to add motion blur as a post-process, but current solutions involve much manual intervention and can lead to artifacts that incorrectly mix moving and static objects. In this paper, we propose a novel method to add motion blur to a single image, generating the illusion of photographed motion. Relying on minimal user input, a filtering process is employed to produce a virtual motion effect. It carefully treats object boundaries to avoid artifacts produced by standard filtering methods. We illustrate the effectiveness of our solution with various complex examples, including multiple objects, reflections, and high-intensity light sources. Our post-processing solution can achieve a convincing outcome, making it an alternative to attempting to capture the intended real-world motion blur.
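The directional filtering such methods build on can be illustrated with a minimal horizontal box filter over a grayscale image; the paper's contribution lies in additionally handling object boundaries, which this toy version deliberately ignores:

```python
def horizontal_motion_blur(image, length):
    """Naive directional blur: average each pixel with its (length - 1)
    neighbors along a horizontal motion path, clamping at the image edge.
    `image` is a list of rows of grayscale values in [0, 1]."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # sample forward along the (horizontal) motion direction
            samples = [image[y][min(w - 1, x + k)] for k in range(length)]
            out[y][x] = sum(samples) / length
    return out
```

A bright pixel smears into a trail of `length` samples; without boundary treatment, the same smear bleeds across moving/static object borders, which is exactly the artifact the paper's method is designed to avoid.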
Citations: 4
Computer-Aided Imagery in Sport and Exercise: A Case Study of Indoor Wall Climbing
Pub Date : 2018-06-01 DOI: 10.20380/GI2018.13
Kourosh Naderi, Jari Takatalo, J. Lipsanen, Perttu Hämäläinen
Movement artificial intelligence for simulated humanoid characters has been advancing rapidly through the joint efforts of the computer animation, robotics, and machine learning communities. However, practical real-life applications are still rare. We propose applying the technology to mental practice in sports, which we denote as computer-aided imagery (CAI). Imagery, i.e., rehearsing a task in one's mind, is a difficult cognitive skill that requires accurate mental simulation; we present a novel interactive computational sport simulation for exploring and planning movements and strategies. We utilize a fully physically-based avatar with motion optimization that is not limited by a movement dataset, and customize the avatar with computer vision measurements of the user's body. We evaluate the approach with 20 users preparing for real-life wall climbing. Our results indicate that the approach is promising and can affect body awareness and feelings of competence. However, more research is needed to achieve accurate enough simulation for both gross-motor body movements and fine-motor control of the myriad ways in which climbers can grasp climbing holds or shapes.
引用次数: 4
Control and Personalization: Younger versus Older Users' Experience of Notifications
Pub Date : 2018-06-01 DOI: 10.20380/GI2018.19
Izabelle Janzen, F. Vitale, Joanna McGrenere
With the increasing ubiquity of mobile technology, users are more connected than ever. Notifications facilitate prompt connections to friends, family and work, but also distract us from what we're doing. We investigated how older and younger users thought about, interacted with, and personalized their notifications. We took a qualitative approach, conducting semi-structured interviews primed through a notification categorization activity. We interviewed 20 participants with equal numbers of younger (19-30 years old) and older (48-74) adults. We extend and refine previous qualitative work and show that while enjoyment plays a minor role in the experience of notifications, urgency, directness and social closeness are far more important factors, though context remains a nuanced issue. We found that older users especially desired a sense of control over their notifications that was difficult to achieve with current technology. Lastly, we provide information about what “categories” of notifications users perceive and expand how that can be used in new personalization systems. These results lead us to advocate a number of fundamental changes to how notifications are personalized.
{"title":"Control and Personalization:Younger versus Older Users' Experience of Notifications","authors":"Izabelle Janzen, F. Vitale, Joanna McGrenere","doi":"10.20380/GI2018.19","DOIUrl":"https://doi.org/10.20380/GI2018.19","url":null,"abstract":"With the increasing ubiquity of mobile technology, users are more connected than ever. Notifications facilitate prompt connections to friends, family and work, but also distract us from what we're doing. We investigated how older and younger users thought about, interacted with, and personalized their notifications. We took a qualitative approach, conducting semi-structured interviews primed through a notification categorization activity. We interviewed 20 participants with equal numbers of younger (19-30 years old) and older (48-74) adults. We extend and refine previous qualitative work and show that while enjoyment plays a minor role in the experience of notifications, urgency, directness and social closeness are far more important factors, though context remains a nuanced issue. We found that older users especially desired a sense of control over their notifications that was difficult to achieve with current technology. Lastly, we provide information about what “categories” of notifications users perceive and expand how that can be used in new personalization systems. 
These results lead us to advocate a number of fundamental changes to how notifications are personalized.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131202979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1