Three-dimensional typography (3D typography) refers to the arrangement of text in three-dimensional space. It brings letters to life and leaves a strong, lasting impression on the viewer. Today, 3D typography plays an important role in daily life beyond artistic design: it appears frequently in 3D virtual spaces such as movies and games, as well as in signboard and furniture design. Despite these strengths, most 3D typography is produced by simply extruding flat 2D typography. Compared with 2D typography, 3D typography is far more difficult to create quickly because of its higher complexity.
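The "simple extrusion" that the abstract cites as the common baseline can be illustrated with a minimal sketch: a flat glyph outline (here a hypothetical hard-coded polygon rather than a real font outline) is swept along the z-axis into a prism mesh. This only illustrates that baseline technique, not the automatic generation method proposed in the poster.

```python
# Minimal sketch of the baseline "simple extrusion" of a 2D glyph outline.
# The outline here is a hypothetical hard-coded polygon; a real system would
# load it from a font file.

def extrude_outline(outline, depth):
    """Extrude a closed 2D polygon (list of (x, y)) into a 3D prism.

    Returns (vertices, side_faces): front/back copies of the outline plus
    quad faces connecting them. Cap triangulation is omitted for brevity.
    """
    n = len(outline)
    front = [(x, y, 0.0) for x, y in outline]      # z = 0 plane
    back = [(x, y, depth) for x, y in outline]     # z = depth plane
    vertices = front + back
    # Each side face joins edge (i, i+1) on the front to the same edge on the back.
    side_faces = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
    return vertices, side_faces

if __name__ == "__main__":
    # A crude block letter "I" as a closed outline (illustrative example data).
    letter_i = [(0, 0), (3, 0), (3, 1), (2, 1), (2, 4), (3, 4), (3, 5),
                (0, 5), (0, 4), (1, 4), (1, 1), (0, 1)]
    verts, faces = extrude_outline(letter_i, depth=1.0)
    print(len(verts), "vertices,", len(faces), "side faces")
```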
Suzi Kim, Sunghee Choi. "Automatic generation of 3D typography." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945099
For highly skilled players, an easy game can become boring, while for less skilled players, a difficult game can become frustrating. The goal of this research is to offer players a personalized experience that adapts to their performance and level of attention. We created a simple side-scrolling 2D platform game using Procedural Content Generation, Dynamic Difficulty Adjustment techniques, and brain-computer data obtained from players in real time with an electroencephalography (EEG) device. We conducted a series of experiments with different players, and the results confirm that our method adjusts each level according to performance and attention.
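As a rough illustration of how Dynamic Difficulty Adjustment might combine in-game performance with an EEG attention reading, the sketch below blends the two signals into a single difficulty value for the next procedurally generated level. The equal weighting, value ranges, and function names are assumptions for illustration, not the authors' actual formulation.

```python
# Hedged sketch: blend player performance and EEG attention into a difficulty
# value in [0, 1] that a procedural level generator could consume.
# The 50/50 weighting, adjustment rate, and clamping are illustrative assumptions.

def next_difficulty(prev_difficulty, performance, attention, rate=0.2):
    """performance, attention in [0, 1]; returns an adjusted difficulty in [0, 1].

    High performance and high attention nudge difficulty up; low values nudge
    it down, so skilled, engaged players are challenged and struggling players
    are relieved.
    """
    target = 0.5 * performance + 0.5 * attention
    new = prev_difficulty + rate * (target - prev_difficulty)
    return max(0.0, min(1.0, new))

if __name__ == "__main__":
    d = 0.5
    # Simulated readings: a player doing well but losing attention over time.
    for perf, att in [(0.8, 0.9), (0.85, 0.6), (0.9, 0.3)]:
        d = next_difficulty(d, perf, att)
        print(f"performance={perf:.2f} attention={att:.2f} -> difficulty={d:.2f}")
```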
Henry Fernández, Koji Mikami, K. Kondo. "Adaptable game experience through procedural content generation and brain computer interface." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945124
F. Caputo, Victoria McGowen, Joe Geigel, Steven Cerqueira, Q. Williams, M. Schweppe, Zhongyuan Fa, Anastasia Pembrook, Heather Roffe
Farewell to Dawn is a mixed reality dance performance which explores two dancers' voyage from a physical space to a virtual stage and back, as the day passes before them.
F. Caputo, Victoria McGowen, Joe Geigel, Steven Cerqueira, Q. Williams, M. Schweppe, Zhongyuan Fa, Anastasia Pembrook, Heather Roffe. "Farewell to dawn: a mixed reality dance performance in a virtual space." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945127
Simone Barbieri, Nicola Garau, Wenyu Hu, Zhidong Xiao, Xiaosong Yang
Sketching, the most intuitive and powerful 2D design method, has been used by artists for decades. However, it is not fully integrated into the current 3D animation pipeline because of the difficulty of interpreting 2D line drawings in 3D. Several successful approaches to character posing from sketches have been presented in recent years, such as the Line of Action [Guay et al. 2013] and Sketch Abstractions [Hahn et al. 2015], but both methods require animators to provide some manual initial setup. In this paper, we propose a new sketch-based character posing system that is more flexible and efficient, requiring less input from the user than the system of [Hahn et al. 2015]. The character can be posed easily whether the sketch represents a skeleton structure or shape contours.
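One common way to interpret a sketched stroke as a pose, in the spirit of line-of-action fitting [Guay et al. 2013] and not necessarily what this system does, is to resample the stroke by arc length and place the joints of a bone chain at positions that preserve the bone-length proportions. The sketch below shows only that resampling step, with made-up stroke data.

```python
# Hedged sketch: place the joints of a 2D bone chain along a sketched stroke
# by arc-length resampling, in the spirit of line-of-action posing.
# This is an illustration only, not the posing system described in the poster.

import math

def resample_on_stroke(stroke, bone_lengths):
    """stroke: list of (x, y) points; bone_lengths: lengths of consecutive bones.

    Returns joint positions whose spacing along the stroke is proportional to
    the bone lengths (first joint at the stroke start).
    """
    # Cumulative arc length of the stroke.
    seg = [math.dist(stroke[i], stroke[i + 1]) for i in range(len(stroke) - 1)]
    total = sum(seg)
    scale = total / sum(bone_lengths)
    targets = [0.0]
    for b in bone_lengths:
        targets.append(targets[-1] + b * scale)

    joints, acc, i = [], 0.0, 0
    for t in targets:
        while i < len(seg) and acc + seg[i] < t:
            acc += seg[i]
            i += 1
        if i >= len(seg):
            joints.append(stroke[-1])
            continue
        u = (t - acc) / seg[i] if seg[i] > 0 else 0.0
        x = stroke[i][0] + u * (stroke[i + 1][0] - stroke[i][0])
        y = stroke[i][1] + u * (stroke[i + 1][1] - stroke[i][1])
        joints.append((x, y))
    return joints

if __name__ == "__main__":
    stroke = [(0, 0), (1, 0.5), (2, 1.5), (3, 3.0)]      # made-up sketched stroke
    print(resample_on_stroke(stroke, bone_lengths=[1.0, 1.0, 2.0]))
```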
Simone Barbieri, Nicola Garau, Wenyu Hu, Zhidong Xiao, Xiaosong Yang. "Enhancing character posing by a sketch-based interaction." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945134
We propose a framework for using human acting as input for the animation of non-humanoid creatures: captured motion is classified using machine learning techniques, and a combination of preexisting clips and motion retargeting is used to synthesize new motions. This should lead to a broader use of motion capture.
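A minimal sketch of the "classify captured motion" step, assuming a nearest-neighbor classifier over fixed-length pose-feature windows; the feature choice, classifier, and toy data are assumptions, since the abstract only states that machine learning techniques are used.

```python
# Hedged sketch: label a window of captured pose features by nearest neighbor
# against a small library of labeled example windows. Feature vectors here are
# made-up joint-angle lists; the actual features and classifier are not
# specified in the poster.

import math

def nearest_label(window, library):
    """window: flat list of floats; library: list of (label, flat list of floats)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(library, key=lambda item: dist(window, item[1]))[0]

if __name__ == "__main__":
    library = [
        ("walk",  [0.1, 0.4, 0.1, 0.4]),    # illustrative joint-angle windows
        ("crawl", [0.9, 0.2, 0.8, 0.3]),
        ("idle",  [0.0, 0.0, 0.0, 0.0]),
    ]
    captured = [0.12, 0.38, 0.11, 0.42]      # captured actor window
    label = nearest_label(captured, library)
    print("classified as:", label)           # would select the creature clip to retarget
```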
Gustavo Eggert Boehs, M. Vieira. "Non-humanoid creature performance from human acting." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945080
In recent years, advances in computer graphics have made near-realistic expression possible. Secular (long-term) change and weathering are important factors in creating realistic computer graphics images. Metal rust is an important secular change, and there is a considerable body of research on rust [Kanazawa et al. 2015]. Although rust formation varies with the coating of rainwater or seawater, their dissolved oxygen content, and the effects of flowing water, to our knowledge no rust-formation method has considered both the object's geometry and the chemical reaction processes. Our proposed method computes water flowing over 3D models to reproduce corrosion that advances from the surface regions coated with water. Our corrosion simulation model takes into account the quantity of coating water and the chemical reaction processes. As a result, we confirm that images close to real-world rust can be obtained.
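A rough per-vertex sketch of the kind of update this corrosion model implies: rust accumulates where water coats the surface, scaled by the water quantity and a reaction-rate constant standing in for the chemistry. The rate law, constants, and saturation behaviour here are illustrative assumptions, not the authors' equations.

```python
# Hedged sketch: accumulate rust per vertex in proportion to the amount of
# coating water and an illustrative reaction rate. The real model also moves
# water over the mesh and models the chemical reactions in more detail.

def step_rust(rust, water, reaction_rate=0.05, dt=1.0):
    """rust, water: per-vertex lists of values in [0, 1]. Returns updated rust."""
    out = []
    for r, w in zip(rust, water):
        # Reaction proceeds only where water coats the surface; rust saturates at 1.
        r_new = r + reaction_rate * w * (1.0 - r) * dt
        out.append(min(1.0, r_new))
    return out

if __name__ == "__main__":
    water = [0.0, 0.2, 0.9, 1.0]     # e.g. more water pooling toward the last vertices
    rust = [0.0, 0.0, 0.0, 0.0]
    for _ in range(10):              # ten simulation steps
        rust = step_rust(rust, water)
    print([round(r, 3) for r in rust])
```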
Tomokazu Ishikawa, Kousaku Kamata, Yuriko Takeshima, Masanori Kakimoto. "Rusting and corroding simulation taking into account chemical reaction processes." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945143
Yukari Konishi, Nobuhisa Hanamitsu, K. Minamizawa, Ayahiko Sato, Tetsuya Mizuguchi
The Synesthesia Suit provides an immersive, embodied experience in a virtual reality environment through vibro-tactile sensations over the entire body. Rather than the simple vibration of a traditional game controller, each vibro-tactile actuator delivers a haptic sensation designed with the haptic design method we developed in the TECHTILE [Minamizawa et al. 2012] technology. In haptics research using multi-channel vibro-tactile feedback, Surround Haptics [Israr et al. 2012] proposed moving tactile strokes using multiple vibrators spaced across a gaming chair; the same group later proposed Po2 [Israr et al. 2015], which creates illusory tactile sensations for gesture-based games by delivering vibrations to the hand based on a psychophysical study.
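TECHTILE-style haptic design typically records a vibration signal and replays it through actuators; under that assumption, a minimal sketch of turning a recorded waveform into per-frame drive amplitudes for several body-mounted actuators might look like the following. The envelope window and the uniform routing to all actuators are illustrative choices, not the suit's actual haptic design.

```python
# Hedged sketch: convert a recorded vibration waveform into per-frame drive
# amplitudes for several body-mounted actuators. Routing the same envelope to
# every actuator and the 10 ms window are illustrative assumptions.

import math

def envelope(samples, sample_rate, window_s=0.01):
    """Return the mean absolute amplitude of each window of the waveform."""
    n = max(1, int(sample_rate * window_s))
    return [sum(abs(s) for s in samples[i:i + n]) / len(samples[i:i + n])
            for i in range(0, len(samples), n)]

def route_to_actuators(env, num_actuators):
    """Duplicate the envelope for each actuator channel (uniform routing)."""
    return [list(env) for _ in range(num_actuators)]

if __name__ == "__main__":
    sr = 1000
    # A made-up 0.05 s recorded burst: a decaying 80 Hz vibration.
    wave = [math.sin(2 * math.pi * 80 * t / sr) * math.exp(-t / 20) for t in range(50)]
    env = envelope(wave, sr)
    channels = route_to_actuators(env, num_actuators=4)
    print(len(channels), "channels x", len(env), "frames:", [round(v, 2) for v in env])
```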
Yukari Konishi, Nobuhisa Hanamitsu, K. Minamizawa, Ayahiko Sato, Tetsuya Mizuguchi. "Synesthesia suit: the full body immersive experience." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945149
Takeshi Oozu, Aki Yamada, Yuki Enzaki, Hiroo Iwata
A furniture-device is a device that has the appearance of furniture together with physical input and output functions. The Escaping Chair is a furniture-device that interacts physically and dynamically with a user, letting them perceive the intent of their own action and personify the furniture. The Escaping Chair interacts with bystanders by trying to move away from nearby people. In doing so, the device tries to make the person fail to sit on it and stimulates their awareness of the act of sitting. The idea of a furniture-shaped device extends one of my previous artworks, which used furniture as an input mechanism. I exhibited the chair and observed the interactions it produced with exhibition visitors. As intended, it succeeded in making people wonder during the interaction and even chase the chair, which indicates a new capability of the device. There were some challenges regarding load tolerance, detection latency, and detection failure, for which I propose improvements.
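The "move away from nearby people" behaviour can be sketched as a simple repulsion rule: if a detected person is within a threshold distance, the chair drives in the opposite direction. The threshold and speed below are hypothetical parameters, not the exhibited device's tuning.

```python
# Hedged sketch of the escape behaviour: when a person comes within a threshold
# distance, drive away from them; otherwise stay still. Threshold and speed are
# hypothetical parameters.

import math

def escape_velocity(chair_pos, person_pos, threshold=1.5, speed=0.5):
    """Return an (x, y) velocity pointing away from the person, or (0, 0)."""
    dx = chair_pos[0] - person_pos[0]
    dy = chair_pos[1] - person_pos[1]
    dist = math.hypot(dx, dy)
    if dist >= threshold or dist == 0.0:
        return (0.0, 0.0)
    return (speed * dx / dist, speed * dy / dist)

if __name__ == "__main__":
    print(escape_velocity((0.0, 0.0), (1.0, 0.0)))   # person close -> move in -x
    print(escape_velocity((0.0, 0.0), (3.0, 0.0)))   # person far -> stay put
```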
Takeshi Oozu, Aki Yamada, Yuki Enzaki, Hiroo Iwata. "Escaping chair: furniture-shaped device art." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945086
This paper presents a system for visualizing the results of loitering discovery in surveillance videos. Since loitering is a suspicious behaviour that often leads to abnormal situations such as pickpocketing, its analysis attracts attention from researchers [Bird et al. 2005; Ke et al. 2013; A. et al. 2015]. Most of this work focuses on detecting or identifying loitering individuals through human tracking techniques. The robust approach of [Nam 2015] is one of the state-of-the-art methods for detecting loitering persons in crowded scenes, using pedestrian tracking based on spatio-temporal changes. However, such tracking-based methods are quite time-consuming, so it is hard to apply loitering detection across multiple cameras over long periods or to present the discovered loiterers at a glance. To solve this problem, we propose a system named VisLoiter (Figure 1) that enables efficient loitering discovery based on face features extracted from long-duration videos across multiple cameras, instead of relying on tracking. Taking advantage of this efficiency, VisLoiter visualizes loiterers at a glance. The visualization consists of three display components: (1) the appearance patterns of loitering individuals, (2) a frequency ranking of loiterers' faces, and (3) lightweight playback of the video clips in which a discovered loiterer frequently appears (see Figure 1 (b) and (c)).
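A minimal sketch of the frequency-ranking idea behind display component (2): group face detections by embedding similarity and rank the groups by how often they appear across cameras. The greedy grouping, the similarity threshold, and the toy embeddings are illustrative assumptions, not VisLoiter's actual matching method.

```python
# Hedged sketch: greedily group face detections by embedding similarity and
# rank the groups by appearance count across cameras. Threshold, grouping
# strategy, and toy embeddings are illustrative only.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_faces(detections, threshold=0.9):
    """detections: list of (camera_id, embedding). Returns groups sorted by count."""
    groups = []   # each group: {"rep": representative embedding, "count": n, "cams": set}
    for cam, emb in detections:
        for g in groups:
            if cosine(emb, g["rep"]) >= threshold:
                g["count"] += 1
                g["cams"].add(cam)
                break
        else:
            groups.append({"rep": emb, "count": 1, "cams": {cam}})
    return sorted(groups, key=lambda g: g["count"], reverse=True)

if __name__ == "__main__":
    detections = [                               # toy 3-D "embeddings"
        ("cam1", [0.9, 0.1, 0.0]), ("cam2", [0.88, 0.12, 0.01]),
        ("cam1", [0.0, 1.0, 0.0]), ("cam2", [0.91, 0.09, 0.0]),
    ]
    for g in rank_faces(detections):
        print(f"seen {g['count']} times on cameras {sorted(g['cams'])}")
```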
Jianquan Liu, Shoji Nishimura, Takuya Araki. "VisLoiter: a system to visualize loiterers discovered from surveillance videos." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945125
Dynamic Frame Rate (DFR) is the change of a movie sequence's frame rate in real time while the sequence is playing. For most of the past century, since the introduction of sound in films, film frame rates have been standardized at 24 frames per second despite technological advancement [Salmon et al. 2011]. In the past decade, the spatial resolution of display systems has kept increasing while the temporal resolution, the frame rate, has not changed. Because of this, researchers and filmmakers stress that motion judder and blur are much more apparent, and they propose that high frame rates will solve the issue [Emoto et al. 2014; Turnock 2013]. Some industry experts and critics, however, oppose the use of high frame rates [Wilcox 2015]. Despite all the research and experiments with high frame rates, the idea of using a dynamic frame rate in digital cinema has not been explored in depth, so there is very limited information on how people perceive DFR and how it actually works. Understanding DFR and how viewers perceive changes in frame rate will help us adopt new techniques in the creation of cinema: we can use a high frame rate for sequences that benefit from it while keeping the remaining sequences at the standard frame rate. This thesis aims to understand the basics of DFR, how different implementations of DFR change viewer perception, and how people perceive a change of frame rate within a displayed animated movie sequence.
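To make the idea of dynamic frame rate concrete, a playback loop can hold a per-segment frame rate and wait for the matching frame duration while each segment plays; a minimal sketch follows. The segment list and rates are made up for illustration and are not from the thesis.

```python
# Hedged sketch: play a movie as a list of (num_frames, fps) segments, holding
# each frame for 1/fps seconds so the frame rate changes dynamically between
# segments. Segment data are made up; a real player would also present frames.

import time

def play(segments, sleep=time.sleep):
    for num_frames, fps in segments:
        frame_duration = 1.0 / fps
        for _ in range(num_frames):
            # A real player would display the next frame here.
            sleep(frame_duration)
        print(f"played {num_frames} frames at {fps} fps "
              f"({num_frames * frame_duration:.2f} s)")

if __name__ == "__main__":
    # Dialogue at the standard 24 fps, a fast action beat at 60 fps, then back.
    segments = [(24, 24), (120, 60), (24, 24)]
    play(segments, sleep=lambda _: None)   # pass a no-op sleep for a dry run
```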
K. Chuang. "Dynamic frame rate: a study on viewer perception of changes in frame rate within an animated movie sequence." ACM SIGGRAPH 2016 Posters, July 24, 2016. https://doi.org/10.1145/2945078.2945159