Evaluating the local visibility of geometric artifacts
Jinjiang Guo, V. Vidal, A. Baskurt, G. Lavoué
DOI: 10.1145/2804408.2804418

Several perceptually-based quality metrics have been introduced to predict the global impact of geometric artifacts on the visual appearance of a 3D model. They usually produce a single score that reflects the global level of annoyance caused by the distortions. However, besides this global information, it is also important in many applications to obtain information about the local visibility of the artifacts (i.e., a localized distortion measure). In this work we present a psychophysical experiment in which observers are asked to mark areas of 3D meshes that contain noticeable distortions. The collected per-vertex distortion maps are first used to illustrate several perceptual mechanisms of the human visual system. They then serve as ground truth to evaluate the performance of well-known geometric attributes and metrics for predicting the visibility of artifacts. Results show that curvature-based attributes demonstrate excellent performance. As expected, the Hausdorff distance is a poor predictor of perceived local distortion, while recent perceptually-based metrics provide the best results.
Integration and evaluation of emotion in an articulatory speech synthesis system
Martin Schorradt, K. Legde, Susana Castillo, D. Cunningham
DOI: 10.1145/2804408.2814183

We convey a tremendous amount of information vocally. In addition to the obvious exchange of semantic information, we unconsciously vary a number of acoustic properties of the speech wave to provide information about our emotions, thoughts, and intentions [Cahn 1990]. Advances in the understanding of human physiology, combined with increases in the computational power available in modern computers, have made simulation of the human vocal tract a realistic option for creating artificial speech. Such systems can, in principle, produce any sound that a human can make. Here we present two experiments examining the expression of emotion using prosody (i.e., speech melody) in human recordings and in an articulatory speech synthesis system.
Improving redirection with dynamic reorientations and gains
Ruimin Zhang, James W. Walker, S. Kuhl
DOI: 10.1145/2804408.2814180

In head-mounted display systems, the confined size of the tracked space prevents users from navigating virtual environments larger than the tracked physical space. Previous work suggests this constraint can be broken by asking users to back up or turn 180° whenever they encounter a wall in the real world [Williams et al. 2007]. In this work, we propose that the reorientation rate can be dynamically determined based on the user's instantaneous positional information and the shape of the navigable virtual space around the user. We conducted an experiment comparing our proposed dynamic reorientations with the previous Freeze-Turn reorientation. The results show that, with dynamic reorientations, participants walked a significantly longer distance between reorientations than with Freeze-Turn reorientations.
Myo arm: swinging to explore a VE
Morgan McCullough, Hong Xu, Joel Michelson, Matthew Jackoski, Wyatt Pease, William Cobb, William Kalescky, Joshua Ladd, B. Sanders
DOI: 10.1145/2804408.2804416

In this paper, we use an inexpensive wearable device called the Myo armband (199 USD) to implement a simple arm-swinging algorithm that allows a user to freely explore an HMD-based virtual environment. Using a spatial orientation task, we directly compared our Myo arm-swinging method to joystick locomotion and physical walking. We find that our arm-swinging method outperforms the simple joystick and that spatial orientation is comparable to physically walking on foot. Our arm-swinging method is inexpensive compared to tracking systems that permit exploration on foot, does not suffer from space constraints, and requires less physical energy than walking on foot.
Avatar preference selection in game design based on color theory
Andreas Lambrant, Francisco López Luro, V. Sundstedt
DOI: 10.1145/2804408.2804421

Selecting color schemes for game objects is an important task, and it is valuable for game designers to know which colors are preferred. Principles of color theory are important for selecting appropriate colors. This paper presents a perceptual experiment that evaluates some basic principles of color theory applied to game objects, to study whether particular combinations are preferred. An experiment was conducted with 15 participants, each of whom performed a two-alternative forced choice (2AFC) preference experiment using 236 pairs of images. The pairs were based on color harmonies derived from the colors red, green, and blue. The color harmonies were evaluated against each other and included analogous, complementary, split-complementary, triad, and warm and cool colors. High- and low-saturation conditions were also included. The color harmonies were applied to an existing game character (avatar) and a new object (cube) to study any potential differences in the results. The initial results show that some color harmonies, in particular triad and split-complementary, were generally preferred over others, meaning that it is important to take these aspects into account in game design. Additional results show that color harmonies based on green were not as popular as red- and blue-based color harmonies.
Perception of personality through eye gaze of realistic and cartoon models
K. Ruhland, Katja Zibrek, R. Mcdonnell
DOI: 10.1145/2804408.2804424

In this paper, we conducted a perceptual experiment to determine whether specific personality traits can be portrayed through eye and head movement in the absence of other facial animation cues. We created a collection of eye and head motions captured from three female actors portraying different personalities while listening to instructional videos. In a between-groups experiment, we tested the perception of personality on a realistic model and a cartoon stylisation, in order to determine whether stylisation can positively influence the perceived personality or whether personality is more easily identified on a realistic face. Our results verify that participants were able to differentiate between personality traits portrayed only through eye gaze, blinks, and head movement. The results also show that the perception of personality was robust across levels of character realism.
Directional thermal perception for wearable device
Kyungho Jeong, Youn-Ju Seong, Ju-No Chung, Yongsung Park, Woo-nyoung Lee
DOI: 10.1145/2804408.2814184

Recently, there has been increasing research interest in thermal feedback. This includes utilizing thermal systems as simple messaging tools, such as a system that uses temperature to convey the importance of messages [Wilson G. 2012] or that helps users navigate by presenting speed or distance through thermal stimuli on the user's arm [David and Henry 2013]. However, there has been no research on array-type thermal feedback. We have therefore focused on communication via array-type thermal patterns. As a first step, this poster examines 1) how well subjects can differentiate between the different spots at which thermal stimulation is presented, and 2) whether subjects can recognize directional thermal stimulation. To investigate these questions, we placed the thermal device where subjects felt it was most comfortable: the wrist. The device was also placed on the back of the neck to mimic a scarf or a Bluetooth headset.
Using full reference image quality metrics to detect game engine artefacts
Rafal Piórkowski, R. Mantiuk
DOI: 10.1145/2804408.2804414

Contemporary game engines offer outstanding graphics quality, but they are not free from visual artefacts. A typical example is aliasing, which, despite advanced antialiasing techniques, is still visible to game players. Significant deteriorations are the shadow acne and peter panning artefacts caused by deficiencies of the shadow mapping technique. Z-fighting, caused by drawing polygons in an incorrect order, also significantly degrades the quality of the graphics and makes gameplay difficult. These artefacts are laborious to eliminate algorithmically, because they either require computational effort disproportionate to the obtained results or their visibility depends on ambiguous parameters. In this work we propose a technique in which the visibility of deteriorations is perceptually assessed by human observers. We conduct subjective experiments in which people manually mark the visible local artefacts in screenshots from games. The detection maps, averaged over a number of observers, are then compared with results generated by image quality metrics (IQMs). We evaluate a simple mathematical metric, MSE, and the advanced IQMs S-CIELAB, SSIM, MSSIM, and HDR-VDP-2, comparing the agreement between the detection maps created by humans and those computed by the IQMs. The obtained results show that the SSIM and MSSIM metrics outperform the other techniques. However, the results are not indisputable: for small and scattered aliasing artefacts, the HDR-VDP-2 metric reports results most consistent with the average human observer. Notwithstanding, the results suggest that it is feasible to use IQM detection maps to tune and calibrate rendering algorithms directly, based on an analysis of the quality of the output images.
Walking on foot to explore a virtual environment with uneven terrain
Matthew Jackoski, William Kalescky, Joshua Ladd, William Cobb, B. Sanders
DOI: 10.1145/2804408.2814186

Immersive virtual environments (IVEs) provide an opportunity for humans to learn about and experience a place that, because of time, distance, danger, or expense, would not otherwise be available. Since navigation is the most common way users interact with 3D environments, much research has examined how well people navigate and learn the spatial layouts of IVEs. Ideally, a user's experience of an IVE would mimic a real-world experience. However, this does not happen in practice, so much work examines the differences between real-world experiences and similar virtual ones. Additionally, the navigation mechanisms used to explore an IVE do not allow exactly the same interactions as the real world. For example, it is difficult to replicate the physical aspects of climbing stairs or walking over rough terrain in an IVE. More work is needed to assess how well people interact and learn in different types of virtual worlds; for example, the ground plane of the virtual worlds that people explore will not necessarily be flat. The question then becomes: how well can a person maintain comparable spatial awareness in an environment with uneven terrain? Spatial awareness of an IVE is best when users physically explore it on foot. Thus, in this work we examine subjects' spatial orientation as they traverse hilly virtual terrain on foot while physically locomoting on a flat surface.
Sackcloth or silk?: the impact of appearance vs dynamics on the perception of animated cloth
Carlos Aliaga, C. O'Sullivan, D. Gutierrez, Rasmus Tamstorf
DOI: 10.1145/2804408.2804412

Physical simulation and rendering of cloth is widely used in 3D graphics applications to create realistic and compelling scenes. However, cloth animation can be slow to compute and difficult to specify. In this paper, we present a set of experiments in which we explore some factors that contribute to the perception of cloth, to determine how efficiency could be improved without sacrificing realism. Using real video footage of several fabrics covering a wide range of visual appearances and dynamic behaviors, and their simulated counterparts, we explore the interplay of visual appearance and dynamics in cloth animation.