Remembering the physical as virtual: source confusion and physical interaction in augmented reality
Ajoy S. Fernandes, R. Wang, D. Simons
https://doi.org/10.1145/2804408.2804423

This study explored whether people misremember having seen a physical object when they had actually viewed a virtual one in augmented reality (and vice versa). Participants viewed uniquely shaped objects in either a virtual form or a physical, 3D-printed form. A camera mounted behind a computer monitor showed either the physical object or an augmented-reality version of it on the display. After viewing the full set of objects, participants viewed photographs of each object (taken from the physical version) and judged whether they had originally seen it as a physical or a virtual object. On average, participants correctly identified the object format for 60% of the photographs. When participants were allowed to manipulate the physical or virtual object (using a Leap Motion Controller), accuracy increased to 73%. In both cases, participants were biased to remember the objects as having been virtual.
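The reported accuracy and the "virtual" bias can be summarized with standard signal-detection measures for source memory. Below is a minimal sketch, not from the paper, that computes accuracy and the criterion c from trial data; the data layout and the log-linear correction are assumptions.

```python
import numpy as np
from scipy.stats import norm

def source_memory_stats(was_physical, said_physical):
    """Accuracy and SDT criterion c for physical-vs-virtual source judgments.

    was_physical, said_physical: boolean arrays, one entry per test photograph.
    Negative c indicates a liberal "physical" bias; positive c indicates a
    bias toward answering "virtual", the direction reported in the paper.
    """
    was_physical = np.asarray(was_physical, dtype=bool)
    said_physical = np.asarray(said_physical, dtype=bool)

    accuracy = np.mean(was_physical == said_physical)

    # Hit: physical item called physical; false alarm: virtual item called physical.
    hits = np.mean(said_physical[was_physical])
    fas = np.mean(said_physical[~was_physical])

    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    n_p, n_v = was_physical.sum(), (~was_physical).sum()
    hits = (hits * n_p + 0.5) / (n_p + 1)
    fas = (fas * n_v + 0.5) / (n_v + 1)

    c = -0.5 * (norm.ppf(hits) + norm.ppf(fas))  # criterion (response bias)
    return accuracy, c

# Hypothetical data: 10 physical and 10 virtual items; most errors call a
# physical item "virtual", so c comes out positive.
was_phys = np.array([True] * 10 + [False] * 10)
said_phys = np.array([True] * 6 + [False] * 4 + [False] * 9 + [True] * 1)
print(source_memory_stats(was_phys, said_phys))
```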
{"title":"Remembering the physical as virtual: source confusion and physical interaction in augmented reality","authors":"Ajoy S. Fernandes, R. Wang, D. Simons","doi":"10.1145/2804408.2804423","DOIUrl":"https://doi.org/10.1145/2804408.2804423","url":null,"abstract":"This study explored whether people misremember having seen a physical object when they actually had viewed a virtual one in augmented reality (and vice versa). Participants viewed uniquely shaped objects in a virtual form or a physical, 3D-printed form. A camera mounted behind a computer monitor showed either the physical object or an augmented reality version of it on the display. After viewing the full set of objects, participants viewed photographs of each object (taken from the physical version) and judged whether they had originally seen it as a physical or virtual object. On average, participants correctly identified the object format for 60% of the photographs. When participants were allowed to manipulate the physical or virtual object (using a Leap Motion Controller), accuracy increased to 73%. In both cases, participants were biased to remember the objects as having been virtual.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128577907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integration and evaluation of emotion in an articulatory speech synthesis system
Martin Schorradt, K. Legde, Susana Castillo, D. Cunningham
https://doi.org/10.1145/2804408.2814183
We convey a tremendous amount of information vocally. In addition to the obvious exchange of semantic information, we unconsciously vary a number of acoustic properties of the speech wave to provide information about our emotions, thoughts, and intentions [Cahn 1990]. Advances in the understanding of human physiology, combined with increases in the computational power of modern computers, have made simulation of the human vocal tract a realistic option for creating artificial speech. Such systems can, in principle, produce any sound that a human can make. Here we present two experiments examining the expression of emotion using prosody (i.e., speech melody) in human recordings and in an articulatory speech synthesis system.
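The abstract does not give the synthesis rules, but rule-based emotional prosody in the tradition of Cahn [1990] typically shifts the mean and scales the range of the fundamental-frequency (F0) contour per emotion. A hedged sketch of that idea follows; the specific shift and scale values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Illustrative prosody rules: (F0 mean shift in semitones, F0 range scale).
# Directions follow common findings (raised, widened F0 for joy; lowered,
# flattened F0 for sadness); the exact numbers are assumptions.
EMOTION_RULES = {
    "neutral": (0.0, 1.0),
    "joy":     (+2.0, 1.4),
    "anger":   (+1.0, 1.6),
    "sadness": (-2.0, 0.6),
}

def apply_emotion(f0_hz, emotion):
    """Rescale a voiced F0 contour (Hz) around its mean for a target emotion."""
    shift_st, range_scale = EMOTION_RULES[emotion]
    f0 = np.asarray(f0_hz, dtype=float)
    voiced = f0 > 0  # unvoiced frames conventionally coded as 0
    mean_f0 = f0[voiced].mean()
    out = f0.copy()
    # Widen or narrow the contour around its mean (power law scales the
    # log-distance from the mean), then shift the mean in semitones.
    out[voiced] = mean_f0 * (f0[voiced] / mean_f0) ** range_scale
    out[voiced] *= 2.0 ** (shift_st / 12.0)
    return out

contour = np.array([0, 110, 120, 130, 125, 115, 0], dtype=float)
print(apply_emotion(contour, "joy").round(1))
```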
{"title":"Integration and evaluation of emotion in an articulatory speech synthesis system","authors":"Martin Schorradt, K. Legde, Susana Castillo, D. Cunningham","doi":"10.1145/2804408.2814183","DOIUrl":"https://doi.org/10.1145/2804408.2814183","url":null,"abstract":"We convey a tremendous amount of information vocally. In addition to the obvious exchange of semantic information, we unconsciously vary a number of acoustic properties of the speech wave to provide information about our emotions, thoughts, and intentions. [Cahn 1990] Advances in understanding of human physiology combined with increases in the computational power available in modern computers have made the simulation of the human vocal tract a realistic option for creating artificial speech. Such systems can, in principle, produce any sound that a human can make. Here we present two experiments examining the expression of emotion using prosody (i.e., speech melody) in human recordings and an articulatory speech synthesis system.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128984224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of the impact of high frame rates on legibility in S3D film
Michael Marianovski, L. Wilcox, R. Allison
https://doi.org/10.1145/2804408.2804411

There is growing interest in capturing and projecting movies at higher frame rates than the traditional 24 frames per second. Yet there has been little scientific assessment of the impact of higher frame rates (HFR) on the perceived quality of cinema content. Here we investigated the effect of frame rate and associated variables (shutter angle and camera motion) on viewers' ability to discriminate letters in S3D movie clips captured by a professional film crew. The footage was filmed and projected at varying combinations of frame rate, camera speed, and shutter angle. Our results showed that, overall, legibility improved with increased frame rate and reduced camera velocity. However, contrary to expectations, shutter angle had little effect on legibility. We also show that specific combinations of camera parameters can lead to dramatic reductions in legibility in localized regions of a scene.
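Shutter angle couples exposure time to frame rate: a rotary shutter open for a degrees exposes each frame for (a/360)/fps seconds, so the same angle yields shorter exposures, and shorter motion-blur streaks, at higher frame rates. A small worked sketch with illustrative numbers (not the study's stimuli):

```python
def exposure_time_s(frame_rate_hz, shutter_angle_deg):
    """Per-frame exposure for a rotary shutter: the fraction of the frame
    period during which the shutter is open."""
    return (shutter_angle_deg / 360.0) / frame_rate_hz

def motion_blur_px(pan_speed_px_per_s, frame_rate_hz, shutter_angle_deg):
    """Image-space blur streak length for a feature moving at constant speed."""
    return pan_speed_px_per_s * exposure_time_s(frame_rate_hz, shutter_angle_deg)

# A 180-degree shutter blurs a 1000 px/s pan over ~21 px at 24 fps but only
# ~10 px at 48 fps, one reason HFR footage looks sharper during camera motion.
for fps in (24, 48, 60):
    print(fps, "fps:", round(motion_blur_px(1000, fps, 180), 1), "px of blur")
```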
{"title":"Evaluation of the impact of high frame rates on legibility in S3D film","authors":"Michael Marianovski, L. Wilcox, R. Allison","doi":"10.1145/2804408.2804411","DOIUrl":"https://doi.org/10.1145/2804408.2804411","url":null,"abstract":"There is growing interest in capturing and projecting movies at higher frame rates than the traditional 24 frames per second. Yet there has been little scientific assessment of the impact of higher frame rates (HFR) on the perceived quality of cinema content. Here we investigated the effect of frame rate, and associated variables (shutter angle and camera motion) on viewers' ability to discriminate letters in S3D movie clips captured by a professional film crew. The footage was filmed and projected at varying combinations of frame rate, camera speed and shutter angle. Our results showed that, overall, legibility improved with increased frame rate and reduced camera velocity. However, contrary to expectations, there was little effect of shutter angle on legibility. We also show that specific combinations of camera parameters can lead to dramatic reductions in legibility for localized regions in a scene.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129463178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Myo arm: swinging to explore a VE
Morgan McCullough, Hong Xu, Joel Michelson, Matthew Jackoski, Wyatt Pease, William Cobb, William Kalescky, Joshua Ladd, B. Sanders
https://doi.org/10.1145/2804408.2804416
In this paper, we use an inexpensive wearable device, the Myo armband (199 USD), to implement a simple arm-swinging algorithm that allows a user to freely explore an HMD-based virtual environment. Using a spatial orientation task, we directly compared our Myo arm-swinging method to joystick locomotion and physical walking. We find that our arm-swinging method outperforms the simple joystick and that spatial orientation is comparable to physically walking on foot. Our arm-swinging method is inexpensive compared to tracking systems that permit exploration on foot, does not suffer from space constraints, and requires less physical energy than walking.
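The abstract does not detail the arm-swinging algorithm itself; the following is a minimal sketch of one plausible scheme, mapping the smoothed magnitude of the armband's pitch angular rate to forward speed along the HMD's yaw heading. All class names, gains, and constants are assumptions.

```python
import math

class ArmSwingLocomotion:
    """Map arm-swing vigor (IMU pitch angular speed) to forward walking speed."""

    def __init__(self, gain=0.8, max_speed=1.5, smoothing=0.9):
        self.gain = gain            # m/s of speed per rad/s of swing (assumed)
        self.max_speed = max_speed  # cap near normal walking speed, in m/s
        self.smoothing = smoothing  # exponential smoothing of swing energy
        self._energy = 0.0

    def update(self, pitch_rate_rad_s, hmd_yaw_rad, dt):
        """Return an (dx, dz) displacement for this frame from one IMU sample."""
        # Smooth the rectified pitch rate so speed doesn't pulse at the swing
        # frequency; standing still decays the energy back toward zero.
        self._energy = (self.smoothing * self._energy
                        + (1.0 - self.smoothing) * abs(pitch_rate_rad_s))
        speed = min(self.gain * self._energy, self.max_speed)
        # Steer with the head: move along the HMD's current yaw direction.
        return (speed * math.sin(hmd_yaw_rad) * dt,
                speed * math.cos(hmd_yaw_rad) * dt)

loco = ArmSwingLocomotion()
print(loco.update(pitch_rate_rad_s=2.0, hmd_yaw_rad=0.0, dt=1 / 90))
```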
{"title":"Myo arm: swinging to explore a VE","authors":"Morgan McCullough, Hong Xu, Joel Michelson, Matthew Jackoski, Wyatt Pease, William Cobb, William Kalescky, Joshua Ladd, B. Sanders","doi":"10.1145/2804408.2804416","DOIUrl":"https://doi.org/10.1145/2804408.2804416","url":null,"abstract":"In this paper, we use an inexpensive wearable device called the Myo armband (199 USD) to implement a simple arm swinging algorithm that allows a user to freely explore an HMD-based virtual environment. Using a spatial orientation task we directly compared our Myo arm--swinging method to joystick locomotion and physical walking. We find that our arm swinging method outperforms the simple joystick and that spatial orientation is comparable to physically walking on foot. Our arm--swinging method is inexpensive compared to tracking systems that permit foot exploration, does not suffer from space constraints, and requires less physical energy than walking on foot.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124462109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Avatar preference selection in game design based on color theory
Andreas Lambrant, Francisco López Luro, V. Sundstedt
https://doi.org/10.1145/2804408.2804421
Selecting color schemes for game objects is an important task, and it can be valuable for game designers to know which colors are preferred. Principles of color theory are important for selecting appropriate colors. This paper presents a perceptual experiment that evaluates some basic principles of color theory applied to game objects, to study whether particular combinations are preferred. In the experiment, 15 participants each performed a two-alternative forced choice (2AFC) preference task on 236 pairs of images. The pairs were based on color harmonies derived from the colors red, green, and blue. The color harmonies were evaluated against each other and included analogous, complementary, split-complementary, triad, and warm and cool colors. High- and low-saturation conditions were also included. The color harmonies were applied to an existing game character (avatar) and to a new object (cube) to study any potential differences in the results. The initial results show that some color harmonies, in particular triad and split-complementary, were generally preferred over others, meaning that it is important to take these aspects into account in game design. Additional results show that green-based color harmonies were not as popular as red- and blue-based ones.
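The harmonies tested are standard hue-wheel constructions. The sketch below generates them from a base hue using conventional offsets; the paper's exact offsets and saturation levels are not reproduced here.

```python
import colorsys

# Conventional hue offsets (degrees) relative to a base hue on the color wheel.
HARMONIES = {
    "analogous":           (0, -30, 30),
    "complementary":       (0, 180),
    "split-complementary": (0, 150, 210),
    "triad":               (0, 120, 240),
}

def harmony_rgb(base_hue_deg, harmony, saturation=1.0, value=1.0):
    """RGB triples for a named harmony; a reduced saturation argument stands
    in for the paper's low-saturation condition (its exact values unknown)."""
    return [
        colorsys.hsv_to_rgb(((base_hue_deg + off) % 360) / 360.0,
                            saturation, value)
        for off in HARMONIES[harmony]
    ]

# Red-, green-, and blue-based triads (base hues 0, 120, and 240 degrees).
for base in (0, 120, 240):
    print(base, [tuple(round(c, 2) for c in rgb)
                 for rgb in harmony_rgb(base, "triad", saturation=0.5)])
```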
{"title":"Avatar preference selection in game design based on color theory","authors":"Andreas Lambrant, Francisco López Luro, V. Sundstedt","doi":"10.1145/2804408.2804421","DOIUrl":"https://doi.org/10.1145/2804408.2804421","url":null,"abstract":"Selecting color schemes for game objects is an important task. It can be valuable to game designers to know what colors are preferred. Principles of color theory are important to select appropriate colors. This paper presents a perceptual experiment that evaluates some basic principles of color theory applied to game objects to study if a particular combination is preferred. An experiment was conducted with 15 participants who performed a two-alternative forced choice (2AFC) preference experiment using 236 pairs of images each. The pairs were based on color harmonies derived from the colors red, green, and blue. The color harmonies were evaluated against each other and included analogous, complementary, split-complementary, triad, and warm and cool colors. A high and low saturation condition was also included. The color harmonies were applied to an existing game character (avatar) and a new object (cube) to study any potential differences in the results. The initial results show that some color harmonies, in particular triad and split-complementary, were generally preferred over others meaning that it is important to take into account these aspects in game design. Additional results also show that color harmonies with a base in green were not as popular as red and blue color harmonies.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131028507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perception of personality through eye gaze of realistic and cartoon models
K. Ruhland, Katja Zibrek, R. Mcdonnell
https://doi.org/10.1145/2804408.2804424

In this paper, we conducted a perceptual experiment to determine whether specific personality traits can be portrayed through eye and head movement in the absence of other facial animation cues. We created a collection of eye and head motions captured from three female actors portraying different personalities while listening to instructional videos. In a between-groups experiment, we tested the perception of personality on a realistic model and a cartoon stylisation, in order to determine whether stylisation can positively influence the perceived personality or whether personality is more easily identified on a realistic face. Our results verify that participants were able to differentiate between personality traits portrayed only through eye gaze, blinks, and head movement. The results also show that perception of personality was robust across character realism.
{"title":"Perception of personality through eye gaze of realistic and cartoon models","authors":"K. Ruhland, Katja Zibrek, R. Mcdonnell","doi":"10.1145/2804408.2804424","DOIUrl":"https://doi.org/10.1145/2804408.2804424","url":null,"abstract":"In this paper, we conducted a perceptual experiment to determine if specific personality traits can be portrayed through eye and head movement in the absence of other facial animation cues. We created a collection of eye and head motions captured from three female actors portraying different personalities, while listening to instructional videos. In a between-groups experiment, we tested the perception of personality on a realistic model and a cartoon stylisation in order to determine if stylisation can positively influence the perceived personality or if personality is more easily identified on a realistic face. Our results verify that participants were able to differentiate between personality traits portrayed only through eye gaze, blinks and head movement. The results also show that perception of personality was robust across character realism.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127553996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Directional thermal perception for wearable device
Kyungho Jeong, Youn-Ju Seong, Ju-No Chung, Yongsung Park, Woo-nyoung Lee
https://doi.org/10.1145/2804408.2814184
Recently, there has been increasing research interest in thermal feedback. This includes using thermal systems as simple messaging tools, such as a system that uses temperature to convey the importance of messages [Wilson G. 2012] or to help users navigate by presenting speed or distance through thermal stimuli on the user's arm [David and Henry 2013]. However, there has been no research on array-type thermal feedback, so we focus on communication via array-type thermal patterns. As a first step, this poster examines 1) how well subjects can differentiate between the spots at which thermal stimulation is presented, and 2) whether subjects can recognize directional thermal stimulation. To find out, we placed the thermal device where subjects felt it was most comfortable: the wrist. The device was also placed on the back of the neck to mimic a scarf or a Bluetooth headset.
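The poster does not specify how the directional stimulation was generated. One plausible scheme, borrowed from apparent tactile motion, is to switch the array's elements on in sequence with overlapping on-times; the generator below is hypothetical, and its element count, durations, and overlap are assumptions.

```python
def directional_schedule(n_elements, duration_s=1.0, overlap=0.5, reverse=False):
    """(element, on_time, off_time) triples sweeping across a thermal array.

    overlap: fraction of each element's on-time shared with its neighbour;
    overlapping stimulation is what yields a continuous "moving" sensation in
    apparent-motion displays (assumed here to transfer to warmth).
    """
    step = duration_s * (1.0 - overlap)
    order = list(range(n_elements))
    if reverse:
        order.reverse()  # sweep in the opposite direction along the array
    return [(elem, k * step, k * step + duration_s)
            for k, elem in enumerate(order)]

# A hypothetical 4-element wrist array swept proximal-to-distal over ~2.5 s.
for elem, on, off in directional_schedule(4):
    print(f"element {elem}: on {on:.2f}s, off {off:.2f}s")
```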
{"title":"Directional thermal perception for wearable device","authors":"Kyungho Jeong, Youn-Ju Seong, Ju-No Chung, Yongsung Park, Woo-nyoung Lee","doi":"10.1145/2804408.2814184","DOIUrl":"https://doi.org/10.1145/2804408.2814184","url":null,"abstract":"Recently, there has been an increasing research interest on thermal feedback. This includes utilizing thermal systems as simple messaging tools, such as: a system that uses temperature to present the importance of messages [Wilson G. 2012] and help users navigate the road by presenting speed or distance with thermal stimuli on the users arm [David and Henry 2013]. There is no researches about array-type thermal feedback. Therefore, we have focused on communication via an array-type thermal patterns. As a first step, this poster presents 1) how well the subjects can differentiate between different spots on which thermal stimulation is presented and 2) whether the subjects can recognize the directional thermal stimulations. In order to find the answer, we have placed the thermal device on where the subjects felt most comfortable: the wrist. The device was also placed on the back of the neck to mimic a scarf or a bluetooth headset.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117274625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using full reference image quality metrics to detect game engine artefacts
Rafal Piórkowski, R. Mantiuk
https://doi.org/10.1145/2804408.2804414

Contemporary game engines offer outstanding graphics quality, but they are not free of visual artefacts. A typical example is aliasing, which, despite advanced antialiasing techniques, is still visible to players. Notable deteriorations are the shadow acne and peter-panning artefacts arising from deficiencies of the shadow mapping technique. Z-fighting, caused by the incorrect drawing order of polygons, also significantly degrades the graphics and hampers gameplay. These artefacts are laborious to eliminate algorithmically, because they either demand computational effort disproportionate to the results obtained or their visibility depends on ambiguous parameters. In this work we propose a technique in which the visibility of deteriorations is perceptually assessed by human observers. We conducted subjective experiments in which people manually marked the visible local artefacts in screenshots from games. The detection maps, averaged over a number of observers, were then compared with results generated by image quality metrics (IQMs). A simple mathematical metric (MSE) and advanced IQMs (S-CIELAB, SSIM, MSSIM, and HDR-VDP-2) were evaluated, and we compared the agreement between the detection maps created by humans and those computed by the IQMs. The results show that the SSIM and MSSIM metrics outperform the other techniques, though not indisputably: for small and scattered aliasing artefacts, HDR-VDP-2 reports results most consistent with the average human observer. Notwithstanding, the results suggest that it is feasible to use IQM detection maps to calibrate rendering algorithms directly from an analysis of the quality of the output images.
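Both the simple and the structural metrics used in the study are available off the shelf. A minimal sketch computing MSE and a per-pixel SSIM map with scikit-image follows; thresholding the SSIM map into a binary detection map is an assumption about the workflow, not the paper's exact procedure.

```python
import numpy as np
from skimage.metrics import mean_squared_error, structural_similarity

def artefact_detection_map(reference, rendered, ssim_threshold=0.9):
    """Flag pixels whose local SSIM drops below a threshold as visible artefacts.

    reference, rendered: float grayscale screenshots in [0, 1] of equal shape.
    Returns the global MSE, the mean SSIM score, and a boolean detection map.
    """
    mse = mean_squared_error(reference, rendered)
    score, ssim_map = structural_similarity(
        reference, rendered, data_range=1.0, full=True)
    detection = ssim_map < ssim_threshold  # low local similarity => artefact
    return mse, score, detection

# Synthetic example: a "rendering" with one small corrupted block.
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
img = ref.copy()
img[40:48, 40:48] = 0.0  # simulated local artefact (e.g., shadow acne)
mse, score, det = artefact_detection_map(ref, img)
print(f"MSE={mse:.4f}  SSIM={score:.3f}  flagged px={det.sum()}")
```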
{"title":"Using full reference image quality metrics to detect game engine artefacts","authors":"Rafal Piórkowski, R. Mantiuk","doi":"10.1145/2804408.2804414","DOIUrl":"https://doi.org/10.1145/2804408.2804414","url":null,"abstract":"Contemporary game engines offer an outstanding graphics quality but they are not free from visual artefacts. A typical example is aliasing, which, despite advanced antialiasing techniques, is still visible to the game players. Essential deteriorations are the shadow acne and peter panning artefacts related to deficiency of the shadow mapping technique. Also Z-fighting, caused by the incorrect order of drawing polygons, significantly affects the quality of the graphics and makes the gameplay difficult. These artefacts are laborious to eliminate in an algorithm way because either they require computational effort inadequate to obtained results or visibility of artefacts depends on the ambiguous parameters. In this work we propose a technique, in which visibility of deteriorations is perceptually assessed by human observers. We conduct subjective experiments in which people manually mark the visible local artefacts in the screenshots from the games. Then, the detection maps averaged over a number of observers are compared with results generated by the image quality metrics (IQMs). Simple mathematically-based metric - MSE, and advanced IQMs: S-CIELAB, SSIM, MSSIM, and HDR-VDP-2 are evaluated. We compare convergence in the detection between the maps created by humans and computed by IQMs. The obtained results show that SSIM and MSSIM metrics outperform other techniques. However, the results are not indisputable because, for small and scattered aliasing artefacts, HDR-VDP-2 metrics reports the results most consistent with the average human observer. Notwithstanding, the results suggest that it is feasible to use the IQMs detection maps to leverage and calibrate the rendering algorithms directly based on the analysis of quality of the output images.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124808542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Walking on foot to explore a virtual environment with uneven terrain
Matthew Jackoski, William Kalescky, Joshua Ladd, William Cobb, B. Sanders
https://doi.org/10.1145/2804408.2814186
Immersive virtual environments (IVEs) give people a chance to learn about and experience places that, because of time, distance, danger, or expense, would not otherwise be available to them. Since navigation is the most common way users interact with 3D environments, much research has examined how well people navigate and learn the spatial layouts of IVEs. Ideally, a user's experience of an IVE would mimic a real-world experience; in practice it does not, so much work examines the differences between real-world experiences and similar virtual ones. Moreover, the navigation mechanisms used to explore an IVE do not allow exactly the same interactions as the real world: it is difficult, for example, to replicate the physical aspects of climbing stairs or walking over rough terrain in an IVE. More work is needed to assess how well people interact and learn in different types of virtual worlds, whose ground planes will not necessarily be flat. The question thus becomes: how well can a person maintain spatial awareness in an environment with uneven terrain? Prior work indicates that IVEs are best explored on foot, in the sense that spatial awareness is best when users physically walk through the virtual environment. In this work, we therefore examine subjects' spatial orientation as they traverse hilly virtual terrain on foot while physically locomoting on a flat surface.
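One common way to realize this condition is to keep the physical floor flat while driving the virtual viewpoint's height from a terrain heightmap sampled at the user's tracked position. A minimal sketch of that mapping; the heightmap format, cell size, and eye height are assumptions.

```python
import numpy as np

def terrain_height(heightmap, x, z, cell_size=1.0):
    """Bilinearly sample a 2D heightmap (meters) at world position (x, z)."""
    u, v = x / cell_size, z / cell_size
    i0, j0 = int(np.floor(u)), int(np.floor(v))
    i1 = min(i0 + 1, heightmap.shape[0] - 1)
    j1 = min(j0 + 1, heightmap.shape[1] - 1)
    fu, fv = u - i0, v - j0
    top = (1 - fu) * heightmap[i0, j0] + fu * heightmap[i1, j0]
    bot = (1 - fu) * heightmap[i0, j1] + fu * heightmap[i1, j1]
    return (1 - fv) * top + fv * bot

def virtual_eye_position(heightmap, tracked_x, tracked_z, eye_height=1.7):
    """The user walks on a flat floor; the virtual camera rides the terrain."""
    y = terrain_height(heightmap, tracked_x, tracked_z) + eye_height
    return tracked_x, y, tracked_z

# A single smooth virtual hill over a 16x16 m patch of flat physical floor.
hills = np.outer(np.sin(np.linspace(0, np.pi, 16)),
                 np.sin(np.linspace(0, np.pi, 16)))
print(virtual_eye_position(hills, 7.5, 7.5))
```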
{"title":"Walking on foot to explore a virtual environment with uneven terrain","authors":"Matthew Jackoski, William Kalescky, Joshua Ladd, William Cobb, B. Sanders","doi":"10.1145/2804408.2814186","DOIUrl":"https://doi.org/10.1145/2804408.2814186","url":null,"abstract":"Immersive virtual environments (IVEs) provide an opportunity for humans to learn and to experience a place, which because of time, distance, danger, or expense, would not otherwise be available. Since navigation is the most common way users interact with 3D environments, much research has examined how well people navigate and learn the spatial layouts of IVEs. Ideally, a user's experience of an IVE would mimic a real world experience. However, this does not happen in practice. Thus, much work, examines the differences in real world and similar virtual experiences. Additionally, the navigation mechanisms that are used to explore an IVE do not allow for the exact same interactions as the real world. For example, it is difficult to replicate the physical aspect of climbing stairs or walking over rough terrain in an IVE. More work needs to be completed to assess how well people interact and learn in different types of virtual worlds. For example, the ground plane of the virtual worlds that people explore will not necessarily be flat. Thus, the question becomes how well can a person maintain comparable spatial awareness in an environment with uneven terrain? In this work, we examine subjects' spatial orientation as they traverse over uneven terrain while they physically locomote on a flat surface. IVEs are best explored on foot. That is, spatial awareness of the IVEs is best when users physically explore a virtual environment on foot. Thus, in this work we examine what happens to spatial orientation when subjects traverse over hilly virtual environments on foot.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131257547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sackcloth or silk?: the impact of appearance vs dynamics on the perception of animated cloth
Carlos Aliaga, C. O'Sullivan, D. Gutierrez, Rasmus Tamstorf
https://doi.org/10.1145/2804408.2804412
Physical simulation and rendering of cloth is widely used in 3D graphics applications to create realistic and compelling scenes. However, cloth animation can be slow to compute and difficult to specify. In this paper, we present a set of experiments in which we explore some factors that contribute to the perception of cloth, to determine how efficiency could be improved without sacrificing realism. Using real video footage of several fabrics covering a wide range of visual appearances and dynamic behaviors, and their simulated counterparts, we explore the interplay of visual appearance and dynamics in cloth animation.
{"title":"Sackcloth or silk?: the impact of appearance vs dynamics on the perception of animated cloth","authors":"Carlos Aliaga, C. O'Sullivan, D. Gutierrez, Rasmus Tamstorf","doi":"10.1145/2804408.2804412","DOIUrl":"https://doi.org/10.1145/2804408.2804412","url":null,"abstract":"Physical simulation and rendering of cloth is widely used in 3D graphics applications to create realistic and compelling scenes. However, cloth animation can be slow to compute and difficult to specify. In this paper, we present a set of experiments in which we explore some factors that contribute to the perception of cloth, to determine how efficiency could be improved without sacrificing realism. Using real video footage of several fabrics covering a wide range of visual appearances and dynamic behaviors, and their simulated counterparts, we explore the interplay of visual appearance and dynamics in cloth animation.","PeriodicalId":283323,"journal":{"name":"Proceedings of the ACM SIGGRAPH Symposium on Applied Perception","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125318217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}