A Proposed Methodology for Evaluating HDR False Color Maps
A. Akyüz and Osman Kaya. ACM Transactions on Applied Perception, August 2016. https://doi.org/10.1145/2911986

Color mapping, which involves assigning colors to the individual elements of an underlying data distribution, is a commonly used method for data visualization. Although color maps are used in many disciplines and for a variety of tasks, in this study we focus on their use for visualizing luminance maps. Specifically, we ask how best to visualize a luminance distribution encoded in a high-dynamic-range (HDR) image using false colors such that the resulting visualization is as descriptive as possible. To this end, we first propose a definition of descriptiveness. We then propose a methodology for evaluating it subjectively, as well as an objective metric that correlates well with the subjective evaluation results. Using this metric, we evaluate several false coloring strategies on a large number of HDR images. Finally, we conduct a second psychophysical experiment using images representing a diverse set of scenes. Our results indicate that the luminance compression method has a significant effect, and that the commonly used logarithmic compression is inferior to histogram equalization. Furthermore, we find that the default color scale of the Radiance global illumination software consistently performs well when combined with histogram equalization, whereas the commonly used rainbow color scale was found to be inferior. We believe that the proposed methodology is suitable for evaluating future color mapping strategies as well.
Top-Down Influences in the Detection of Spatial Displacement in a Musical Scene
G. Marentakis, Cathryn Griffiths, and S. McAdams. ACM Transactions on Applied Perception, August 2016. https://doi.org/10.1145/2911985

We investigated the detection of sound displacement in a four-voice musical piece under conditions that manipulated the attentional setting (selective or divided attention), the sound source numerosity, the spatial dispersion of the voices, and the tonal complexity of the piece. Detection was easiest when each voice was played in isolation; performance deteriorated when source numerosity increased and uncertainty was introduced with respect to the voice in which displacement would occur. Restricting the area occupied by the voices improved performance, in agreement with the auditory spotlight hypothesis, as did reducing the tonal complexity of the piece. Performance under increased numerosity conditions depended on the voice in which displacement occurred. The results highlight the importance of top-down processes in the detection of spatial displacement in a musical scene.
Effects of Visual Latency on Vehicle Driving Behavior
Björn Blissing, F. Bruzelius, and Olle Eriksson. ACM Transactions on Applied Perception, August 2016. https://doi.org/10.1145/2971320

Using mixed reality in vehicles is a potential alternative to driving simulators when studying driver-vehicle interaction. However, virtual reality systems introduce latency into the visual system that may alter driving behavior, which in turn makes the validity of such studies questionable. Previous studies have mainly treated visual latency as an isolated phenomenon. In this work, latency is studied from a task-dependent viewpoint: regular drivers were subjected to different levels of visual latency while performing a simple slalom driving task, and we investigated how their driving behavior changed as latency increased. The drivers' performance was recorded and evaluated in both the lateral and longitudinal directions, along with self-assessment questionnaires on task performance and difficulty. All participants managed to complete the driving tasks successfully, even under high-latency conditions, but were clearly affected by the increased visual latency. The results suggest that drivers compensate for longer latencies by steering more and increasing their safety margins, but without reducing their speed.
Simulating Visual Contrast Reduction during Nighttime Glare Situations on Conventional Displays
B. Meyer, S. Grogorick, M. Vollrath, and M. Magnor. ACM Transactions on Applied Perception, August 2016. https://doi.org/10.1145/2934684

Bright glare in nighttime situations strongly decreases human contrast perception. Nighttime simulations therefore require a way to realistically depict the user's contrast perception. Because of the limited luminance of both consumer and specialized high-dynamic-range displays, the physical adaptation of the human eye cannot yet be replicated in a physically correct manner in a simulation environment. To overcome this limitation, we propose a method that emulates adaptation in nighttime glare situations using a perception-based model. We implemented a postprocessing tone-mapping algorithm that simulates the corresponding contrast-reduction effect for a night-driving simulation with glare from oncoming vehicles' headlights. During glare, the tone mapping reduces image contrast in accordance with the incident veiling luminance. As the glare expires, contrast normalizes smoothly over time. The conversion of glare parameters and elapsed time into image contrast during the readaptation phase is based on extensive user studies, carried out first in a controlled laboratory setup. Additional user studies were then conducted in field tests to ensure the validity of the derived time-dependent tone-mapping function and to verify its transferability to real-world traffic scenarios.
Reproducing Reality: Multimodal Contributions in Natural Scene Discrimination
Olli S. Rummukainen and Catarina Mendonça. ACM Transactions on Applied Perception, August 2016. https://doi.org/10.1145/2915917

Most research on multisensory processing focuses on impoverished stimuli and simple tasks. In consequence, very little is known about the sensory contributions to the perception of real environments. Here, we presented 23 participants with paired-comparison tasks in which natural scenes were discriminated on three perceptually meaningful attributes: movement, openness, and noisiness. The goal was to assess the contributions of the auditory and visual modalities to scene discrimination with short (≤500ms) natural scene exposures. The scenes were reproduced in an immersive audiovisual environment with 3D sound and surrounding visuals. Movement and openness were found to be mainly visual attributes with some input from auditory information. In some scenes, after only 500ms of stimulation, the auditory system was able to derive information about movement and openness comparable to that available in the audiovisual condition. Noisiness was mainly auditory, but visual information was found to play a facilitatory role in a few scenes. The sensory weights were highly imbalanced in favor of the stronger modality, but the weaker modality was able to affect the bimodal estimate in some scenes.
Detection of Subconscious Face Recognition Using Consumer-Grade Brain-Computer Interfaces
Miguel Vargas Martin, V. Cho, and Gabriel Aversano. ACM Transactions on Applied Perception, August 2016. https://doi.org/10.1145/2955097

We test the possibility of tapping the subconscious mind for face recognition using consumer-grade brain-computer interfaces (BCIs). To this end, we performed an experiment in which subjects were presented with photographs of famous persons, with the expectation that about 20% of them would be (consciously) recognized; and since the photos are of famous persons, we expected that subjects would have previously seen some of the 80% they did not (consciously) recognize. Further, we expected that their subconscious would have recognized some of those previously seen faces in the 80% pool. An exit questionnaire and a set of criteria allowed us to label each response as a conscious recognition, a false recognition, no recognition, or a subconscious recognition. We analyzed a number of event-related potentials, training and testing a support vector machine on them. We found that our method is capable of differentiating between no recognitions and subconscious recognitions with promising accuracy, suggesting that tapping the subconscious mind for face recognition is feasible.
Image Quality under Chromatic Impairments
Marco V. Bernardo, A. Pinheiro, P. Fiadeiro, and Manuela Pereira. ACM Transactions on Applied Perception, August 2016. https://doi.org/10.1145/2964908

The influence of chromatic impairments on perceived image quality is studied in this article. Under the D65 standard illuminant, a set of hyperspectral images was represented in the CIELAB color space, and the corresponding chromatic coordinates were subdivided into clusters with the k-means algorithm. Each color cluster was then shifted by a predefined chromatic impairment ΔE*ab in a random direction in the a*b* plane only. By applying impairments of 3, 6, 9, 12, and 15 to five hyperspectral images, a set of modified images was generated. These images were shown to subjects who were asked to rate their quality based on naturalness. The Mean Opinion Score of the subjective evaluations was computed to quantify sensitivity to the chromatic variations. The subjective study is complemented by an objective quality evaluation using several state-of-the-art metrics, including the CIEDE2000 color difference. Analyzing the correlations between the subjective and objective quality evaluations leads us to conclude that the proposed quality estimators based on CIEDE2000 provide the best representation. Moreover, the established quality metrics were found to become reliable only when their results are averaged over each color component.
Biometric Recognition via Eye Movements: Saccadic Vigor and Acceleration Cues
Ioannis Rigas, Oleg V. Komogortsev, and R. Shadmehr. ACM Transactions on Applied Perception, March 2016. https://doi.org/10.1145/2842614

Previous research shows that human eye movements can serve as a valuable source of information about the structural elements of the oculomotor system, and they can also open a window onto the neural functions and cognitive mechanisms related to visual attention and perception. The field of eye movement-driven biometrics explores the extraction of individual-specific characteristics from eye movements and their employment for recognition purposes. In this work, we present a study on the incorporation of dynamic saccadic features into a model of eye movement-driven biometrics. We show that when these features are added to our previous biometric framework and tested on a large database of 322 subjects, the biometric accuracy shows a relative improvement in the range of 31.6--33.5% for the verification scenario and in the range of 22.3--53.1% for the identification scenario. More importantly, this improvement is demonstrated for different types of visual stimulus (random dot, text, video), indicating the enhanced robustness offered by the incorporation of saccadic vigor and acceleration cues.
Assessing and Improving the Identification of Computer-Generated Portraits
Olivia Holmes, M. Banks, and H. Farid. ACM Transactions on Applied Perception, March 2016. https://doi.org/10.1145/2871714

Modern computer graphics are capable of generating highly photorealistic images. Although this can be considered a success for the computer graphics community, it has given rise to complex forensic and legal issues. A compelling example comes from the need to distinguish between computer-generated and photographic images as it pertains to the legality and prosecution of child pornography in the United States. We performed psychophysical experiments to determine the accuracy with which observers are capable of distinguishing computer-generated from photographic images. We find that observers have considerable difficulty performing this task—more difficulty than we observed 5 years ago, when computer-generated imagery was not as photorealistic. We also find that observers are more likely to report that an image is photographic rather than computer generated, and that resolution has surprisingly little effect on performance. Finally, we find that a small amount of training greatly improves accuracy.
Assessing the Impact of Hand Motion on Virtual Character Personality
Yingying Wang, J. E. Tree, M. Walker, and Michael Neff. ACM Transactions on Applied Perception, March 2016. https://doi.org/10.1145/2874357

Designing virtual characters that are capable of conveying a sense of personality is important for generating realistic experiences, and is thus a key goal in computer animation research. Though the influence of gesture and body motion on personality perception has been studied, little is known about which attributes of hand pose and motion convey particular personality traits. Using the "Big Five" model as a framework for evaluating personality traits, this work examines how variations in hand pose and motion affect the perception of a character's personality. As has been done with facial motion, we first study hand motion in isolation; this is a requirement for running controlled experiments that avoid the combinatorial explosion of multimodal communication (all combinations of facial expressions, arm movements, body movements, and hands) and allows us to understand the communicative content of the hands themselves. Based on research in psychology and previous work on human motion perception, we determined a set of features likely to reflect personality: shape, direction, amplitude, speed, and manipulation. We then captured realistic hand motion varying these attributes and conducted three perceptual experiments to determine the contribution of each attribute to the character's perceived personality. Both hand pose and the amplitude of hand motion affected the perception of all five personality traits. Speed impacted all traits except openness. Direction impacted extraversion and openness. Manipulation was perceived as an indicator of introversion, disagreeableness, neuroticism, and less openness to experience. From these results, we distill guidelines for designing detailed hand motion that can add to the expressiveness and personality of characters. Finally, we performed an evaluation study that combined hand motion with gesture and body motion. Even in the presence of body motion, hand motion still significantly impacted the perception of a character's personality and could even be the dominant factor in certain situations.