Most research on multisensory processing focuses on impoverished stimuli and simple tasks. Consequently, little is known about the sensory contributions to the perception of real environments. Here, we presented 23 participants with paired-comparison tasks in which natural scenes were discriminated along three perceptually meaningful attributes: movement, openness, and noisiness. The goal was to assess the auditory and visual modality contributions to scene discrimination with short (≤ 500 ms) natural scene exposures. The scenes were reproduced in an immersive audiovisual environment with 3D sound and surrounding visuals. Movement and openness were found to be mainly visual attributes with some input from auditory information. In some scenes, the auditory system was able to derive information about movement and openness that was comparable to the audiovisual condition after only 500 ms of stimulation. Noisiness was mainly auditory, but visual information was found to have a facilitatory role in a few scenes. The sensory weights were highly imbalanced in favor of the stronger modality, but the weaker modality was able to affect the bimodal estimate in some scenes.
{"title":"Reproducing Reality: Multimodal Contributions in Natural Scene Discrimination","authors":"Olli S. Rummukainen, Catarina Mendonça","doi":"10.1145/2915917","DOIUrl":"https://doi.org/10.1145/2915917","url":null,"abstract":"Most research on multisensory processing focuses on impoverished stimuli and simple tasks. In consequence, very little is known about the sensory contributions in the perception of real environments. Here, we presented 23 participants with paired comparison tasks, where natural scenes were discriminated in three perceptually meaningful attributes: movement, openness, and noisiness. The goal was to assess the auditory and visual modality contributions in scene discrimination with short (≤500ms) natural scene exposures. The scenes were reproduced in an immersive audiovisual environment with 3D sound and surrounding visuals. Movement and openness were found to be mainly visual attributes with some input from auditory information. In some scenes, the auditory system was able to derive information about movement and openness that was comparable with audiovisual condition already after 500ms stimulation. Noisiness was mainly auditory, but visual information was found to have a facilitatory role in a few scenes. The sensory weights were highly imbalanced in favor of the stronger modality, but the weaker modality was able to affect the bimodal estimate in some scenes.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"146 1","pages":"1:1-1:19"},"PeriodicalIF":1.6,"publicationDate":"2016-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80560147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bright glare in nighttime situations strongly decreases human contrast perception. Nighttime simulations therefore require a way to realistically depict the user's contrast perception. Due to the limited luminance of both consumer and specialized high-dynamic-range displays, the physical adaptation of the human eye cannot yet be replicated in a physically correct manner in a simulation environment. To overcome this limitation, we propose a method to emulate the adaptation in nighttime glare situations using a perception-based model. We implemented a postprocessing tone-mapping algorithm that simulates the corresponding contrast-reduction effect for a night-driving simulation with glare from oncoming vehicles' headlights. During glare, tone mapping reduces image contrast in accordance with the incident veiling luminance. As the glare subsides, the contrast normalizes smoothly over time. The conversion of glare parameters and elapsed time into image contrast during the readaptation phase is based on extensive user studies carried out first in a controlled laboratory setup. Additional user studies were then conducted in field tests to ensure the validity of the derived time-dependent tone-mapping function and to verify its transferability to real-world traffic scenarios.
{"title":"Simulating Visual Contrast Reduction during Nighttime Glare Situations on Conventional Displays","authors":"B. Meyer, S. Grogorick, M. Vollrath, M. Magnor","doi":"10.1145/2934684","DOIUrl":"https://doi.org/10.1145/2934684","url":null,"abstract":"Bright glare in nighttime situations strongly decreases human contrast perception. Nighttime simulations therefore require a way to realistically depict contrast perception of the user. Due to the limited luminance of popular as well as specialized high-dynamic range displays, physical adaptation of the human eye cannot yet be replicated in a physically correct manner in a simulation environment. To overcome this limitation, we propose a method to emulate the adaptation in nighttime glare situations using a perception-based model. We implemented a postprocessing tone mapping algorithm that simulates the corresponding contrast reduction effect for a night-driving simulation with glares from oncoming vehicles headlights. During glare, tone mapping reduces image contrast in accordance with the incident veiling luminance. As the glare expires, the contrast starts to normalize smoothly over time. The conversion of glare parameters and elapsed time into image contrast during the readaptation phase is based on extensive user studies carried out first in a controlled laboratory setup. Additional user studies have then been conducted in field tests to ensure validity of the derived time-dependent tone-mapping function and to verify transferability onto real-world traffic scenarios.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"69 1","pages":"4:1-4:20"},"PeriodicalIF":1.6,"publicationDate":"2016-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73175438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We test the possibility of tapping the subconscious mind for face recognition using consumer-grade brain-computer interfaces (BCIs). To this end, we performed an experiment in which subjects were presented with photographs of famous persons, with the expectation that about 20% of them would be (consciously) recognized; and since the photos are of famous persons, we expected that subjects would have previously seen some of the 80% they did not (consciously) recognize. Further, we expected that their subconscious would have recognized some of those in the 80% pool that they had seen before. An exit questionnaire and a set of criteria allowed us to label responses as conscious recognitions, false recognitions, no recognitions, or subconscious recognitions. We analyzed a number of event-related potentials, training and testing a support vector machine on them. We found that our method is capable of differentiating between no recognitions and subconscious recognitions with promising accuracy levels, suggesting that tapping the subconscious mind for face recognition is feasible.
{"title":"Detection of Subconscious Face Recognition Using Consumer-Grade Brain-Computer Interfaces","authors":"Miguel Vargas Martin, V. Cho, Gabriel Aversano","doi":"10.1145/2955097","DOIUrl":"https://doi.org/10.1145/2955097","url":null,"abstract":"We test the possibility of tapping the subconscious mind for face recognition using consumer-grade BCIs. To this end, we performed an experiment whereby subjects were presented with photographs of famous persons with the expectation that about 20% of them would be (consciously) recognized; and since the photos are of famous persons, we expected that subjects would have seen before some of the 80% they didn’t (consciously) recognize. Further, we expected that their subconscious would have recognized some of those in the 80% pool that they had seen before. An exit questionnaire and a set of criteria allowed us to label recognitions as conscious, false, no recognitions, or subconscious recognitions. We analyzed a number of event related potentials training and testing a support vector machine. We found that our method is capable of differentiating between no recognitions and subconscious recognitions with promising accuracy levels, suggesting that tapping the subconscious mind for face recognition is feasible.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"13 1","pages":"7:1-7:20"},"PeriodicalIF":1.6,"publicationDate":"2016-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81961333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The influence of chromatic impairments on perceived image quality is studied in this article. Under the D65 standard illuminant, a set of hyperspectral images was represented in the CIELAB color space, and the corresponding chromatic coordinates were subdivided into clusters with the k-means algorithm. Each color cluster was shifted by a predefined chromatic impairment ΔE*ab with a random direction in the a*b* chromatic coordinates only. Applying impairments of 3, 6, 9, 12, and 15 in a*b* coordinates to five hyperspectral images generated a set of modified images. These images were shown to subjects who were asked to rank their quality based on their naturalness. The Mean Opinion Score of the subjective evaluations was computed to quantify sensitivity to the chromatic variations. The article is complemented with an objective quality evaluation using several state-of-the-art metrics, including the CIEDE2000 color difference. Analyzing the correlations between subjective and objective quality evaluations leads us to conclude that the proposed quality estimators based on CIEDE2000 provide the best representation. Moreover, the established quality metrics only become reliable when their results are averaged over each color component.
{"title":"Image Quality under Chromatic Impairments","authors":"Marco V. Bernardo, A. Pinheiro, P. Fiadeiro, Manuela Pereira","doi":"10.1145/2964908","DOIUrl":"https://doi.org/10.1145/2964908","url":null,"abstract":"The influence of chromatic impairments on the perceived image quality is studied in this article. Under the D65 standard illuminant, a set of hyperspectral images were represented into the CIELAB color space, and the corresponding chromatic coordinates were subdivided into clusters with the k-means algorithm. Each color cluster was shifted by a predefined chromatic impairment ΔE*ab with random direction in a*b* chromatic coordinates only. Applying impairments of 3, 6, 9, 12, and 15 in a*b* coordinates to five hyperspectral images a set of modified images was generated. Those images were shown to subjects that were asked to rank their quality based on their naturalness. The Mean Opinion Score of the subjective evaluations was computed to quantify the sensitivity to the chromatic variations. This article is also complemented with an objective evaluation of the quality using several state-of-the-art metrics and using the CIEDE2000 color difference among others. Analyzing the correlations between subjective and objective quality evaluation helps us to conclude that the proposed quality estimators based on the CIEDE2000 provide the best representation. Moreover, it was concluded that the established quality metrics only become reliable by averaging their results on each color component.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"120 1","pages":"6:1-6:20"},"PeriodicalIF":1.6,"publicationDate":"2016-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90539422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Previous research shows that human eye movements can serve as a valuable source of information about the structural elements of the oculomotor system and can open a window into the neural functions and cognitive mechanisms related to visual attention and perception. The field of eye movement-driven biometrics explores the extraction of individual-specific characteristics from eye movements and their employment for recognition purposes. In this work, we present a study on the incorporation of dynamic saccadic features into a model of eye movement-driven biometrics. We show that when these features are added to our previous biometric framework and tested on a large database of 322 subjects, biometric accuracy shows a relative improvement in the range of 31.6--33.5% for the verification scenario and in the range of 22.3--53.1% for the identification scenario. More importantly, this improvement is demonstrated for different types of visual stimuli (random dot, text, video), indicating the enhanced robustness offered by the incorporation of saccadic vigor and acceleration cues.
{"title":"Biometric Recognition via Eye Movements: Saccadic Vigor and Acceleration Cues","authors":"Ioannis Rigas, Oleg V. Komogortsev, R. Shadmehr","doi":"10.1145/2842614","DOIUrl":"https://doi.org/10.1145/2842614","url":null,"abstract":"Previous research shows that human eye movements can serve as a valuable source of information about the structural elements of the oculomotor system and they also can open a window to the neural functions and cognitive mechanisms related to visual attention and perception. The research field of eye movement-driven biometrics explores the extraction of individual-specific characteristics from eye movements and their employment for recognition purposes. In this work, we present a study for the incorporation of dynamic saccadic features into a model of eye movement-driven biometrics. We show that when these features are added to our previous biometric framework and tested on a large database of 322 subjects, the biometric accuracy presents a relative improvement in the range of 31.6--33.5% for the verification scenario, and in range of 22.3--53.1% for the identification scenario. More importantly, this improvement is demonstrated for different types of visual stimulus (random dot, text, video), indicating the enhanced robustness offered by the incorporation of saccadic vigor and acceleration cues.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"5 1","pages":"6:1-6:21"},"PeriodicalIF":1.6,"publicationDate":"2016-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79529252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modern computer graphics are capable of generating highly photorealistic images. Although this can be considered a success for the computer graphics community, it has given rise to complex forensic and legal issues. A compelling example comes from the need to distinguish between computer-generated and photographic images as it pertains to the legality and prosecution of child pornography in the United States. We performed psychophysical experiments to determine the accuracy with which observers are capable of distinguishing computer-generated from photographic images. We find that observers have considerable difficulty performing this task—more difficulty than we observed 5 years ago when computer-generated imagery was not as photorealistic. We also find that observers are more likely to report that an image is photographic rather than computer generated, and that resolution has surprisingly little effect on performance. Finally, we find that a small amount of training greatly improves accuracy.
{"title":"Assessing and Improving the Identification of Computer-Generated Portraits","authors":"Olivia Holmes, M. Banks, H. Farid","doi":"10.1145/2871714","DOIUrl":"https://doi.org/10.1145/2871714","url":null,"abstract":"Modern computer graphics are capable of generating highly photorealistic images. Although this can be considered a success for the computer graphics community, it has given rise to complex forensic and legal issues. A compelling example comes from the need to distinguish between computer-generated and photographic images as it pertains to the legality and prosecution of child pornography in the United States. We performed psychophysical experiments to determine the accuracy with which observers are capable of distinguishing computer-generated from photographic images. We find that observers have considerable difficulty performing this task—more difficulty than we observed 5 years ago when computer-generated imagery was not as photorealistic. We also find that observers are more likely to report that an image is photographic rather than computer generated, and that resolution has surprisingly little effect on performance. Finally, we find that a small amount of training greatly improves accuracy.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"32 1","pages":"7:1-7:12"},"PeriodicalIF":1.6,"publicationDate":"2016-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79322937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing virtual characters that are capable of conveying a sense of personality is important for generating realistic experiences, and is thus a key goal in computer animation research. Though the influence of gesture and body motion on personality perception has been studied, little is known about which attributes of hand pose and motion convey particular personality traits. Using the “Big Five” model as a framework for evaluating personality traits, this work examines how variations in hand pose and motion impact the perception of a character's personality. As has been done with facial motion, we first study hand motion in isolation, a requirement for running controlled experiments that avoid the combinatorial explosion of multimodal communication (all combinations of facial expressions, arm movements, body movements, and hands) and allow us to understand the communicative content of hands. We determined a set of features likely to reflect personality, based on research in psychology and previous work on human motion perception: shape, direction, amplitude, speed, and manipulation. We then captured realistic hand motion varying these attributes and conducted three perceptual experiments to determine the contribution of each attribute to the perception of a character's personality. Both hand pose and the amplitude of hand motion affected the perception of all five personality traits. Speed impacted all traits except openness. Direction impacted extraversion and openness. Manipulation was perceived as an indicator of introversion, disagreeableness, neuroticism, and less openness to experience. From these results, we derive guidelines for designing detailed hand motion that can add to the expressiveness and personality of characters. We performed an evaluation study that combined hand motion with gesture and body motion. Even in the presence of body motion, hand motion still significantly impacted the perception of a character's personality and could even be the dominant factor in certain situations.
{"title":"Assessing the Impact of Hand Motion on Virtual Character Personality","authors":"Yingying Wang, J. E. Tree, M. Walker, Michael Neff","doi":"10.1145/2874357","DOIUrl":"https://doi.org/10.1145/2874357","url":null,"abstract":"Designing virtual characters that are capable of conveying a sense of personality is important for generating realistic experiences, and thus a key goal in computer animation research. Though the influence of gesture and body motion on personality perception has been studied, little is known about which attributes of hand pose and motion convey particular personality traits. Using the “Big Five” model as a framework for evaluating personality traits, this work examines how variations in hand pose and motion impact the perception of a character's personality. As has been done with facial motion, we first study hand motion in isolation as a requirement for running controlled experiments that avoid the combinatorial explosion of multimodal communication (all combinations of facial expressions, arm movements, body movements, and hands) and allow us to understand the communicative content of hands. We determined a set of features likely to reflect personality, based on research in psychology and previous human motion perception work: shape, direction, amplitude, speed, and manipulation. Then we captured realistic hand motion varying these attributes and conducted three perceptual experiments to determine the contribution of these attributes to the character's personalities. Both hand poses and the amplitude of hand motion affected the perception of all five personality traits. Speed impacted all traits except openness. Direction impacted extraversion and openness. Manipulation was perceived as an indicator of introversion, disagreeableness, neuroticism, and less openness to experience. From these results, we generalize guidelines for designing detailed hand motion that can add to the expressiveness and personality of characters. We performed an evaluation study that combined hand motion with gesture and body motion. Even in the presence of body motion, hand motion still significantly impacted the perception of a character's personality and could even be the dominant factor in certain situations.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"17 1","pages":"9:1-9:23"},"PeriodicalIF":1.6,"publicationDate":"2016-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91055689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Animated characters are expected to fulfill a variety of social roles across different domains. To be successful and effective, these characters must display a wide range of personalities. Designers and animators create characters with appropriate personalities by using their intuition and artistic expertise. Our goal is to provide evidence-based principles for creating social characters. In this article, we describe the results of two experiments that show how exaggerated and damped facial motion magnitudes influence impressions of cartoon and more realistic animated characters. In our first experiment, participants watched animated characters that varied in rendering style and facial motion magnitude. The participants then rated the different animated characters on extroversion, warmth, and competence, social traits that are relevant for characters used in entertainment, therapy, and education. We found that facial motion magnitude affected these social traits differently in cartoon and realistic characters. Facial motion magnitude affected ratings of cartoon characters’ extroversion and competence more than their warmth. In contrast, facial motion magnitude affected ratings of realistic characters’ extroversion but not their competence or warmth. We ran a second experiment to extend the results of the first, adding emotional valence as a variable and asking participants to rate the characters on more specific aspects of warmth, such as respectfulness, calmness, and attentiveness. Although the characters’ emotional valence did not affect ratings, we found that facial motion magnitude influenced ratings of the characters’ respectfulness and calmness but not their attentiveness. These findings provide a basis for how animators can fine-tune facial motion to control perceptions of animated characters’ personalities.
{"title":"Evaluating Animated Characters: Facial Motion Magnitude Influences Personality Perceptions","authors":"Jennifer Hyde, E. Carter, S. Kiesler, J. Hodgins","doi":"10.1145/2851499","DOIUrl":"https://doi.org/10.1145/2851499","url":null,"abstract":"Animated characters are expected to fulfill a variety of social roles across different domains. To be successful and effective, these characters must display a wide range of personalities. Designers and animators create characters with appropriate personalities by using their intuition and artistic expertise. Our goal is to provide evidence-based principles for creating social characters. In this article, we describe the results of two experiments that show how exaggerated and damped facial motion magnitude influence impressions of cartoon and more realistic animated characters. In our first experiment, participants watched animated characters that varied in rendering style and facial motion magnitude. The participants then rated the different animated characters on extroversion, warmth, and competence, which are social traits that are relevant for characters used in entertainment, therapy, and education. We found that facial motion magnitude affected these social traits in cartoon and realistic characters differently. Facial motion magnitude affected ratings of cartoon characters’ extroversion and competence more than their warmth. In contrast, facial motion magnitude affected ratings of realistic characters’ extroversion but not their competence nor warmth. We ran a second experiment to extend the results of the first. In the second experiment, we added emotional valence as a variable. We also asked participants to rate the characters on more specific aspects of warmth, such as respectfulness, calmness, and attentiveness. Although the characters’ emotional valence did not affect ratings, we found that facial motion magnitude influenced ratings of the characters’ respectfulness and calmness but not attentiveness. These findings provide a basis for how animators can fine-tune facial motion to control perceptions of animated characters’ personalities.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"7 1","pages":"8:1-8:17"},"PeriodicalIF":1.6,"publicationDate":"2016-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81841390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual sport coaches guide users through their physical activity and provide motivational support. Users’ motivation can decay rapidly if the movements of the virtual coach are too stereotyped. Kinematic patterns generated while performing a predefined fitness movement can elicit and help sustain users’ interaction and interest in training. Human body kinematics has been shown to convey various social attributes such as gender, identity, and acted emotions. To date, however, no study has examined how spontaneous emotions and personality traits together are perceived from full-body movements. In this article, we study how people make reliable inferences about the spontaneous emotional dimensions and personality traits of human coaches from the kinematic patterns they produce when performing a fitness sequence. Movements were presented to participants via a virtual mannequin to isolate the influence of kinematics on perception. Kinematic patterns of biological movement were analyzed in terms of movement qualities according to the effort-shape notation [Dell 1977] proposed by Laban [1950]. Three studies were performed to analyze the process leading to perception: from coaches’ states and traits, through bodily movements, to observers’ social perception. Thirty-two participants (i.e., observers) were asked to rate the movements of the virtual mannequin in terms of conveyed emotion dimensions, personality traits (five-factor model of personality), and perceived movement qualities (effort-shape) for 56 fitness movement sequences. The results showed high reliability for most of the evaluated dimensions, confirming interobserver agreement from kinematics at zero acquaintance. A large expressive halo merging emotional (e.g., perceived intensity) and personality (e.g., extraversion) aspects was found, driven by perceived kinematic impulsivity and energy. Observers’ perceptions were partially accurate for emotion dimensions and not accurate for personality traits. Together, these results contribute to both the understanding of dimensions of social perception through movement and the design of expressive virtual sport coaches.
{"title":"Perception of Emotion and Personality through Full-Body Movement Qualities: A Sport Coach Case Study","authors":"Tom Giraud, Florian Focone, Virginie Demulier, Jean-Claude Martin, B. Isableu","doi":"10.1145/2791294","DOIUrl":"https://doi.org/10.1145/2791294","url":null,"abstract":"Virtual sport coaches guide users through their physical activity and provide motivational support. Users’ motivation can rapidly decay if the movements of the virtual coach are too stereotyped. Kinematic patterns generated while performing a predefined fitness movement can elicit and help to prolong users’ interaction and interest in training. Human body kinematics has been shown to convey various social attributes such as gender, identity, and acted emotions. To date, no study provides information regarding how spontaneous emotions and personality traits together are perceived from full-body movements. In this article, we study how people make reliable inferences regarding spontaneous emotional dimensions and personality traits of human coaches from kinematic patterns they produced when performing a fitness sequence. Movements were presented to participants via a virtual mannequin to isolate the influence of kinematics on perception. Kinematic patterns of biological movement were analyzed in terms of movement qualities according to the effort-shape [Dell 1977] notation proposed by Laban [1950]. Three studies were performed to provide an analysis of the process leading to perception: from coaches’ states and traits through bodily movements to observers’ social perception. Thirty-two participants (i.e., observers) were asked to rate the movements of the virtual mannequin in terms of conveyed emotion dimensions, personality traits (five-factor model of personality), and perceived movement qualities (effort-shape) from 56 fitness movement sequences. The results showed high reliability for most of the evaluated dimensions, confirming interobserver agreement from kinematics at zero acquaintance. A large expressive halo merging emotional (e.g., perceived intensity) and personality aspects (e.g., extraversion) was found, driven by perceived kinematic impulsivity and energy. Observers’ perceptions were partially accurate for emotion dimensions and were not accurate for personality traits. Together, these results contribute to both the understanding of dimensions of social perception through movement and the design of expressive virtual sport coaches.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"14 1","pages":"2:1-2:27"},"PeriodicalIF":1.6,"publicationDate":"2015-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85019930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shared haptic virtual environments (SHVEs) can be realized using a client-server architecture. In this architecture, each client maintains a local copy of the virtual environment (VE). A centralized physics simulation running on a server calculates the object states based on haptic device position information received from the clients. The object states are sent back to the clients to update the local copies of the VE, which are used to render the interaction forces displayed to the user through a haptic device. Communication delay leads to delayed object-state updates and increased force magnitudes rendered at the clients. In this article, we analyze the effect of communication delay on the magnitude of the forces rendered at the clients during cooperative multi-user interactions with rigid objects. The analysis yields guidelines on the tolerable communication delay; if this delay is exceeded, the increased force magnitude becomes haptically perceivable. We propose an adaptive force-rendering scheme to compensate for this effect, which dynamically changes the stiffness used in force rendering at the clients. Our experimental results, including a subjective user study, verify the applicability of the analysis and of the proposed scheme to compensate for the effect of time-varying communication delay in a multi-user SHVE.
{"title":"Compensating the Effect of Communication Delay in Client-Server--Based Shared Haptic Virtual Environments","authors":"Clemens Schuwerk, Xiao Xu, R. Chaudhari, E. Steinbach","doi":"10.1145/2835176","DOIUrl":"https://doi.org/10.1145/2835176","url":null,"abstract":"Shared haptic virtual environments can be realized using a client-server architecture. In this architecture, each client maintains a local copy of the virtual environment (VE). A centralized physics simulation running on a server calculates the object states based on haptic device position information received from the clients. The object states are sent back to the clients to update the local copies of the VE, which are used to render interaction forces displayed to the user through a haptic device. Communication delay leads to delayed object state updates and increased force feedback rendered at the clients. In this article, we analyze the effect of communication delay on the magnitude of the rendered forces at the clients for cooperative multi-user interactions with rigid objects. The analysis reveals guidelines on the tolerable communication delay. If this delay is exceeded, the increased force magnitude becomes haptically perceivable. We propose an adaptive force rendering scheme to compensate for this effect, which dynamically changes the stiffness used in the force rendering at the clients. Our experimental results, including a subjective user study, verify the applicability of the analysis and the proposed scheme to compensate the effect of time-varying communication delay in a multi-user SHVE.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"47 1","pages":"5:1-5:22"},"PeriodicalIF":1.6,"publicationDate":"2015-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81469389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}