Animated characters are expected to fulfill a variety of social roles across different domains. To be successful and effective, these characters must display a wide range of personalities. Designers and animators create characters with appropriate personalities by using their intuition and artistic expertise. Our goal is to provide evidence-based principles for creating social characters. In this article, we describe the results of two experiments that show how exaggerated and damped facial motion magnitudes influence impressions of cartoon and more realistic animated characters. In our first experiment, participants watched animated characters that varied in rendering style and facial motion magnitude. The participants then rated the different animated characters on extroversion, warmth, and competence, social traits that are relevant for characters used in entertainment, therapy, and education. We found that facial motion magnitude affected these social traits differently in cartoon and realistic characters. Facial motion magnitude affected ratings of cartoon characters’ extroversion and competence more than their warmth. In contrast, facial motion magnitude affected ratings of realistic characters’ extroversion but not their competence or warmth. We ran a second experiment to extend the results of the first. In the second experiment, we added emotional valence as a variable and asked participants to rate the characters on more specific aspects of warmth, such as respectfulness, calmness, and attentiveness. Although the characters’ emotional valence did not affect ratings, we found that facial motion magnitude influenced ratings of the characters’ respectfulness and calmness but not their attentiveness. These findings provide a basis for how animators can fine-tune facial motion to control perceptions of animated characters’ personalities.
{"title":"Evaluating Animated Characters: Facial Motion Magnitude Influences Personality Perceptions","authors":"Jennifer Hyde, E. Carter, S. Kiesler, J. Hodgins","doi":"10.1145/2851499","DOIUrl":"https://doi.org/10.1145/2851499","url":null,"abstract":"Animated characters are expected to fulfill a variety of social roles across different domains. To be successful and effective, these characters must display a wide range of personalities. Designers and animators create characters with appropriate personalities by using their intuition and artistic expertise. Our goal is to provide evidence-based principles for creating social characters. In this article, we describe the results of two experiments that show how exaggerated and damped facial motion magnitude influence impressions of cartoon and more realistic animated characters. In our first experiment, participants watched animated characters that varied in rendering style and facial motion magnitude. The participants then rated the different animated characters on extroversion, warmth, and competence, which are social traits that are relevant for characters used in entertainment, therapy, and education. We found that facial motion magnitude affected these social traits in cartoon and realistic characters differently. Facial motion magnitude affected ratings of cartoon characters’ extroversion and competence more than their warmth. In contrast, facial motion magnitude affected ratings of realistic characters’ extroversion but not their competence nor warmth. We ran a second experiment to extend the results of the first. In the second experiment, we added emotional valence as a variable. We also asked participants to rate the characters on more specific aspects of warmth, such as respectfulness, calmness, and attentiveness. Although the characters’ emotional valence did not affect ratings, we found that facial motion magnitude influenced ratings of the characters’ respectfulness and calmness but not attentiveness. These findings provide a basis for how animators can fine-tune facial motion to control perceptions of animated characters’ personalities.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2016-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81841390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual sport coaches guide users through their physical activity and provide motivational support. Users’ motivation can decay rapidly if the movements of the virtual coach are too stereotyped. Kinematic patterns generated while performing a predefined fitness movement can elicit and help to prolong users’ interaction and interest in training. Human body kinematics has been shown to convey various social attributes such as gender, identity, and acted emotions. To date, however, no study has examined how spontaneous emotions and personality traits together are perceived from full-body movements. In this article, we study how reliably people infer the spontaneous emotional dimensions and personality traits of human coaches from the kinematic patterns the coaches produced when performing a fitness sequence. Movements were presented to participants via a virtual mannequin to isolate the influence of kinematics on perception. Kinematic patterns of biological movement were analyzed in terms of movement qualities according to the effort-shape notation [Dell 1977] derived from Laban [1950]. Three studies were performed to analyze the process leading to perception: from coaches’ states and traits, through bodily movements, to observers’ social perception. Thirty-two participants (i.e., observers) rated the movements of the virtual mannequin from 56 fitness movement sequences in terms of conveyed emotion dimensions, personality traits (five-factor model of personality), and perceived movement qualities (effort-shape). The results showed high reliability for most of the evaluated dimensions, confirming interobserver agreement from kinematics at zero acquaintance. A large expressive halo merging emotional (e.g., perceived intensity) and personality (e.g., extraversion) aspects was found, driven by perceived kinematic impulsivity and energy. Observers’ perceptions were partially accurate for emotion dimensions and not accurate for personality traits. Together, these results contribute both to the understanding of the dimensions of social perception through movement and to the design of expressive virtual sport coaches.
{"title":"Perception of Emotion and Personality through Full-Body Movement Qualities: A Sport Coach Case Study","authors":"Tom Giraud, Florian Focone, Virginie Demulier, Jean-Claude Martin, B. Isableu","doi":"10.1145/2791294","DOIUrl":"https://doi.org/10.1145/2791294","url":null,"abstract":"Virtual sport coaches guide users through their physical activity and provide motivational support. Users’ motivation can rapidly decay if the movements of the virtual coach are too stereotyped. Kinematic patterns generated while performing a predefined fitness movement can elicit and help to prolong users’ interaction and interest in training. Human body kinematics has been shown to convey various social attributes such as gender, identity, and acted emotions. To date, no study provides information regarding how spontaneous emotions and personality traits together are perceived from full-body movements. In this article, we study how people make reliable inferences regarding spontaneous emotional dimensions and personality traits of human coaches from kinematic patterns they produced when performing a fitness sequence. Movements were presented to participants via a virtual mannequin to isolate the influence of kinematics on perception. Kinematic patterns of biological movement were analyzed in terms of movement qualities according to the effort-shape [Dell 1977] notation proposed by Laban [1950]. Three studies were performed to provide an analysis of the process leading to perception: from coaches’ states and traits through bodily movements to observers’ social perception. Thirty-two participants (i.e., observers) were asked to rate the movements of the virtual mannequin in terms of conveyed emotion dimensions, personality traits (five-factor model of personality), and perceived movement qualities (effort-shape) from 56 fitness movement sequences. The results showed high reliability for most of the evaluated dimensions, confirming interobserver agreement from kinematics at zero acquaintance. A large expressive halo merging emotional (e.g., perceived intensity) and personality aspects (e.g., extraversion) was found, driven by perceived kinematic impulsivity and energy. Observers’ perceptions were partially accurate for emotion dimensions and were not accurate for personality traits. Together, these results contribute to both the understanding of dimensions of social perception through movement and the design of expressive virtual sport coaches.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2015-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85019930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shared haptic virtual environments (SHVEs) can be realized using a client-server architecture. In this architecture, each client maintains a local copy of the virtual environment (VE). A centralized physics simulation running on a server calculates the object states based on haptic device position information received from the clients. The object states are sent back to the clients to update the local copies of the VE, which are used to render the interaction forces displayed to the user through a haptic device. Communication delay leads to delayed object state updates and inflates the magnitude of the forces rendered at the clients. In this article, we analyze the effect of communication delay on the magnitude of the rendered forces at the clients for cooperative multi-user interactions with rigid objects. The analysis yields guidelines on the tolerable communication delay; if this delay is exceeded, the increase in force magnitude becomes haptically perceivable. We propose an adaptive force rendering scheme that compensates for this effect by dynamically changing the stiffness used in the force rendering at the clients. Our experimental results, including a subjective user study, verify the applicability of the analysis and show that the proposed scheme compensates for the effect of time-varying communication delay in a multi-user SHVE.
{"title":"Compensating the Effect of Communication Delay in Client-Server--Based Shared Haptic Virtual Environments","authors":"Clemens Schuwerk, Xiao Xu, R. Chaudhari, E. Steinbach","doi":"10.1145/2835176","DOIUrl":"https://doi.org/10.1145/2835176","url":null,"abstract":"Shared haptic virtual environments can be realized using a client-server architecture. In this architecture, each client maintains a local copy of the virtual environment (VE). A centralized physics simulation running on a server calculates the object states based on haptic device position information received from the clients. The object states are sent back to the clients to update the local copies of the VE, which are used to render interaction forces displayed to the user through a haptic device. Communication delay leads to delayed object state updates and increased force feedback rendered at the clients. In this article, we analyze the effect of communication delay on the magnitude of the rendered forces at the clients for cooperative multi-user interactions with rigid objects. The analysis reveals guidelines on the tolerable communication delay. If this delay is exceeded, the increased force magnitude becomes haptically perceivable. We propose an adaptive force rendering scheme to compensate for this effect, which dynamically changes the stiffness used in the force rendering at the clients. Our experimental results, including a subjective user study, verify the applicability of the analysis and the proposed scheme to compensate the effect of time-varying communication delay in a multi-user SHVE.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2015-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81469389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We easily adapt to changes in the environment that involve cross-sensory discrepancies (e.g., between vision and proprioception). Adaptation can lead to changes in motor commands so that the experienced sensory consequences are appropriate for the new environment (e.g., we program a movement differently while wearing prisms that shift our visual space). In addition to these motor changes, perceptual judgments of space can also be altered (e.g., how far can I reach with my arm?). However, in previous studies that assessed perceptual judgments of space after visuomotor adaptation, the manipulation was always a planar spatial shift, and changes in body perception could not be assessed directly. In this study, we took advantage of immersive virtual reality to investigate the effects of velocity-dependent (spatiotemporal) and spatial scaling distortions of arm movements on space and body perception. Exploiting the perceptual illusion of embodiment in an entire virtual body, we endowed subjects with new spatiotemporal or spatial 3D mappings between motor commands and their sensory consequences. The results imply that spatiotemporal manipulations that render movements 2 and 4 times faster can significantly change participants’ proprioceptive judgments of a virtual object’s size without affecting perceived body ownership, although they did affect the sense of agency over the movements. Equivalent spatial manipulations of 11 and 22 degrees of angular offset also had a significant effect on the perceived size of the virtual object; however, this mismatched information affected neither the sense of body ownership nor agency. We conclude that adaptation to spatial and spatiotemporal distortions can similarly change our perception of space, although spatiotemporal distortions are more easily detected.
{"title":"The Effects of Visuomotor Calibration to the Perceived Space and Body, through Embodiment in Immersive Virtual Reality","authors":"Elena Kokkinara, M. Slater, Joan López-Moliner","doi":"10.1145/2818998","DOIUrl":"https://doi.org/10.1145/2818998","url":null,"abstract":"We easily adapt to changes in the environment that involve cross-sensory discrepancies (e.g., between vision and proprioception). Adaptation can lead to changes in motor commands so that the experienced sensory consequences are appropriate for the new environment (e.g., we program a movement differently while wearing prisms that shift our visual space). In addition to these motor changes, perceptual judgments of space can also be altered (e.g., how far can I reach with my arm?). However, in previous studies that assessed perceptual judgments of space after visuomotor adaptation, the manipulation was always a planar spatial shift, whereas changes in body perception could not directly be assessed. In this study, we investigated the effects of velocity-dependent (spatiotemporal) and spatial scaling distortions of arm movements on space and body perception, taking advantage of immersive virtual reality. Exploiting the perceptual illusion of embodiment in an entire virtual body, we endowed subjects with new spatiotemporal or spatial 3D mappings between motor commands and their sensory consequences. The results imply that spatiotemporal manipulation of 2 and 4 times faster can significantly change participants’ proprioceptive judgments of a virtual object’s size without affecting the perceived body ownership, although it did affect the agency of the movements. Equivalent spatial manipulations of 11 and 22 degrees of angular offset also had a significant effect on the perceived virtual object’s size; however, the mismatched information did not affect either the sense of body ownership or agency. We conclude that adaptation to spatial and spatiotemporal distortion can similarly change our perception of space, although spatiotemporal distortions can more easily be detected.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2015-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81694315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article details a two-step method for quantifying eye movement transitions between areas of interest (AOIs). First, individuals' gaze switching patterns, represented by fixated AOI sequences, are modeled as Markov chains. Second, Shannon's entropy coefficient of the fitted Markov model is computed to quantify the complexity of individual switching patterns. To determine the overall distribution of attention over AOIs, the entropy coefficient of individuals' stationary distribution of fixations is also calculated. The novelty of the method is that it captures the variability of individual differences in eye movement characteristics, which are then summarized statistically. The method is demonstrated on gaze data collected in two studies of free viewing of classical art paintings. Normalized Shannon's entropy, derived from individual transition matrices, is related to participants' individual differences as well as to either their aesthetic impression or their recognition of the artwork. Low transition entropy combined with high stationary entropy suggests greater curiosity mixed with a higher subjective aesthetic affinity toward the artwork, possibly indicative of more deliberate visual scanning. Meanwhile, high transition and stationary entropies together may be indicative of recognition of familiar artwork.
{"title":"Gaze Transition Entropy","authors":"Krzysztof Krejtz, A. Duchowski, T. Szmidt, I. Krejtz, Fernando González Perilli, A. Pires, A. Vilaró, N. Villalobos","doi":"10.1145/2834121","DOIUrl":"https://doi.org/10.1145/2834121","url":null,"abstract":"This article details a two-step method of quantifying eye movement transitions between areas of interest (AOIs). First, individuals' gaze switching patterns, represented by fixated AOI sequences, are modeled as Markov chains. Second, Shannon's entropy coefficient of the fit Markov model is computed to quantify the complexity of individual switching patterns. To determine the overall distribution of attention over AOIs, the entropy coefficient of individuals' stationary distribution of fixations is calculated. The novelty of the method is that it captures the variability of individual differences in eye movement characteristics, which are then summarized statistically. The method is demonstrated on gaze data collected from two studies, during free viewing of classical art paintings. Normalized Shannon's entropy, derived from individual transition matrices, is related to participants' individual differences as well as to either their aesthetic impression or recognition of artwork. Low transition and high stationary entropies suggest greater curiosity mixed with a higher subjective aesthetic affinity toward artwork, possibly indicative of visual scanning of the artwork in a more deliberate way. Meanwhile, both high transition and stationary entropies may be indicative of recognition of familiar artwork.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2015-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90333965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A better understanding of how intentions and traits are perceived from body movements is required for the design of more effective virtual characters that behave in a socially realistic manner. For this purpose, realistic body motion, captured from human movements, is used increasingly to create characters with natural animations in games and entertainment. However, it is not always clear to programmers and designers which motion parameters best convey specific information such as emotions, intentions, or traits. We conducted two experiments to investigate whether the perceived traits of actors could be determined from their body motion, and whether these traits were associated with their perceived intentions. We first recorded body motions from 26 professional actors who were instructed to move in a “hero”-like or “villain”-like manner. In the first experiment, 190 participants viewed individual video recordings of these actors and rated the body motion stimuli along a series of dimensions (intentions, attractiveness, dominance, trustworthiness, and distinctiveness). Ratings were highly consistent across observers, suggesting that social traits are readily determined from body motion. Moreover, correlational analyses between these ratings revealed consistent associations across traits; for example, perceived “good” intentions were associated with higher ratings of attractiveness and dominance. Experiment 2 was designed to elucidate the qualitative body motion cues that were critical for determining specific intentions and traits from the hero- and villain-like body movements. The results revealed distinct body motions that were readily associated with the perception of either “good” or “bad” intentions. Moreover, regression analyses revealed that these ratings accurately predicted the perception of the portrayed character type. These findings indicate that intentions and social traits are communicated effectively via specific sets of body motion features, with important implications for designing the motion of virtual characters to convey desired social information.
{"title":"Strutting Hero, Sneaking Villain: Utilizing Body Motion Cues to Predict the Intentions of Others","authors":"H. Kiiski, Ludovic Hoyet, Andrew T. Woods, C. O'Sullivan, F. Newell","doi":"10.1145/2791293","DOIUrl":"https://doi.org/10.1145/2791293","url":null,"abstract":"A better understanding of how intentions and traits are perceived from body movements is required for the design of more effective virtual characters that behave in a socially realistic manner. For this purpose, realistic body motion, captured from human movements, is being used more frequently for creating characters with natural animations in games and entertainment. However, it is not always clear for programmers and designers which specific motion parameters best convey specific information such as certain emotions, intentions, or traits. We conducted two experiments to investigate whether the perceived traits of actors could be determined from their body motion, and whether these traits were associated with their perceived intentions. We first recorded body motions from 26 professional actors, who were instructed to move in a “hero”-like or a “villain”-like manner. In the first experiment, 190 participants viewed individual video recordings of these actors and were required to provide ratings to the body motion stimuli along a series of different cognitive dimensions (intentions, attractiveness, dominance, trustworthiness, and distinctiveness). The intersubject ratings across observers were highly consistent, suggesting that social traits are readily determined from body motion. Moreover, correlational analyses between these ratings revealed consistent associations across traits, for example, that perceived “good” intentions were associated with higher ratings of attractiveness and dominance. Experiment 2 was designed to elucidate the qualitative body motion cues that were critical for determining specific intentions and traits from the hero- and villain-like body movements. The results revealed distinct body motions that were readily associated with the perception of either “good” or “bad” intentions. Moreover, regression analyses revealed that these ratings accurately predicted the perception of the portrayed character type. These findings indicate that intentions and social traits are communicated effectively via specific sets of body motion features. Furthermore, these results have important implications for the design of the motion of virtual characters to convey desired social information.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2015-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75452961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Light projection is a powerful technique for editing the appearance of objects in the real world. Based on pixel-wise modification of light transport, previous techniques have successfully modified static surface properties such as surface color, dynamic range, gloss, and shading. Here, we propose an alternative light projection technique that adds a variety of illusory yet realistic distortions to a wide range of static 2D and 3D projection targets. The key idea of our technique, referred to as Deformation Lamps, is to project only dynamic luminance information, which effectively activates motion (and shape) processing in the visual system while preserving the color and texture of the original object. Although the projected dynamic luminance information is spatially inconsistent with the color and texture of the target object, the observer's brain automatically combines these sensory signals so as to correct the inconsistency across visual attributes. We conducted a psychophysical experiment to investigate the characteristics of this inconsistency correction and found that the correction depends critically on the retinal magnitude of the inconsistency. Another experiment showed that the perceived magnitude of the image deformation produced by our technique was underestimated. These results rule out the possibility that the effect obtained by our technique stems simply from the physical change in an object's appearance under light projection. Finally, we discuss how our technique can make observers perceive vivid and natural movement, deformation, or oscillation of a variety of static objects, including drawn pictures, printed photographs, sculptures with 3D shading, and objects with natural textures, including human bodies.
{"title":"Deformation Lamps: A Projection Technique to Make Static Objects Perceptually Dynamic","authors":"Takahiro Kawabe, Taiki Fukiage, Masataka Sawayama, S. Nishida","doi":"10.1145/2874358","DOIUrl":"https://doi.org/10.1145/2874358","url":null,"abstract":"Light projection is a powerful technique that can be used to edit the appearance of objects in the real world. Based on pixel-wise modification of light transport, previous techniques have successfully modified static surface properties such as surface color, dynamic range, gloss, and shading. Here, we propose an alternative light projection technique that adds a variety of illusory yet realistic distortions to a wide range of static 2D and 3D projection targets. The key idea of our technique, referred to as (Deformation Lamps), is to project only dynamic luminance information, which effectively activates the motion (and shape) processing in the visual system while preserving the color and texture of the original object. Although the projected dynamic luminance information is spatially inconsistent with the color and texture of the target object, the observer's brain automatically combines these sensory signals in such a way as to correct the inconsistency across visual attributes. We conducted a psychophysical experiment to investigate the characteristics of the inconsistency correction and found that the correction was critically dependent on the retinal magnitude of the inconsistency. Another experiment showed that the perceived magnitude of image deformation produced by our techniques was underestimated. The results ruled out the possibility that the effect obtained by our technique stemmed simply from the physical change in an object's appearance by light projection. Finally, we discuss how our techniques can make the observers perceive a vivid and natural movement, deformation, or oscillation of a variety of static objects, including drawn pictures, printed photographs, sculptures with 3D shading, and objects with natural textures including human bodies.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2015-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84764483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In virtual environments, perceived distances are frequently reported to be shorter than intended. One important parameter for spatial perception in a stereoscopic virtual environment is the stereo base—that is, the distance between the two viewing cameras. We systematically varied the stereo base relative to the interpupillary distance (IPD) and examined its influence on distance and size perception. Furthermore, we tested whether individually adjusting the stereo base through an alignment task would reduce errors in distance estimation. Participants performed reaching movements toward a virtual tennis ball either with closed eyes (blind reaches) or open eyes (sighted reaches). The stereo base was set to (a) the participant's IPD, (b) proportionally smaller than the IPD, (c) proportionally larger than the IPD, or (d) a value adjusted according to the participant's performance in an alignment task conducted beforehand. Overall, consistent with previous research, distances were underestimated. As expected, with a smaller stereo base the virtual object was perceived as being farther away and bigger, whereas with a larger stereo base it was perceived as being nearer and smaller. However, the manipulation of the stereo base influenced blind reaching estimates to a smaller extent than expected, which might be due to a combination of binocular disparity and pictorial depth cues. In sighted reaching, when visual feedback was available, the stereo base had a larger effect, presumably because of the use of disparity matching. The use of an individually adjusted stereo base diminished the average underestimation but did not reduce interindividual variance. Interindividual differences were task specific and could not be explained by differences in stereo acuity or fixation disparity.
{"title":"The Influence of the Stereo Base on Blind and Sighted Reaches in a Virtual Environment","authors":"Rebekka S. Renner, Erik Steindecker, Mathias Müller, B. Velichkovsky, R. Stelzer, S. Pannasch, J. Helmert","doi":"10.1145/2724716","DOIUrl":"https://doi.org/10.1145/2724716","url":null,"abstract":"In virtual environments, perceived distances are frequently reported to be shorter than intended. One important parameter for spatial perception in a stereoscopic virtual environment is the stereo base—that is, the distance between the two viewing cameras. We systematically varied the stereo base relative to the interpupillary distance (IPD) and examined influences on distance and size perception. Furthermore, we tested whether an individual adjustment of the stereo base through an alignment task would reduce the errors in distance estimation. Participants performed reaching movements toward a virtual tennis ball either with closed eyes (blind reaches) or open eyes (sighted reaches). Using the participants' individual IPD, the stereo base was set to (a) the IPD, (b) proportionally smaller, (c) proportionally larger, or (d) adjusted according to the individual performance in an alignment task that was conducted beforehand. Overall, consistent with previous research, distances were underestimated. As expected, with a smaller stereo base, the virtual object was perceived as being farther away and bigger, in contrast to a larger stereo base, where the virtual object was perceived to be nearer and smaller. However, the manipulation of the stereo base influenced blind reaching estimates to a smaller extent than expected, which might be due to a combination of binocular disparity and pictorial depth cues. In sighted reaching, when visual feedback was available, presumably the use of disparity matching led to a larger effect of the stereo base. The use of an individually adjusted stereo base diminished the average underestimation but did not reduce interindividual variance. Interindividual differences were task specific and could not be explained through differences in stereo acuity or fixation disparity.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2015-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78745206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
People judge what they can and cannot do all the time when acting in the physical world. Can I step over that fence, or do I need to duck under it? Can I step off of that ledge, or do I need to climb off of it? These perceived qualities of the environment that allow people to act are called affordances. This article compares people’s judgments of affordances on two tasks in both the real world and in virtual environments presented with head-mounted displays. The two tasks were stepping over or ducking under a pole, and stepping straight off of a ledge. Comparisons between the real world and virtual environments are important because they allow us to evaluate the fidelity of virtual environments. They also matter because virtual environment technologies enable precise control of the myriad perceptual cues at work in the physical world, deepening our understanding of how people use vision to decide how to act. In the experiments presented here, the presence or absence of a self-avatar—an animated graphical representation of a person embedded in the virtual environment—was a central factor. Another important factor was the presence or absence of action, that is, whether people performed the task or merely reported whether they could perform it. The results show that animated self-avatars provide critical information for people deciding what they can and cannot do in virtual environments, and that action plays a significant role in people’s affordance judgments.
{"title":"Affordance Judgments in HMD-Based Virtual Environments: Stepping over a Pole and Stepping off a Ledge","authors":"Qiufeng Lin, J. Rieser, Bobby Bodenheimer","doi":"10.1145/2720020","DOIUrl":"https://doi.org/10.1145/2720020","url":null,"abstract":"People judge what they can and cannot do all the time when acting in the physical world. Can I step over that fence or do I need to duck under it? Can I step off of that ledge or do I need to climb off of it? These qualities of the environment that people perceive that allow them to act are called affordances. This article compares people’s judgments of affordances on two tasks in both the real world and in virtual environments presented with head-mounted displays. The two tasks were stepping over or ducking under a pole, and stepping straight off of a ledge. Comparisons between the real world and virtual environments are important because they allow us to evaluate the fidelity of virtual environments. Another reason is that virtual environment technologies enable precise control of the myriad perceptual cues at work in the physical world and deepen our understanding of how people use vision to decide how to act. In the experiments presented here, the presence or absence of a self-avatar—an animated graphical representation of a person embedded in the virtual environment—was a central factor. Another important factor was the presence or absence of action, that is, whether people performed the task or reported that they could or could not perform the task. The results show that animated self-avatars provide critical information for people deciding what they can and cannot do in virtual environments, and that action is significant in people’s affordance judgments.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2015-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84988637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Presenting points of interest in the environment by means of audio augmented reality offers benefits over traditional visual augmented reality and map-based approaches. However, the presentation of distant virtual sound sources is problematic. This study examines combining well-known auditory distance cues to convey the distance of points of interest. The results indicate that although the provided cues are intuitively mapped to relatively short distances, users can, with only a little training, learn to map these cues to larger distances.
{"title":"Auditory Distance Presentation in an Urban Augmented Reality Environment","authors":"R. Albrecht, T. Lokki","doi":"10.1145/2723568","DOIUrl":"https://doi.org/10.1145/2723568","url":null,"abstract":"Presenting points of interest in the environment by means of audio augmented reality offers benefits compared with traditional visual augmented reality and map-based approaches. However, presentation of distant virtual sound sources is problematic. This study looks at combining well-known auditory distance cues to convey the distance of points of interest. The results indicate that although the provided cues are intuitively mapped to relatively short distances, users can with only little training learn to map these cues to larger distances.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2015-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74617743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}