{"title":"Introduction to the SAP 2023 Special Issue","authors":"Alexandre Chapiro, Andrew Robb","doi":"10.1145/3629977","DOIUrl":"https://doi.org/10.1145/3629977","url":null,"abstract":"<p>No abstract available.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138504118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Two-finger Stiffness Discrimination with the Stochastic Resonance Effect","authors":"Komi Chamnongthai, Takahiro Endo, Fumitoshi Matsuno","doi":"10.1145/3630254","DOIUrl":"https://doi.org/10.1145/3630254","url":null,"abstract":"We investigated the ability of two fingers to discriminate stiffness with stochastic resonance. It is known that haptic perception at the fingertip improves when vibrotactile noise propagates to the fingertip, a phenomenon known as stochastic resonance. The improvement in the haptic sensation of a fingertip depends on the intensity of the noise propagating to the fingertip. An improvement in the haptic sensation of multiple fingertips does not require multiple noise sources, such as vibrators, to be attached to multiple fingertips; i.e., even a single vibrator can propagate noise to multiple fingers. In this study, we focus on stiffness discrimination as a task using multiple fingers, in which the thumb and index finger are used to touch an object and perceive its stiffness. We then demonstrate that stiffness perception is improved by propagating sufficiently intense noise to the thumb and index finger using only a single vibrator. The findings indicate the possibility of improving the haptic sensation at multiple fingertips using one vibrator.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136234256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
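The stochastic-resonance mechanism described above — a sub-threshold signal becoming detectable when a moderate amount of noise is added — can be illustrated with a toy threshold-detector simulation. This is a sketch of the general phenomenon only, not the authors' apparatus; the signal amplitude, detection threshold, and noise levels are arbitrary choices:

```python
import numpy as np

def detection_score(noise_sd, signal_amp=0.5, threshold=1.0, n=20000, seed=0):
    """Correlation between a sub-threshold sinusoid and the output of a
    hard-threshold detector, used as a proxy for perceptual sensitivity."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 20 * 2 * np.pi, n)
    signal = signal_amp * np.sin(t)              # amplitude < threshold: never detected alone
    noisy = signal + rng.normal(0.0, noise_sd, n)
    detected = (noisy > threshold).astype(float)
    if detected.std() == 0:                       # nothing ever crosses the threshold
        return 0.0
    return float(np.corrcoef(signal, detected)[0, 1])

# Weak, moderate, and strong noise levels.
scores = {sd: detection_score(sd) for sd in (0.05, 0.5, 3.0)}
```

Detection quality peaks at an intermediate noise intensity and falls off on both sides: with too little noise nothing crosses the threshold, and with too much the crossings no longer track the signal — the same intensity dependence the abstract notes for the fingertip.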
{"title":"Exploring the Relative Effects of Body Position and Locomotion Method on Presence and Cybersickness when Navigating a Virtual Environment","authors":"Aelee Kim, Jeong-Eun Lee, Kyoung-Min Lee","doi":"10.1145/3627706","DOIUrl":"https://doi.org/10.1145/3627706","url":null,"abstract":"The primary goals of this research are to strengthen the understanding of the mechanisms underlying presence and cybersickness in relation to the body position and locomotion method when navigating a virtual environment (VE). In this regard, we compared two body positions (standing and sitting) and four locomotion methods (steering + embodied control [EC], steering + instrumental control [IC], teleportation + EC, and teleportation + IC) to examine the association between body position, locomotion method, presence, and cybersickness in VR. The results of a two-way ANOVA revealed a main effect for locomotion method on presence, with the sense of presence significantly lower for the steering + IC condition. However, there was no main effect for body position on presence, nor was there an interaction between body position and locomotion method. For cybersickness, nonparametric tests were used due to non-normality. The results of Mann-Whitney U tests indicated a statistically significant effect of body position on cybersickness. In particular, the level of cybersickness was significantly higher for a standing position than for a sitting position. In addition, the results of Kruskal-Wallis tests revealed that the locomotion method had a meaningful effect on cybersickness, with participants in the steering conditions feeling stronger symptoms of cybersickness than those in the teleportation conditions. Overall, this study confirmed the relationship between body position, locomotion method, presence, and cybersickness when navigating a VE.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136079077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
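The analysis pipeline this abstract reports — a Mann-Whitney U test for the two body positions and a Kruskal-Wallis test for the four locomotion methods — looks roughly like this in SciPy. The scores, group means, and sample sizes below are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical SSQ-style cybersickness scores (invented, not the study's data).
standing = rng.normal(42.0, 8.0, 40)
sitting = rng.normal(30.0, 8.0, 40)
u_stat, p_position = stats.mannwhitneyu(standing, sitting, alternative="two-sided")

# Four hypothetical locomotion groups: steering+EC, steering+IC, teleport+EC, teleport+IC.
locomotion = [rng.normal(m, 8.0, 40) for m in (44.0, 46.0, 29.0, 27.0)]
h_stat, p_locomotion = stats.kruskal(*locomotion)
```

In practice a significant Kruskal-Wallis result is followed by pairwise post-hoc comparisons with a multiplicity correction to identify which conditions differ.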
{"title":"Design and Validation of a Virtual Reality Mental Rotation Test","authors":"Kristin A. Bartlett, Almudena Palacios-Ibáñez, Jorge Dorribo Camba","doi":"10.1145/3626238","DOIUrl":"https://doi.org/10.1145/3626238","url":null,"abstract":"Mental rotation, a common measure of spatial ability, has traditionally been assessed through paper-based instruments like the Mental Rotation Test (MRT) or the Purdue Spatial Visualization Test: Rotations (PSVT:R). The fact that these instruments present 3D shapes in a 2D format devoid of natural cues like shading and perspective likely limits their ability to accurately assess the fundamental skill of mentally rotating 3D shapes. In this paper, we describe the Virtual Reality Mental Rotation Assessment (VRMRA), a virtual reality-based mental rotation assessment derived from the Revised PSVT:R and MRT. The VRMRA reimagines traditional mental rotation assessments in a room-scale virtual environment and uses hand-tracking and elements of gamification in attempts to create an intuitive, engaging experience for test-takers. To validate the instrument, we compared response patterns in the VRMRA with patterns observed on the MRT and Revised PSVT:R. For the PSVT:R-type questions, items requiring a rotation around two axes were significantly harder than items requiring rotations around a single axis in the VRMRA, which is not the case in the Revised PSVT:R. For the MRT-type questions in the VRMRA, a moderate negative correlation was found between the degree of rotation in the X direction and item difficulty. While the problem of occlusion was reduced, features of the shapes and distractors accounted for 50.6% of the variance in item difficulty. Results suggest that the VRMRA is likely a more accurate tool to assess mental rotation ability in comparison to traditional instruments which present the stimuli through 2D media. Our findings also point to potential problems with the fundamental designs of the Revised PSVT:R and MRT question formats.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135141330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
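The single-axis vs. two-axis distinction in the PSVT:R-type items has a clean geometric reading: two successive 90° rotations about different axes compose into one rotation about an oblique axis, through a larger angle. A quick check with SciPy's rotation utilities (the 90° angles are just an example, not taken from the test items):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

single = R.from_euler("x", 90, degrees=True)          # one-axis item
double = R.from_euler("xy", [90, 90], degrees=True)   # two-axis item

# Net rotation angle of each transform, in degrees.
angle_single = np.degrees(single.magnitude())   # 90
angle_double = np.degrees(double.magnitude())   # 120, about a tilted axis
```

The composed two-axis rotation is a single 120° turn about an axis aligned with no coordinate axis, which is one plausible reason such items are harder than single-axis ones.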
{"title":"The Haptic Intensity Order Illusion is Caused by Amplitude Changes","authors":"Ivan Makarov, Snorri Steinn Stefánsson Thors, Elvar Atli Aevarsson, Finnur Kári Pind Jörgensson, Nashmin Yeganyeh, Árni Kristjánsson, Runar Unnthorsson","doi":"10.1145/3626237","DOIUrl":"https://doi.org/10.1145/3626237","url":null,"abstract":"When two brief vibrotactile stimulations are sequentially applied to observers' lower back, there is systematic mislocalization of the stimulation: If the second stimulation is of higher intensity than the first one, observers tend to respond that the second stimulation was above the first one, and vice versa when a weak-intensity stimulation follows a strong one. This haptic mislocalization effect has been called the intensity order illusion. In the original demonstration of the illusion, the frequency and amplitude of the stimulation were inextricably linked, so that changes in amplitude also resulted in changes in frequency. It is therefore unknown whether the illusion is caused by changes in frequency, amplitude, or both. To test this, we performed a multifactorial experiment, where we used L5 actuators that allow independent manipulation of frequency and amplitude. This approach enabled us to investigate the effects of stimulus amplitude, frequency, and location and assess any potential interactions between these factors. We report four main findings: 1) we were able to replicate the intensity order illusion with the L5 tactors, 2) the illusion mainly occurred in the upwards direction, i.e., when a strong stimulation following a weaker one occurred above or in the same location as the first stimulation, 3) the illusion did not occur when similar stimulation patterns were applied in the horizontal direction, and 4) the illusion was solely due to changes in amplitude, while changes in frequency (100 Hz vs 200 Hz) had no effect.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135893088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The effect of interocular contrast differences on the appearance of augmented reality imagery","authors":"Minqi Wang, Jian Ding, D. Levi, Emily Cooper","doi":"10.1145/3617684","DOIUrl":"https://doi.org/10.1145/3617684","url":null,"abstract":"Augmented reality (AR) devices seek to create compelling visual experiences that merge virtual imagery with the natural world. These devices often rely on wearable near-eye display systems that can optically overlay digital images to the left and right eyes of the user separately. Ideally, the two eyes should be shown images with minimal radiometric differences (e.g., the same overall luminance, contrast, and color in both eyes), but achieving this binocular equality can be challenging in wearable systems with stringent demands on weight and size. Basic vision research has shown that a spectrum of potentially detrimental perceptual effects can be elicited by imagery with radiometric differences between the eyes, but it is not clear whether and how these findings apply to the experience of modern AR devices. In this work, we first develop a testing paradigm for assessing multiple aspects of visual appearance at once, and characterize five key perceptual factors when participants viewed stimuli with interocular contrast differences. In a second experiment, we simulate optical see-through AR imagery using conventional desktop LCD monitors and use the same paradigm to evaluate the multifaceted perceptual implications when the AR display luminance differs between the two eyes. We also include simulations of monocular AR systems (i.e., systems in which only one eye sees the displayed image). Our results suggest that interocular contrast differences can drive several potentially detrimental perceptual effects in binocular AR systems, such as binocular luster, rivalry, and spurious depth differences. In addition, monocular AR displays tend to have more artifacts than binocular displays with a large contrast difference in the two eyes. A better understanding of the range and likelihood of these perceptual phenomena can help inform design choices that support a high-quality user experience in AR.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46456217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
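Interocular contrast difference is commonly quantified by computing Michelson contrast per eye and then comparing the two eyes as a ratio; a minimal sketch of that bookkeeping (the luminance values are made up, and the ratio definition here is one common convention, not necessarily the paper's):

```python
def michelson_contrast(l_max: float, l_min: float) -> float:
    """Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin), ranging over [0, 1]."""
    return (l_max - l_min) / (l_max + l_min)

# Hypothetical display luminances (cd/m^2) for the left- and right-eye images.
left = michelson_contrast(120.0, 40.0)    # 0.5
right = michelson_contrast(120.0, 80.0)   # 0.2
interocular_ratio = min(left, right) / max(left, right)  # 0.4
```

A ratio of 1.0 means binocular equality; the further the ratio falls below 1.0, the larger the interocular contrast difference the two eyes must reconcile.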
{"title":"Calibrated passability perception in virtual reality transfers to augmented reality","authors":"Holly C. Gagnon, Jeanine Stefanucci, Sarah H. Creem-Regehr, Bobby Bodenheimer","doi":"10.1145/3613450","DOIUrl":"https://doi.org/10.1145/3613450","url":null,"abstract":"As applications for virtual reality (VR) and augmented reality (AR) technology increase, it will be important to understand how users perceive their action capabilities in virtual environments. Feedback about actions may help to calibrate perception for action opportunities (affordances) so that action judgments in VR and AR mirror actors' real abilities. Previous work indicates that walking through a virtual doorway while wielding an object can calibrate the perception of one's passability through feedback from collisions. In the current study, we aimed to replicate this calibration through feedback using a different paradigm in VR while also testing whether this calibration transfers to AR. Participants held a pole at 45 degrees and made passability judgments in AR (pretest phase). Then, they made passability judgments in VR and received feedback on those judgments by walking through a virtual doorway while holding the pole (calibration phase). Participants then returned to AR to make posttest passability judgments. Results indicate that feedback calibrated participants' judgments in VR. Moreover, this calibration transferred to the AR environment. In other words, after experiencing feedback in VR, passability judgments in VR and in AR became closer to an actor's actual ability, which could make training applications in these technologies more effective.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48802143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Changes in Navigation Over Time: A Comparison of Teleportation and Joystick-based Locomotion","authors":"Moloud Nasiri, John R. Porter, Kristopher Kohm, Andrew C. Robb","doi":"10.1145/3613902","DOIUrl":"https://doi.org/10.1145/3613902","url":null,"abstract":"Little research has studied how the way people use Virtual Reality (VR) changes as they gain experience with VR. This paper reports the results of an experiment investigating how users' behavior with two locomotion methods changed over four weeks: teleportation and joystick-based locomotion. Twenty novice VR users (no more than 1 hour of prior experience with any form of walking in VR) were recruited. They were loaned an Oculus Quest for four weeks to use on their own time, during which they completed an activity we provided. Results showed that the time required to complete the navigation task decreased faster for joystick-based locomotion. Spatial memory improved with time, particularly when using teleportation (which starts at a disadvantage relative to joystick-based locomotion). Overall cybersickness decreased slightly over time; however, two dimensions of cybersickness (nausea and disorientation) increased notably over time when using joystick-based navigation.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42916637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
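Speed-ups in task completion time over repeated sessions, like the four-week trend this abstract reports, are commonly summarized with a power-law learning curve, T ≈ a·week^b with b < 0. A fit sketch on invented numbers (not the study's measurements):

```python
import numpy as np

weeks = np.array([1.0, 2.0, 3.0, 4.0])
# Hypothetical mean completion times in seconds, one per week of practice.
times = np.array([52.0, 41.0, 36.0, 33.0])

# Fit log(T) = log(a) + b*log(week); b < 0 means participants got faster.
b, log_a = np.polyfit(np.log(weeks), np.log(times), 1)
predicted = np.exp(log_a) * weeks ** b
```

Fitting one such curve per locomotion method and comparing the exponents b is one simple way to express "decreased faster for joystick-based locomotion" quantitatively.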
{"title":"Participatory Design of Virtual Humans for Mental Health Support Among North American Computer Science Students: Voice, Appearance, and the Similarity-attraction Effect","authors":"P. Feijóo-García, Chase Wrenn, J. Stuart, A. G. de Siqueira, Benjamin C. Lok","doi":"10.1145/3613961","DOIUrl":"https://doi.org/10.1145/3613961","url":null,"abstract":"Virtual humans (VHs) have the potential to support mental wellness among college computer science (CS) students. However, designing effective VHs for counseling purposes requires a clear understanding of students' demographics, backgrounds, and expectations. To this end, we conducted two user studies with 216 CS students from a major university in North America. In the first study, we explored how students co-designed VHs to support mental wellness conversations and found that the VHs' demographics, appearance, and voice closely resembled the characteristics of their designers. In the second study, we investigated how the interplay between the VH's appearance and voice impacted the agent's effectiveness in promoting CS students' intentions toward gratitude journaling. Our findings suggest that the active participation of CS students in VH design leads to the creation of agents that closely resemble their designers. Moreover, we found that the interplay between the VH's appearance and voice impacts the agent's effectiveness in promoting CS students' intentions toward mental wellness techniques.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47083061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On Human-like Biases in Convolutional Neural Networks for the Perception of Slant from Texture","authors":"Yuanhao Wang, Qian Zhang, Celine Aubuchon, Jovan T. Kemp, F. Domini, J. Tompkin","doi":"10.1145/3613451","DOIUrl":"https://doi.org/10.1145/3613451","url":null,"abstract":"Depth estimation is fundamental to 3D perception, and humans are known to have biased estimates of depth. This study investigates whether convolutional neural networks (CNNs) can be biased when predicting the sign of curvature and depth of textured surfaces under different viewing conditions (field of view) and surface parameters (slant and texture irregularity). This hypothesis is drawn from the idea that texture gradients described by local neighborhoods—a cue identified in the human vision literature—are also representable within convolutional neural networks. To this end, we trained both unsupervised and supervised CNN models on renderings of slanted surfaces with random polka-dot patterns and analyzed their internal latent representations. The results show that the unsupervised models have similar prediction biases as humans across all experiments, while the supervised CNN models do not exhibit similar biases. The latent spaces of the unsupervised models can be linearly separated into axes representing field of view and optical slant. For supervised models, this ability varies substantially with model architecture and the kind of supervision (continuous slant vs. sign of slant). Even though this study says nothing of any shared mechanism, these findings suggest that unsupervised CNN models can share similar predictions to the human visual system. Code: github.com/brownvc/Slant-CNN-Biases","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46819820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
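The claim that a latent space "can be linearly separated" along factor axes is typically tested with a linear probe: train a linear classifier on the latent codes and check held-out accuracy. A toy version on synthetic latents, where one coordinate carries the sign of slant by construction (the dimensionality, axis choice, and data are invented, not the paper's models):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical 8-D latent codes; axis 2 encodes sign of slant by construction.
z = rng.normal(0.0, 1.0, size=(400, 8))
sign_of_slant = (z[:, 2] > 0).astype(int)

# Linear probe: if the factor is linearly decodable, held-out accuracy is high.
probe = LogisticRegression(max_iter=1000).fit(z[:300], sign_of_slant[:300])
accuracy = probe.score(z[300:], sign_of_slant[300:])
```

High probe accuracy only shows linear decodability of the factor; as the abstract itself cautions, it says nothing about a shared mechanism with human vision.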