Errors that arise from a mismatch between the dynamics of a person's motion and the visualized movements of their avatar in virtual reality are termed 'physicality errors' to distinguish them from simple physical errors, such as footskate. Physicality errors involve plausible motions, but with dynamic inconsistencies. Even with perfect tracking and ideal virtual worlds, such errors are inevitable in virtual reality whenever a person adopts an avatar that does not match their own proportions or lifts a virtual object that appears heavier than the movement of their hand would suggest. This study investigates people's sensitivity to physicality errors in order to understand when they are likely to be noticeable and need to be mitigated. It uses a simple, well-understood exercise, a dumbbell lift, to explore the impact of motion kinematics and varied sources of visual information, including changing body size, changing the size of manipulated objects, and displaying muscular strain. Results suggest that kinematic (motion) information has a dominant impact on the perception of effort, whereas visual information, particularly the visual size of the lifted object, has a strong impact on perceived weight. This can lead to perceptual mismatches that reduce perceived naturalness: small errors may go unnoticed, but large errors reduce naturalness. Further results are discussed that inform the requirements for animation algorithms.
{"title":"Understanding the Impact of Visual and Kinematic Information on the Perception of Physicality Errors","authors":"Goksu Yamac, Carol O’Sullivan, Michael Neff","doi":"10.1145/3660636","DOIUrl":"https://doi.org/10.1145/3660636","url":null,"abstract":"<p>Errors that arise due to a mismatch in the dynamics of a person’s motion and the visualized movements of their avatar in virtual reality are termed ‘physicality errors’ to distinguish them from simple physical errors, such as footskate. Physicality errors involve plausible motions, but with dynamic inconsistencies. Even with perfect tracking and ideal virtual worlds, such errors are inevitable in virtual reality whenever a person adopts an avatar that does not match their own proportions or lifts a virtual object that appears heavier than the movement of their hand. This study investigates people’s sensitivity to physicality errors in order to understand when they are likely to be noticeable and need to be mitigated. It uses a simple, well-understood exercise of a dumbbell lift to explore the impact of motion kinematics and varied sources of visual information, including changing body size, changing the size of manipulated objects, and displaying muscular strain. Results suggest that kinematic (motion) information has a dominant impact on perception of effort, but visual information, particularly the visual size of the lifted object, has a strong impact on perceived weight. This can lead to perceptual mismatches which reduce perceived naturalness. Small errors may not be noticeable, but large errors reduce naturalness. Further results are discussed, which inform the requirements for animation algorithms.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"219 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140626660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The identification of emotions is an open research area and can play a leading role in improving socio-emotional skills such as empathy, sensitivity, and emotion recognition in humans. The current study aimed to use Event-Related Potential (ERP) components (N100, N200, P200, P300, and the early, middle, and late Late Positive Potential (LPP)) of EEG data for the classification of emotional states (positive, negative, neutral). EEG data were collected from 62 healthy individuals over 18 electrodes. An emotional paradigm with pictures from the International Affective Picture System (IAPS) was used to record the EEG data. A linear Support Vector Machine (C = 0.1) was used to classify emotions, and a forward feature selection approach was used to eliminate irrelevant features. The early LPP component, the most discriminative of all the ERP components, achieved the highest single-component classification accuracy (70.16%) for discriminating negative from neutral stimuli. When all ERP components were used as a combined feature set, the classification of negative versus neutral stimuli had the best accuracy (79.84%), followed by positive versus negative stimuli (75.00%) and positive versus neutral stimuli (68.55%). Overall, the combined ERP component feature sets outperformed single ERP component feature sets for all stimulus pairings in terms of accuracy. These findings are promising for further research and development of EEG-based emotion recognition systems.
{"title":"Decoding Functional Brain Data for Emotion Recognition: A Machine Learning Approach","authors":"Emine Elif Tülay, Tuğçe Ballı","doi":"10.1145/3657638","DOIUrl":"https://doi.org/10.1145/3657638","url":null,"abstract":"<p>The identification of emotions is an open research area and has a potential leading role in the improvement of socio-emotional skills such as empathy, sensitivity, and emotion recognition in humans. The current study aimed to use Event Related Potential (ERP) components (N100, N200, P200, P300, early Late Positive Potential (LPP), middle LPP, and late LPP) of EEG data for the classification of emotional states (positive, negative, neutral). EEG data were collected from 62 healthy individuals over 18 electrodes. An emotional paradigm with pictures from the International Affective Picture System (IAPS) was used to record the EEG data. A linear Support Vector Machine (C=0.1) was used to classify emotions, and a forward feature selection approach was used to eliminate irrelevant features. The early LPP component, which was the most discriminative among all ERP components, had the highest classification accuracy (70.16%) for identifying negative and neutral stimuli. The classification of negative versus neutral stimuli had the best accuracy (79.84%) when all ERP components were used as a combined feature set, followed by positive versus negative stimuli (75.00%) and positive versus neutral stimuli (68.55%). Overall, the combined ERP component feature sets outperformed single ERP component feature sets for all stimulus pairings in terms of accuracy. These findings are promising for further research and development of EEG-based emotion recognition systems.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"221 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140611976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Immersive virtual environments populated by real and virtual humans provide valuable insights into human decision-making processes under controlled conditions. The existing literature indicates elevated comfort, higher presence, and a more positive user experience when virtual humans exhibit rich behaviors. Building on this knowledge, we conducted a web-based, interactive study in which participants were embodied within a virtual crowd whose complex behaviors were driven by an underlying psychological model. While participants interacted with a group of autonomous humanoid agents in a Black Friday-style shopping scenario, the platform recorded their non-verbal behaviors. In this independent-subjects study, we investigated behavioral and emotional variation across participants with diverse backgrounds, focusing on two factors: perceived agency and the crowd's emotional disposition. For perceived agency, one group of participants was told that the other crowd members were avatars controlled by humans, while another group was told that they were artificial agents. For emotional disposition, the crowd behaved either in a docile or a hostile manner. The results suggest that the crowd's disposition and specific participant traits significantly affected certain emotions and behaviors. For instance, participants placed in a hostile crowd collected fewer items and reported a greater increase in negative emotions. However, perceived agency did not yield any statistically significant effects.
{"title":"Assessing Human Reactions in a Virtual Crowd Based on Crowd Disposition, Perceived Agency, and User Traits","authors":"Bennie Bendiksen, Nana Lin, JieHyun Kim, Funda Durupinar","doi":"10.1145/3658670","DOIUrl":"https://doi.org/10.1145/3658670","url":null,"abstract":"<p>Immersive virtual environments populated by real and virtual humans provide valuable insights into human decision-making processes under controlled conditions. Existing literature indicates elevated comfort, higher presence, and a more positive user experience when virtual humans exhibit rich behaviors. Based on this knowledge, we conducted a web-based, interactive study, in which participants were embodied within a virtual crowd with complex behaviors driven by an underlying psychological model. While participants interacted with a group of autonomous humanoid agents in a shopping scenario similar to Black Friday, the platform recorded their non-verbal behaviors. In this independent-subjects study, we investigated behavioral and emotional variances across participants with diverse backgrounds focusing on two conditions: perceived agency and the crowd’s emotional disposition. For perceived agency, one group of participants was told that the other crowd members were avatars controlled by humans, while another group was told that they were artificial agents. For emotional disposition, the crowd behaved either in a docile or hostile manner. The results suggest that the crowd’s disposition and specific participant traits significantly affected certain emotions and behaviors. For instance, participants collected fewer items and reported a higher increase of negative emotions when placed in a hostile crowd. However, perceived agency did not yield any statistically significant effects.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"65 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140594042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose an end-to-end generative adversarial network that allows for controllable ink wash painting generation from sketches, with colors specified via color hints. To the best of our knowledge, this is the first study of interactive colorization of Chinese ink wash paintings from sketches. To help our network understand ink style and artistic conception, we introduce an ink style prediction mechanism for our discriminator, which enables the discriminator to accurately predict the style with the help of a pre-trained style encoder. We also design our generator to receive multi-scale feature information from a feature pyramid network for reconstructing the details of ink wash paintings. Experimental results and a user study show that ink wash paintings generated by our network have higher realism and richer artistic conception than those produced by existing image generation methods.
{"title":"Color Hint-guided Ink Wash Painting Colorization with Ink Style Prediction Mechanism","authors":"Yao Zeng, Xiaoyu Liu, Yijun Wang, Junsong Zhang","doi":"10.1145/3657637","DOIUrl":"https://doi.org/10.1145/3657637","url":null,"abstract":"<p>We propose an end-to-end generative adversarial network that allows for controllable ink wash painting generation from sketches by specifying the colors via color hints. To the best of our knowledge, this is the first study for interactive Chinese ink wash painting colorization from sketches. To help our network understand the ink style and artistic conception, we introduced an ink style prediction mechanism for our discriminator, which enables the discriminator to accurately predict the style with the help of a pre-trained style encoder. We also designed our generator to receive multi-scale feature information from the feature pyramid network for detail reconstruction of ink wash painting. Experimental results and user study show that ink wash paintings generated by our network have higher realism and richer artistic conception than existing image generation methods.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"50 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140593908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
According to previous research, humans are generally poor at adapting to earth-discrepant gravity, especially in Virtual Reality (VR), which cannot simulate the effects of gravity on the physical body. Most previous VR research on gravity adaptation has used perceptual or interception tasks, although adaptation in these tasks seems to be especially challenging compared to tasks with a more pronounced motor component. This paper describes the results of two between-subjects studies (n = 60 and n = 42) that investigated adaptation to increased gravity simulated by an interactive VR experience. The experimental procedure was identical in both studies: in the adaptation phase, one group was trained to throw a ball at a target using Valve Index motion controllers under gravity simulated at five times earth's gravity (hypergravity group), whereas another group threw at a longer-distance target under normal gravity (normal gravity group), so that both groups had to exert the same amount of force when throwing (approximated manually in Study 1 and mathematically in Study 2). Then, in the measurement phase, both groups repeatedly threw a virtual ball at targets in normal gravity. In this phase, the trajectory of the ball was hidden at the moment of release, so that participants had to rely on their internal model of gravity to hit the targets rather than on visual feedback. Target distances were placed within the same range for both groups in the measurement phase. Following our preregistered hypotheses, we predicted that the hypergravity group would display worse overall throwing accuracy and would specifically overshoot the target more often than the normal gravity group. Our experimental data supported both hypotheses in both studies. The findings indicate that training on an interactive task in higher simulated gravity led participants in both studies to update their internal gravity models, and therefore some adaptation to higher gravity did indeed occur. However, our exploratory analysis also indicates that participants in the hypergravity group gradually began to regain their throwing accuracy over the course of the measurement phase.
{"title":"Adaptation to Simulated Hypergravity in a Virtual Reality Throwing Task","authors":"Matti Pouke, Elmeri Uotila, Evan G. Center, Kalle G. Timperi, Alexis P. Chambers, Timo Ojala, Steven M. LaValle","doi":"10.1145/3643849","DOIUrl":"https://doi.org/10.1145/3643849","url":null,"abstract":"<p>According to previous research, humans are generally poor at adapting to earth-discrepant gravity, especially in Virtual Reality (VR), which cannot simulate the effects of gravity on the physical body. Most of the previous VR research on gravity adaptation has used perceptual or interception tasks, although adaptation to these tasks seems to be especially challenging compared to tasks with a more pronounced motor component. This paper describes the results of two between-subjects studies (<i>n</i> = 60 and <i>n</i> = 42) that investigated adaptation to increased gravity simulated by an interactive VR experience. The experimental procedure was identical in both studies: In the adaptation phase, one group was trained to throw a ball at a target using Valve Index motion controllers in gravity that was simulated at five times of earth’s gravity (hypergravity group), whereas another group threw at a longer-distance target under normal gravity (normal gravity group) so that both groups had to exert the same amount of force when throwing (approximated manually in Study 1 and mathematically in Study 2). Then, in the measurement phase, both groups repeatedly threw a virtual ball at targets in normal gravity. In this phase, the trajectory of the ball was hidden at the moment of release so that the participants had to rely on their internal model of gravity to hit the targets rather than on visual feedback. Target distances were placed within the same range for both groups in the measurement phase. According to our preregistered hypotheses, we predicted that the hypergravity group would display worse overall throwing accuracy, and would specifically overshoot the target more often than the normal gravity group. Our experimental data supported both hypotheses in both studies. The findings indicate that training an interactive task in higher simulated gravity led participants in both studies to update their internal gravity models, and therefore, some adaptation to higher gravity did indeed occur. However, our exploratory analysis also indicates that the participants in the hypergravity group began to gradually regain their throwing accuracy throughout the course of the measurement phase.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"63 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2024-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139690250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Facial morphs created between two identities resemble both of the faces used to create the morph. Consequently, humans and machines are prone to mistake morphs made from two identities for either of the faces used to create the morph. This vulnerability has been exploited in "morph attacks" in security scenarios. Here, we asked whether the "other-race effect" (ORE), the human advantage for identifying own- versus other-race faces, exacerbates morph attack susceptibility for humans. We also asked whether face-identification performance in a deep convolutional neural network (DCNN) is affected by the race of morphed faces. Caucasian (CA) and East-Asian (EA) participants performed a face-identity matching task on pairs of CA and EA face images in two conditions. In the morph condition, different-identity pairs consisted of an image of identity "A" and a 50/50 morph between images of identities "A" and "B". In the baseline condition, morphs of different identities never appeared. As expected, morphs were mistakenly identified more often than original face images. Of primary interest, morph identification was substantially worse for cross-race faces than for own-race faces. Similar to humans, the DCNN performed more accurately for original face images than for morphed image pairs. Notably, the deep network proved substantially more accurate than humans in both cases. The results point to the possibility that DCNNs might be useful for improving face identification accuracy when morphed faces are presented. They also indicate the significance of the race of a face in morph attack susceptibility in applied settings.
{"title":"The Influence of the Other-Race Effect on Susceptibility to Face Morphing Attacks.","authors":"Snipta Mallick, Géraldine Jeckeln, Connor J Parde, Carlos D Castillo, Alice J O'Toole","doi":"10.1145/3618113","DOIUrl":"10.1145/3618113","url":null,"abstract":"<p><p>Facial morphs created between two identities resemble both of the faces used to create the morph. Consequently, humans and machines are prone to mistake morphs made from two identities for either of the faces used to create the morph. This vulnerability has been exploited in \"morph attacks\" in security scenarios. Here, we asked whether the \"other-race effect\" (ORE)-the human advantage for identifying own- vs. other-race faces-exacerbates morph attack susceptibility for humans. We also asked whether face-identification performance in a deep convolutional neural network (DCNN) is affected by the race of morphed faces. Caucasian (CA) and East-Asian (EA) participants performed a face-identity matching task on pairs of CA and EA face images in two conditions. In the morph condition, different-identity pairs consisted of an image of identity \"A\" and a 50/50 morph between images of identity \"A\" and \"B\". In the baseline condition, morphs of different identities never appeared. As expected, morphs were identified mistakenly more often than original face images. Of primary interest, morph identification was substantially worse for cross-race faces than for own-race faces. Similar to humans, the DCNN performed more accurately for original face images than for morphed image pairs. Notably, the deep network proved substantially more accurate than humans in both cases. The results point to the possibility that DCNNs might be useful for improving face identification accuracy when morphed faces are presented. They also indicate the significance of the race of a face in morph attack susceptibility in applied settings.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"1 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11315460/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42985574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Edge detection is an important process in human visual processing. However, as far as we know, few attempts have been made to map the temporal edge detection filters in human vision. To that end, we devised a user study and collected data from which we derived estimates of human temporal edge detection filters based on three different models, including the derivative of the infinite symmetric exponential function and the temporal contrast sensitivity function. We analyze our findings using several different methods, including extending the filter to higher frequencies than were shown during the experiment. In addition, as a proof of concept, we show that our filter may be used in spatiotemporal image quality metrics by incorporating it into a flicker detection pipeline.
{"title":"Estimates of Temporal Edge Detection Filters in Human Vision","authors":"Pontus Ebelin, Gyorgy Denes, Tomas Akenine-Möller, Kalle Åström, Magnus Oskarsson, William H. McIlhagga","doi":"10.1145/3639052","DOIUrl":"https://doi.org/10.1145/3639052","url":null,"abstract":"<p>Edge detection is an important process in human visual processing. However, as far as we know, few attempts have been made to map the <i>temporal</i> edge detection filters in human vision. To that end, we devised a user study and collected data from which we derived estimates of human temporal edge detection filters based on three different models, including the derivative of the infinite symmetric exponential function and temporal contrast sensitivity function. We analyze our findings using several different methods, including extending the filter to higher frequencies than were shown during the experiment. In addition, we show a proof of concept that our filter may be used in spatiotemporal image quality metrics by incorporating it into a flicker detection pipeline.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"25 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139062172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Introduction to the SAP 2023 Special Issue","authors":"Alexandre Chapiro, Andrew Robb","doi":"10.1145/3629977","DOIUrl":"https://doi.org/10.1145/3629977","url":null,"abstract":"<p>No abstract available.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"5 3","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138504118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We investigated the ability of two fingers to discriminate stiffness with the aid of stochastic resonance. Haptic perception at a fingertip is known to improve when vibrotactile noise propagates to that fingertip, a phenomenon called stochastic resonance, and the size of the improvement depends on the intensity of the noise reaching the fingertip. Improving the haptic sensation of multiple fingertips does not require attaching a separate noise source, such as a vibrator, to each fingertip; even a single vibrator can propagate noise to multiple fingers. In this study, we focus on stiffness discrimination as a multi-finger task in which the thumb and index finger touch an object and perceive its stiffness. We demonstrate that stiffness perception improves when sufficiently intense noise is propagated to the thumb and index finger using only a single vibrator. The findings indicate the possibility of improving the haptic sensation at multiple fingertips with one vibrator.
{"title":"Two-finger Stiffness Discrimination with the Stochastic Resonance Effect","authors":"Komi Chamnongthai, Takahiro Endo, Fumitoshi Matsuno","doi":"10.1145/3630254","DOIUrl":"https://doi.org/10.1145/3630254","url":null,"abstract":"We investigated the ability of two fingers to discriminate stiffness with stochastic resonance. It is known that the haptic perception at the fingertip improves when vibrotactile noise propagates to the fingertip, which is a phenomenon called the stochastic resonance. The improvement in the haptic sensation of a fingertip depends on the intensity of the noise propagating to the fingertip. An improvement in the haptic sensation of multiple fingertips does not require multiple noise sources, such as vibrators, to be attached to multiple fingertips; i.e., even a single vibrator can propagate noise to multiple fingers. In this study, we focus on stiffness discrimination as a task using multiple fingers, in which the thumb and index finger are used to touch an object and perceive its stiffness. Subsequently, we demonstrate that the stiffness perception is improved by propagating sufficiently intense noise to the thumb and index finger using only a single vibrator. The findings indicate the possibility of improving the haptic sensation at multiple fingertips using one vibrator.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"33 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136234256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The primary goals of this research are to strengthen the understanding of the mechanisms underlying presence and cybersickness in relation to body position and locomotion method when navigating a virtual environment (VE). In this regard, we compared two body positions (standing and sitting) and four locomotion methods (steering + embodied control [EC], steering + instrumental control [IC], teleportation + EC, and teleportation + IC) to examine the association between body position, locomotion method, presence, and cybersickness in VR. The results of a two-way ANOVA revealed a main effect of locomotion method on presence, with the sense of presence significantly lower in the steering + IC condition. However, there was no main effect of body position on presence, nor an interaction between body position and locomotion method. For cybersickness, nonparametric tests were used due to non-normality. Mann-Whitney U tests indicated a statistically significant effect of body position on cybersickness: the level of cybersickness was significantly higher for a standing position than for a sitting position. In addition, Kruskal-Wallis tests revealed that locomotion method had a meaningful effect on cybersickness, with participants in the steering conditions feeling stronger symptoms of cybersickness than those in the teleportation conditions. Overall, this study confirmed the relationship between body position, locomotion method, presence, and cybersickness when navigating a VE.
{"title":"Exploring the Relative Effects of Body Position and Locomotion Method on Presence and Cybersickness when Navigating a Virtual Environment","authors":"Aelee Kim, Jeong-Eun Lee, Kyoung-Min Lee","doi":"10.1145/3627706","DOIUrl":"https://doi.org/10.1145/3627706","url":null,"abstract":"The primary goals of this research are to strengthen the understanding of the mechanisms underlying presence and cybersickness in relation to the body position and locomotion method when navigating a virtual environment (VE). In this regard, we compared two body positions (standing and sitting) and four locomotion methods (steering + embodied control [EC], steering + instrumental control [IC], teleportation + EC, and teleportation + IC) to examine the association between body position, locomotion method, presence, and cybersickness in VR. The results of a two-way ANOVA revealed a main effect for locomotion method on presence, with the sense of presence significantly lower for the steering + IC condition. However, there was no main effect for body position on presence, nor was there an interaction between body position and locomotion method. For cybersickness, nonparametric tests were used due to non-normality. The results of Mann-Whitney U tests indicated a statistically significant effect of body position on cybersickness. In particular, the level of cybersickness was significantly higher for a standing position than for a sitting position. In addition, the results of Kruskal-Wallis tests revealed that the locomotion method had a meaningful effect on cybersickness, with participants in the steering conditions feeling stronger symptoms of cybersickness than those in the teleportation conditions. Overall, this study confirmed the relationship between body position, locomotion method, presence, and cybersickness when navigating a VE.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136079077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}