Practical Saccade Prediction for Head-Mounted Displays: Towards a Comprehensive Model
Pub Date: 2023-01-11 | DOI: https://dl.acm.org/doi/10.1145/3568311
Elena Arabadzhiyska, Cara Tursun, Hans-Peter Seidel, Piotr Didyk
Eye-tracking technology has started to become an integral component of new display devices such as virtual and augmented reality headsets. Applications of gaze information range from new interaction techniques that exploit eye patterns to gaze-contingent digital content creation. However, system latency is still a significant issue in many of these applications because it breaks the synchronization between the current and measured gaze positions. Consequently, it may lead to unwanted visual artifacts and degradation of the user experience. In this work, we focus on foveated rendering applications, where image quality is reduced towards the periphery for computational savings. In foveated rendering, system latency delays updates to the rendered frame, making the quality degradation visible to the user. To address this issue and combat system latency, recent work proposes using saccade landing position prediction to extrapolate gaze information from delayed eye-tracking samples. Although the benefits of such a strategy have already been demonstrated, existing solutions range from simple and efficient ones, which make several assumptions about saccadic eye movements, to more complex and costly ones, which use machine learning techniques. However, it is unclear to what extent prediction can benefit from accounting for additional factors, and how more complex predictions can be performed efficiently enough to respect latency requirements. This paper presents a series of experiments investigating the importance of different factors for saccade prediction in common virtual and augmented reality applications. In particular, we investigate the effects of saccade orientation in 3D space and smooth pursuit eye motion (SPEM), and how their influence compares to the variability across users. We also present a simple yet efficient post-hoc correction method that adapts existing saccade prediction methods to handle these factors without extensive data collection. Furthermore, our investigation and the correction technique may also help future development of machine-learning-based techniques by limiting the amount of training data required.
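As a rough illustration of the idea described above (not the authors' model), the sketch below fits a sigmoid displacement profile to the first few delayed samples of a saccade and reads the predicted landing amplitude off the fitted asymptote. The `logistic` profile, initial guesses, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, amplitude, t0, tau):
    """Sigmoid displacement profile, a common simple model of saccade trajectories."""
    return amplitude / (1.0 + np.exp(-(t - t0) / tau))

def predict_landing(ts, displacements):
    """Extrapolate the saccade landing displacement from early, delayed samples.

    ts: sample timestamps (s) since saccade onset
    displacements: gaze displacement magnitudes (deg) at those timestamps
    Returns the predicted total saccade amplitude (deg), i.e., the fitted asymptote.
    """
    # Initial guesses: amplitude somewhat above the last observed sample,
    # midpoint near the mean sample time, time constant of a few ms.
    p0 = (displacements[-1] * 1.5, float(np.mean(ts)), 0.01)
    (amplitude, t0, tau), _ = curve_fit(logistic, ts, displacements, p0=p0, maxfev=5000)
    return amplitude

# Example: noisy samples from the first 30 ms of a ~12-degree saccade.
rng = np.random.default_rng(0)
ts = np.linspace(0.0, 0.03, 12)
obs = logistic(ts, 12.0, 0.018, 0.005) + rng.normal(0.0, 0.05, ts.size)
print(predict_landing(ts, obs))  # close to 12 degrees
```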
{"title":"Practical Saccade Prediction for Head-Mounted Displays: Towards a Comprehensive Model","authors":"Elena Arabadzhiyska, Cara Tursun, Hans-Peter Seidel, Piotr Didyk","doi":"https://dl.acm.org/doi/10.1145/3568311","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3568311","url":null,"abstract":"<p>Eye-tracking technology has started to become an integral component of new display devices such as virtual and augmented reality headsets. Applications of gaze information range from new interaction techniques that exploit eye patterns to gaze-contingent digital content creation. However, system latency is still a significant issue in many of these applications because it breaks the synchronization between the current and measured gaze positions. Consequently, it may lead to unwanted visual artifacts and degradation of the user experience. In this work, we focus on foveated rendering applications where the quality of an image is reduced towards the periphery for computational savings. In foveated rendering, the presence of system latency leads to delayed updates to the rendered frame, making the quality degradation visible to the user. To address this issue and to combat system latency, recent work proposes using saccade landing position prediction to extrapolate gaze information from delayed eye tracking samples. Although the benefits of such a strategy have already been demonstrated, the solutions range from simple and efficient ones, which make several assumptions about the saccadic eye movements, to more complex and costly ones, which use machine learning techniques. However, it is unclear to what extent the prediction can benefit from accounting for additional factors and how more complex predictions can be performed efficiently to respect the latency requirements. This paper presents a series of experiments investigating the importance of different factors for saccades prediction in common virtual and augmented reality applications. In particular, we investigate the effects of saccade orientation in 3D space and <b>smooth pursuit eye-motion (SPEM)</b> and how their influence compares to the variability across users. We also present a simple, yet efficient post-hoc correction method that adapts existing saccade prediction methods to handle these factors without performing extensive data collection. Furthermore, our investigation and the correction technique may also help future developments of machine-learning-based techniques by limiting the required amount of training data.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"50 4","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138504110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Dataflow Modeling of Peripheral Encoding in the Human Visual System
Pub Date: 2023-01-11 | DOI: https://dl.acm.org/doi/10.1145/3564605
Rachel Brown, Vasha Dutell, Bruce Walter, Ruth Rosenholtz, Peter Shirley, Morgan McGuire, David Luebke
Computer graphics seeks to deliver compelling images, generated within a computing budget, targeted at a specific display device, and ultimately viewed by an individual user. The foveated nature of human vision offers an opportunity to efficiently allocate computation and compression to appropriate areas of the viewer’s visual field, of particular importance with the rise of high-resolution and wide field-of-view display devices. However, while variations in acuity and contrast sensitivity across the field of view have been well-studied and modeled, a more consequential variation concerns peripheral vision’s degradation in the face of clutter, known as crowding. Understanding of peripheral crowding has greatly advanced in recent years, in terms of both phenomenology and modeling. Accurately leveraging this knowledge is critical for many applications, as peripheral vision covers a majority of pixels in the image. We advance computational models for peripheral vision aimed toward their eventual use in computer graphics. In particular, researchers have recently developed high-performing models of peripheral crowding, known as “pooling” models, which predict a wide range of phenomena but are computationally inefficient. We reformulate the problem as a dataflow computation, which enables faster processing and operating on larger images. Further, we account for the explicit encoding of “end stopped” features in the image, which was missing from previous methods. We evaluate our model in the context of perception of textures in the periphery, including a novel texture dataset and updated textural descriptors. Our improved computational framework may simplify development and testing of more sophisticated, complete models in more robust and realistic settings relevant to computer graphics.
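As a loose illustration of eccentricity-dependent pooling (not the paper's dataflow model, which pools rich texture statistics rather than pixel means), the sketch below averages each pixel over a neighborhood that grows linearly with eccentricity, following the roughly 0.5x-eccentricity scaling associated with Bouma's law of crowding; all parameter values are assumptions.

```python
import numpy as np

def pooling_radius_deg(ecc_deg, bouma=0.5):
    # Bouma's law: the crowding zone grows roughly linearly with eccentricity.
    return bouma * ecc_deg

def pooled_image(img, fovea_xy, px_per_deg, bouma=0.5):
    """Crude stand-in for a pooling model: mean-pool each pixel of a grayscale
    image over a window whose radius grows with distance from the fovea."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(xs - fovea_xy[0], ys - fovea_xy[1]) / px_per_deg
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            r = max(1, int(pooling_radius_deg(ecc[y, x]) * px_per_deg))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out
```

The nested loops make this O(pixels x window), which is exactly the kind of inefficiency a dataflow reformulation (e.g., precomputed pooling pyramids) is meant to remove.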
{"title":"Efficient Dataflow Modeling of Peripheral Encoding in the Human Visual System","authors":"Rachel Brown, Vasha Dutell, Bruce Walter, Ruth Rosenholtz, Peter Shirley, Morgan McGuire, David Luebke","doi":"https://dl.acm.org/doi/10.1145/3564605","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3564605","url":null,"abstract":"<p>Computer graphics seeks to deliver compelling images, generated within a computing budget, targeted at a specific display device, and ultimately viewed by an individual user. The foveated nature of human vision offers an opportunity to efficiently allocate computation and compression to appropriate areas of the viewer’s visual field, of particular importance with the rise of high-resolution and wide field-of-view display devices. However, while variations in acuity and contrast sensitivity across the field of view have been well-studied and modeled, a more consequential variation concerns peripheral vision’s degradation in the face of clutter, known as crowding. Understanding of peripheral crowding has greatly advanced in recent years, in terms of both phenomenology and modeling. Accurately leveraging this knowledge is critical for many applications, as peripheral vision covers a majority of pixels in the image. We advance computational models for peripheral vision aimed toward their eventual use in computer graphics. In particular, researchers have recently developed high-performing models of peripheral crowding, known as “pooling” models, which predict a wide range of phenomena but are computationally inefficient. We reformulate the problem as a dataflow computation, which enables faster processing and operating on larger images. Further, we account for the explicit encoding of “end stopped” features in the image, which was missing from previous methods. We evaluate our model in the context of perception of textures in the periphery, including a novel texture dataset and updated textural descriptors. Our improved computational framework may simplify development and testing of more sophisticated, complete models in more robust and realistic settings relevant to computer graphics.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"1 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138517349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experience Matters: Longitudinal Changes in Sensitivity to Rotational Gains in Virtual Reality
Pub Date: 2022-11-11 | DOI: https://dl.acm.org/doi/10.1145/3560818
Andrew Robb, Kristopher Kohm, John Porter
Redirected walking techniques use rotational gains to guide users away from physical obstacles as they walk in a virtual world, effectively creating the illusion of a larger virtual space than is physically present. Designers often want to keep users unaware of this manipulation, which is made possible by limitations in human perception that render rotational gains imperceptible below a certain threshold. Many aspects of these thresholds have been studied; however, no research has yet considered whether these thresholds may change over time as users gain more experience with them. To study this, we recruited 20 novice VR users (no more than 1 hour of prior experience with an HMD) and provided them with an Oculus Quest to use for 4 weeks on their own time. They were tasked to complete an activity assessing their sensitivity to rotational gain once each week, in addition to whatever other activities they wanted to perform. No feedback was provided to participants about their performance during each activity, minimizing the possibility that learning effects could account for any observed changes over time. We observed that participants became significantly more sensitive to rotational gains over time, underscoring the importance of considering prior user experience in applications involving rotational gain, as well as how prior experience may affect other, broader applications of VR.
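For readers unfamiliar with rotational gains, the minimal sketch below shows the per-frame mechanic: each frame, the change in physical head yaw is scaled by a gain before being applied to the virtual camera, so the virtual world rotates slightly faster or slower than the head. The class name and default gain are illustrative, not taken from the study.

```python
class RotationalGainRedirector:
    """Per-frame application of a rotational gain (a minimal sketch).

    gain > 1 amplifies rotation (the user turns less in the real room to
    achieve a given virtual turn); gain < 1 attenuates it. Gains close to
    1 are intended to stay below users' detection thresholds.
    """

    def __init__(self, gain=1.1):
        self.gain = gain
        self.prev_physical_yaw = None
        self.virtual_yaw = 0.0

    def update(self, physical_yaw_deg):
        """Feed the tracked head yaw each frame; returns the virtual camera yaw."""
        if self.prev_physical_yaw is not None:
            delta = physical_yaw_deg - self.prev_physical_yaw
            self.virtual_yaw += delta * self.gain
        self.prev_physical_yaw = physical_yaw_deg
        return self.virtual_yaw
```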
{"title":"Experience Matters: Longitudinal Changes in Sensitivity to Rotational Gains in Virtual Reality","authors":"Andrew Robb, Kristopher Kohm, John Porter","doi":"https://dl.acm.org/doi/10.1145/3560818","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3560818","url":null,"abstract":"<p>Redirected walking techniques use rotational gains to guide users away from physical obstacles as they walk in a virtual world, effectively creating the illusion of a larger virtual space than is physically present. Designers often want to keep users unaware of this manipulation, which is made possible by limitations in human perception that render rotational gains imperceptible below a certain threshold. Many aspects of these thresholds have been studied; however, no research has yet considered whether these thresholds may change over time as users gain more experience with them. To study this, we recruited 20 novice VR users (no more than 1 hour of prior experience with an HMD) and provided them with an Oculus Quest to use for 4 weeks on their own time. They were tasked to complete an activity assessing their sensitivity to rotational gain once each week, in addition to whatever other activities they wanted to perform. No feedback was provided to participants about their performance during each activity, minimizing the possibility of learning effects accounting for any observed changes over time. We observed that participants became significantly more sensitive to rotation gains over time, underscoring the importance of considering prior user experience in applications involving rotational gain, as well as how prior user experience may affect other, broader applications of VR.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"60 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138517348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perceptual Guidelines for Optimizing Field of View in Stereoscopic Augmented Reality Displays
Pub Date: 2022-11-11 | DOI: https://dl.acm.org/doi/10.1145/3554921
Minqi Wang, Emily A. Cooper
Near-eye display systems for augmented reality (AR) aim to seamlessly merge virtual content with the user's view of the real world. A substantial limitation of current systems is that they present virtual content over only a limited portion of the user's natural field of view (FOV). This limitation reduces the immersion and utility of these systems. Thus, it is essential to quantify FOV coverage in AR systems and understand how to maximize it. It is straightforward to determine the FOV coverage for monocular AR systems based on the system architecture. However, stereoscopic AR systems that present 3D virtual content create a more complicated scenario because the two eyes' views do not always completely overlap. The introduction of partial binocular overlap in stereoscopic systems can potentially expand the perceived horizontal FOV coverage, but it can also introduce perceptual nonuniformity artifacts. In this article, we first review the principles of binocular FOV overlap for natural vision and for stereoscopic display systems. We report the results of a set of perceptual studies that examine how different amounts and types of horizontal binocular overlap in stereoscopic AR systems influence the perception of nonuniformity across the FOV. We then describe how to quantify the horizontal FOV in stereoscopic AR when taking 3D content into account. We show that all stereoscopic AR systems result in variable horizontal FOV coverage and variable amounts of binocular overlap depending on fixation distance. Taken together, these results provide a framework for optimizing perceived FOV coverage and minimizing perceptual artifacts in stereoscopic AR systems for different use cases.
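A minimal sketch of the kind of geometry involved: assuming symmetric per-eye FOVs with parallel optical axes (an illustrative simplification; real headsets may cant the displays), the total horizontal FOV and the binocularly overlapping portion can be computed at a given content distance, showing that overlap shrinks as content moves closer. This is not the article's full model.

```python
import math

def stereo_fov_coverage(mono_fov_deg, ipd_m, content_dist_m):
    """Total horizontal FOV and binocular overlap at a content plane.

    Eyes sit at x = +/- ipd/2, each covering +/- mono_fov/2 about a
    straight-ahead axis. Angles are measured from a cyclopean viewpoint.
    Returns (total_deg, overlap_deg).
    """
    half_tan = math.tan(math.radians(mono_fov_deg / 2.0))
    d, i = content_dist_m, ipd_m
    # Union of the two eyes' footprints on the content plane.
    total = 2.0 * math.degrees(math.atan((d * half_tan + i / 2.0) / d))
    # Intersection (region visible to both eyes) on the content plane.
    overlap_halfwidth = d * half_tan - i / 2.0
    overlap = 2.0 * math.degrees(math.atan(max(0.0, overlap_halfwidth) / d))
    return total, overlap

# With 90-degree per-eye FOVs and a 63 mm IPD, overlap grows with distance:
print(stereo_fov_coverage(90.0, 0.063, 0.5))  # ~ (93.5, 86.3) degrees
print(stereo_fov_coverage(90.0, 0.063, 2.0))  # ~ (90.9, 89.1) degrees
```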
{"title":"Perceptual Guidelines for Optimizing Field of View in Stereoscopic Augmented Reality Displays","authors":"Minqi Wang, Emily A. Cooper","doi":"https://dl.acm.org/doi/10.1145/3554921","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3554921","url":null,"abstract":"<p>Near-eye display systems for augmented reality (AR) aim to seamlessly merge virtual content with the user’s view of the real-world. A substantial limitation of current systems is that they only present virtual content over a limited portion of the user’s natural field of view (FOV). This limitation reduces the immersion and utility of these systems. Thus, it is essential to quantify FOV coverage in AR systems and understand how to maximize it. It is straightforward to determine the FOV coverage for monocular AR systems based on the system architecture. However, stereoscopic AR systems that present 3D virtual content create a more complicated scenario because the two eyes’ views do not always completely overlap. The introduction of partial binocular overlap in stereoscopic systems can potentially expand the perceived horizontal FOV coverage, but it can also introduce perceptual nonuniformity artifacts. In this arrticle, we first review the principles of binocular FOV overlap for natural vision and for stereoscopic display systems. We report the results of a set of perceptual studies that examine how different amounts and types of horizontal binocular overlap in stereoscopic AR systems influence the perception of nonuniformity across the FOV. We then describe how to quantify the horizontal FOV in stereoscopic AR when taking 3D content into account. We show that all stereoscopic AR systems result in a variable horizontal FOV coverage and variable amounts of binocular overlap depending on fixation distance. Taken together, these results provide a framework for optimizing perceived FOV coverage and minimizing perceptual artifacts in stereoscopic AR systems for different use cases.</p><p></p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"51 4","pages":""},"PeriodicalIF":1.6,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138504106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tactile Texture Display Combining Vibrotactile and Electrostatic-friction Stimuli: Substantial Effects on Realism and Moderate Effects on Behavioral Responses
Pub Date: 2022-11-07 | DOI: https://dl.acm.org/doi/10.1145/3539733
Kazuya Otake, Shogo Okamoto, Yasuhiro Akiyama, Yoji Yamada
There is increasing demand for tactile feedback functions for touch panels. We investigated whether virtual roughness texture quality can be improved through simultaneous use of vibrotactile and electrostatic-friction stimuli. This conjunctive use is expected to improve the perceptual quality of texture stimuli, because vibrotactile and electrostatic-friction stimuli have complementary characteristics. Our previous studies confirmed that these conjunct stimuli yield enhanced realism for simple grating roughness. In this study, we conducted experiments using simple and complex sinusoidal surface profiles consisting of one or two spatial wave components. Three different evaluation criteria were employed. The first criterion concerned the subjective realism, i.e., similarity with actual roughness textures, of virtual roughness textures. Participants compared the following three stimulus conditions: vibrotactile stimuli only, electrostatic-friction stimuli only, and their conjunct stimuli. The conjunct stimuli yielded the greatest realism. The second criterion concerned roughness texture identification under each of the three stimulus conditions for five different roughness textures. The highest identification accuracy rate was achieved under the conjunct stimulus condition; however, the performance difference was marginal. The third criterion concerned the discrimination threshold of the grating-scale spatial wavelength. There were no marked differences among the results for the three conditions. The findings of this study will improve virtual texture quality for touch-panel-type surface tactile displays.
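A minimal sketch of how the one- and two-component sinusoidal surface profiles described above might be synthesized; the function and its parameters are illustrative assumptions, and mapping the profile to vibrotactile and electrostatic-friction actuator commands is device-specific.

```python
import numpy as np

def texture_profile(position_mm, wavelengths_mm, amplitudes):
    """Virtual roughness profile as a sum of sinusoidal spatial components.

    With one wavelength this is a simple grating; with two it is a complex
    profile of the kind used in the study. Sampling the profile at the
    finger's current position yields the instantaneous stimulus level.
    """
    x = np.asarray(position_mm, dtype=float)
    return sum(a * np.sin(2.0 * np.pi * x / w)
               for a, w in zip(amplitudes, wavelengths_mm))

# Simple grating (one component) vs. complex profile (two components):
xs = np.linspace(0.0, 10.0, 1000)           # finger positions in mm
simple = texture_profile(xs, [2.0], [1.0])
complex_ = texture_profile(xs, [2.0, 0.5], [1.0, 0.4])
```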
{"title":"Tactile Texture Display Combining Vibrotactile and Electrostatic-friction Stimuli: Substantial Effects on Realism and Moderate Effects on Behavioral Responses","authors":"Kazuya Otake, Shogo Okamoto, Yasuhiro Akiyama, Yoji Yamada","doi":"https://dl.acm.org/doi/10.1145/3539733","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3539733","url":null,"abstract":"<p>There is increasing demand for tactile feedback functions for touch panels. We investigated whether virtual roughness texture quality can be improved through simultaneous use of vibrotactile and electrostatic-friction stimuli. This conjunctive use is expected to improve the perceptual quality of texture stimuli, because vibrotactile and electrostatic-friction stimuli have complementary characteristics. Our previous studies confirmed that these conjunct stimuli yield enhanced realism for simple grating roughness. In this study, we conducted experiments using simple and complex sinusoidal surface profiles consisting of one or two spatial wave components. Three different evaluation criteria were employed. The first criterion concerned the subjective realism, i.e., similarity with actual roughness textures, of virtual roughness textures. Participants compared the following three stimulus conditions: vibrotactile stimuli only, electrostatic-friction stimuli only, and their conjunct stimuli. The conjunct stimuli yielded the greatest realism. The second criterion concerned roughness texture identification under each of the three stimulus conditions for five different roughness textures. The highest identification accuracy rate was achieved under the conjunct stimulus condition; however, the performance difference was marginal. The third criterion concerned the discrimination threshold of the grating-scale spatial wavelength. There were no marked differences among the results for the three conditions. The findings of this study will improve virtual texture quality for touch-panel-type surface tactile displays.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"52 3","pages":""},"PeriodicalIF":1.6,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138504102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating a Combination of Input Modalities, Canvas Geometries, and Inking Triggers on On-Air Handwriting in Virtual Reality
Pub Date: 2022-11-07 | DOI: https://dl.acm.org/doi/10.1145/3560817
Roshan Venkatakrishnan, Rohith Venkatakrishnan, Chih-Han Chung, Yu-Shuen Wang, Sabarish Babu
Humans communicate by writing, often taking notes that assist thinking. With the growing popularity of collaborative virtual reality (VR) applications, it is imperative that we better understand the aspects that affect writing in these virtual experiences. On-air writing in VR is a popular writing paradigm because it is simple to implement and requires no specialized hardware. A host of factors can affect the efficacy of this paradigm, and in this work we investigated their combined effects on users' on-air writing performance, aiming to understand the circumstances under which users can write in VR both effectively and efficiently. We studied the effects of the following factors: (1) input modality: brush vs. near-field raycast vs. pointing gesture; (2) inking trigger method: haptic feedback vs. button-based trigger; and (3) canvas geometry: plane vs. hemisphere. To evaluate writing performance, we conducted an empirical evaluation with thirty participants, requiring them to write the words we indicated under different combinations of these factors. Dependent measures including writing speed, accuracy rates, and perceived workload were analyzed. Results revealed that the brush-based input modality produced the best writing performance, that haptic feedback was not always more effective than button-based triggering, and that there are trade-offs associated with the different canvas geometries. This work lays a foundation for future investigations that seek to understand and further improve the on-air writing experience in immersive virtual environments.
{"title":"Investigating a Combination of Input Modalities, Canvas Geometries, and Inking Triggers on On-Air Handwriting in Virtual Reality","authors":"Roshan Venkatakrishnan, Rohith Venkatakrishnan, Chih-Han Chung, Yu-Shuen Wang, Sabarish Babu","doi":"https://dl.acm.org/doi/10.1145/3560817","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3560817","url":null,"abstract":"<p>Humans communicate by writing, often taking notes that assist thinking. With the growing popularity of collaborative <b>Virtual Reality (VR)</b> applications, it is imperative that we better understand aspects that affect writing in these virtual experiences. On-air writing in VR is a popular writing paradigm due to its simplicity in implementation without any explicit needs for specialized hardware. A host of factors can affect the efficacy of this writing paradigm and in this work, we delved into investigating the same. Along these lines, we investigated the effects of a combination of factors on users’ on-air writing performance, aiming to understand the circumstances under which users can both effectively and efficiently write in VR. We were interested in studying the effects of the following factors: (1) input modality: brush vs. near-field raycast vs. pointing gesture, (2) inking trigger method: haptic feedback vs. button based trigger, and (3) canvas geometry: plane vs. hemisphere. To evaluate the writing performance, we conducted an empirical evaluation with thirty participants, requiring them to write the words we indicated under different combinations of these factors. Dependent measures including the writing speed, accuracy rates, perceived workloads, and so on, were analyzed. Results revealed that the brush based input modality produced the best results in writing performance, that haptic feedback was not always effective over button based triggering, and that there are trade-offs associated with the different types of canvas geometries used. This work attempts at laying a foundation for future investigations that seek to understand and further improve the on-air writing experience in immersive virtual environments.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"15 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138517355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensitivity to Hand Offsets and Related Behavior in Virtual Environments over Time
Pub Date: 2022-11-07 | DOI: https://dl.acm.org/doi/10.1145/3561055
Kristopher Kohm, John Porter, Andrew Robb
This work explored how users’ sensitivity to offsets in their avatars’ virtual hands changes as they gain exposure to virtual reality. We conducted an experiment using a two-alternative forced choice (2-AFC) design over the course of 4 weeks, split into four sessions. The trials in each session had a variety of eight offset distances paired with eight offset directions (across a two-dimensional plane). While we did not find evidence that users became more sensitive to the offsets over time, we did find evidence of behavioral changes. Specifically, participants’ head–hand coordination and completion time varied significantly as the sessions went on. We discuss the implications of both results and how they could influence our understanding of long-term calibration for perception-action coordination in virtual environments.
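As an illustration of the study design described above (not the authors' exact procedure), the sketch below enumerates 2-AFC conditions crossing eight offset distances with eight directions spaced evenly around a 2D plane, and converts each condition to a Cartesian hand offset; the distance values are hypothetical.

```python
import itertools
import math
import random

def make_offset_trials(distances_cm, n_directions=8, repeats=1):
    """Enumerate 2-AFC trial conditions: offset distances x offset directions.

    Each condition would pair an offset virtual hand with a veridical one,
    asking the participant which rendering matched their real hand.
    Returns a shuffled list of (dx, dy) offsets in cm.
    """
    directions = [2.0 * math.pi * k / n_directions for k in range(n_directions)]
    conditions = list(itertools.product(distances_cm, directions)) * repeats
    random.shuffle(conditions)
    return [(d * math.cos(a), d * math.sin(a)) for d, a in conditions]

# Hypothetical distances: 8 magnitudes x 8 directions = 64 trial conditions.
trials = make_offset_trials([0.5, 1, 2, 3, 4, 6, 8, 12])
print(len(trials), trials[0])
```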
{"title":"Sensitivity to Hand Offsets and Related Behavior in Virtual Environments over Time","authors":"Kristopher Kohm, John Porter, Andrew Robb","doi":"https://dl.acm.org/doi/10.1145/3561055","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3561055","url":null,"abstract":"<p>This work explored how users’ sensitivity to offsets in their avatars’ virtual hands changes as they gain exposure to virtual reality. We conducted an experiment using a two-alternative forced choice (2-AFC) design over the course of 4 weeks, split into four sessions. The trials in each session had a variety of eight offset distances paired with eight offset directions (across a two-dimensional plane). While we did not find evidence that users became more sensitive to the offsets over time, we did find evidence of behavioral changes. Specifically, participants’ head–hand coordination and completion time varied significantly as the sessions went on. We discuss the implications of both results and how they could influence our understanding of long-term calibration for perception-action coordination in virtual environments.</p>","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"52 2","pages":""},"PeriodicalIF":1.6,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138504104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Introduction to the Special Issue on SAP 2022
Pub Date: 2022-11-07 | DOI: https://dl.acm.org/doi/10.1145/3563136
Ana Serrano, Michael Barnett-Cowan
No abstract available.