Artistic Style Meets Artificial Intelligence
Pub Date: 2021-01-01 | DOI: 10.2352/j.percept.imaging.2021.4.3.030501
Suk Kyoung Choi, S. DiPaola, Hannu Töyrylä
Recent developments in neural network image processing motivate the question of how these technologies might better serve visual artists. Research to date has largely either focused on pastiche interpretations of what is framed as artistic “style” or sought to divulge heretofore unimaginable dimensions of algorithmic “latent space,” but has failed to address the process an artist might actually pursue when engaged in the reflective act of developing an image from imagination and lived experience. The tools, in other words, are constituted in research demonstrations rather than as tools of creative expression. In this article, the authors explore the phenomenology of the creative environment afforded by artificially intelligent image transformation and generation, drawing on autoethnographic reviews of the authors’ individual approaches to artificial intelligence (AI) art. They offer a post-phenomenology of “neural media” so that visual artists may begin to work with AI technologies in ways that support naturalistic processes of thinking about and interacting with computationally mediated interactive creation.
{"title":"Artistic Style Meets Artificial Intelligence","authors":"Suk Kyoung Choi, S. DiPaola, Hannu Töyrylä","doi":"10.2352/j.percept.imaging.2021.4.3.030501","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2021.4.3.030501","url":null,"abstract":"Recent developments in neural network image processing motivate the question, how these technologies might better serve visual artists. Research goals to date have largely focused on either pastiche interpretations of what is framed as artistic “style” or seek to divulge heretofore unimaginable dimensions of algorithmic “latent space,” but have failed to address the process an artist might actually pursue, when engaged in the reflective act of developing an image from imagination and lived experience. The tools, in other words, are constituted in research demonstrations rather than as tools of creative expression. In this article, the authors explore the phenomenology of the creative environment afforded by artificially intelligent image transformation and generation, drawn from autoethnographic reviews of the authors’ individual approaches to artificial intelligence (AI) art. They offer a post-phenomenology of “neural media” such that visual artists may begin to work with AI technologies in ways that support naturalistic processes of thinking about and interacting with computationally mediated interactive creation.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"284 1","pages":"20501-1"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79459101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Study of Bomb Technician Threat Identification Performance on Degraded X-ray Images
Pub Date: 2021-01-01 | DOI: 10.2352/J.PERCEPT.IMAGING.2021.4.1.010502
J. Glover, Praful Gupta, N. Paulter, A. Bovik
Abstract Portable X-ray imaging systems are routinely used by bomb squads throughout the world to image the contents of suspicious packages and explosive devices. The images are used by bomb technicians to determine whether packages contain explosive devices or device components. In the event of a positive detection, the images are also used to understand device design and to devise countermeasures. The quality of the images is considered to be of primary importance by users and manufacturers of these systems, since it affects the ability of the users to analyze the images and to detect potential threats. As such, there exist national standards that set minimum acceptable image-quality levels for the performance of these imaging systems. An implicit assumption is that better image quality leads to better user identification of components in explosive devices and, therefore, better informed plans to render them safe. However, there is no previously published experimental work investigating this. Toward advancing progress in this direction, the authors developed the new NIST-LIVE X-ray improvised explosive device (IED) image-quality database. The database consists of: a set of pristine X-ray images of IEDs and benign objects; a larger set of distorted images of varying quality of the same objects; ground-truth IED component labels for all images; and human task-performance results locating and identifying the IED components. More than 40 trained U.S. bomb technicians were recruited to generate the human task-performance data. The authors use the database to show that identification probabilities for IED components are strongly correlated with image quality. They also show how the results relate to the image-quality metrics described in the current U.S. national standard for these systems, and how their results can be used to inform the development of baseline performance requirements. They expect these results to directly affect future revisions of the standard.
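The core analysis, correlating image quality with identification performance, can be sketched with a rank correlation. A minimal illustration in Python (the quality scores and identification probabilities below are invented placeholders, not values from the NIST-LIVE database):

```python
# Illustrative sketch: rank-correlating image-quality scores with component
# identification probability. All values are made up for demonstration; the
# NIST-LIVE data are not reproduced here.
from scipy.stats import spearmanr

# Hypothetical per-image quality scores (higher = better quality).
quality_scores = [0.91, 0.74, 0.55, 0.38, 0.22]

# Hypothetical fraction of technicians who correctly identified the
# IED components in each image.
identification_prob = [0.95, 0.88, 0.71, 0.52, 0.30]

rho, p_value = spearmanr(quality_scores, identification_prob)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```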
{"title":"Study of Bomb Technician Threat Identification Performance on Degraded X-ray Images","authors":"J. Glover, Praful Gupta, N. Paulter, A. Bovik","doi":"10.2352/J.PERCEPT.IMAGING.2021.4.1.010502","DOIUrl":"https://doi.org/10.2352/J.PERCEPT.IMAGING.2021.4.1.010502","url":null,"abstract":"Abstract Portable X-ray imaging systems are routinely used by bomb squads throughout the world to image the contents of suspicious packages and explosive devices. The images are used by bomb technicians to determine whether or not packages contain explosive devices or device components. In events of positive detection, the images are also used to understand device design and to devise countermeasures. The quality of the images is considered to be of primary importance by users and manufacturers of these systems, since it affects the ability of the users to analyze the images and to detect potential threats. As such, there exist national standards that set minimum acceptable image-quality levels for the performance of these imaging systems. An implicit assumption is that better image quality leads to better user identification of components in explosive devices and, therefore, better informed plans to render them safe. However, there is no previously published experimental work investigating this.Toward advancing progress in this direction, the authors developed the new NIST-LIVE X-ray improvised explosive device (IED) image-quality database. The database consists of: a set of pristine X-ray images of IEDs and benign objects; a larger set of distorted images of varying quality of the same objects; ground-truth IED component labels for all images; and human task-performance results locating and identifying the IED components. More than 40 trained U.S. bomb technicians were recruited to generate the human task-performance data. They use the database to show that identification probabilities for IED components are strongly correlated with image quality. They also show how the results relate to the image-quality metrics described in the current U.S. national standard for these systems, and how their results can be used to inform the development of baseline performance requirements. They expect these results to directly affect future revisions of the standard.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"231 1","pages":"10502-1"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84060731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Psychophysical Study of Human Visual Perception of Flicker Artifacts in Automotive Digital Mirror Replacement Systems
Pub Date: 2021-01-01 | DOI: 10.2352/J.PERCEPT.IMAGING.2021.4.1.010401
Nicolai Behmann, Sousa Weddige, H. Blume
Abstract Aliasing effects due to time-discrete capturing of amplitude-modulated light with a digital image sensor are perceived as flicker by humans. These artifacts are especially annoying, and can pose a risk, when observed in digital mirror replacement systems. Therefore, ISO 16505 requires flicker-free reproduction for 90% of people in these systems. Various psychophysical studies have investigated the influence of large-area flickering of displays, environmental light, or flickering in television applications on perception and concentration. However, no detailed knowledge of subjective annoyance/irritation due to flicker from camera-monitor systems used as mirror replacements in vehicles exists so far, even though the number of these systems is constantly increasing. This psychophysical study used a novel data set drawn from real-world driving scenes and synthetic simulation with synthetic flicker. More than 25 test persons were asked to quantify the subjective annoyance level of different flicker frequencies, amplitudes, mean values, sizes, and positions. The results show that for digital mirror replacement systems, human subjective annoyance due to flicker is greatest in the 15 Hz range and grows with flicker amplitude and magnitude. Additionally, sensitivity to flicker artifacts increases with the duration of observation.
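The aliasing mechanism behind such flicker can be illustrated numerically: sampling amplitude-modulated light at the camera frame rate folds the modulation down to a low-frequency beat. A minimal sketch, with illustrative frequencies only (not values from the study):

```python
# Illustrative sketch of temporal aliasing: a light modulated at f_led is
# sampled by a camera at f_cam; the perceived flicker appears at the folded
# (alias) frequency. Example values are hypothetical.
import numpy as np

f_led = 250.0   # LED modulation frequency in Hz (assumed for illustration)
f_cam = 60.0    # camera frame rate in Hz

# Alias frequency after sampling: distance to the nearest multiple of f_cam.
k = round(f_led / f_cam)
f_alias = abs(f_led - k * f_cam)
print(f"Perceived flicker at ~{f_alias:.0f} Hz")  # 250 - 4*60 = 10 Hz

# The same effect, observed by sampling the waveform frame by frame:
t = np.arange(0, 1, 1 / f_cam)                    # one second of frames
samples = 0.5 + 0.5 * np.sin(2 * np.pi * f_led * t)
# 'samples' oscillates slowly at f_alias rather than at f_led.
```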
{"title":"Psychophysical Study of Human Visual Perception of Flicker Artifacts in Automotive Digital Mirror Replacement Systems","authors":"Nicolai Behmann, Sousa Weddige, H. Blume","doi":"10.2352/J.PERCEPT.IMAGING.2021.4.1.010401","DOIUrl":"https://doi.org/10.2352/J.PERCEPT.IMAGING.2021.4.1.010401","url":null,"abstract":"Abstract Aliasing effects due to time-discrete capturing of amplitude-modulated light with a digital image sensor are perceived as flicker by humans. Especially when observing these artifacts in digital mirror replacement systems, they are annoying and can pose a risk. Therefore, ISO 16505 requires flicker-free reproduction for 90 % of people in these systems. Various psychophysical studies investigate the influence of large-area flickering of displays, environmental light, or flickering in television applications on perception and concentration. However, no detailed knowledge of subjective annoyance/irritation due to flicker from camera-monitor systems as a mirror replacement in vehicles exist so far, but the number of these systems is constantly increasing. This psychophysical study used a novel data set from real-world driving scenes and synthetic simulation with synthetic flicker. More than 25 test persons were asked to quantify the subjective annoyance level of different flicker frequencies, amplitudes, mean values, sizes, and positions. The results show that for digital mirror replacement systems, human subjective annoyance due to flicker is greatest in the 15 Hz range with increasing amplitude and magnitude. Additionally, the sensitivity to flicker artifacts increases with the duration of observation.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68835350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-contrast Acuity Under Strong Luminance Dynamics and Potential Benefits of Divisive Display Augmented Reality
Pub Date: 2021-01-01 | DOI: 10.2352/j.percept.imaging.2020.3.3.030501
C. Hung, Chloe Callahan-Flintoft, P. Fedele, Kim F. Fluitt, Barry D. Vaughan, Anthony J. Walker, Min Wei
Abstract Understanding and predicting outdoor visual performance in augmented reality (AR) requires characterizing and modeling vision under strong luminance dynamics, including luminance differences of 10,000-to-1 in a single image (high dynamic range, HDR). Classic models of vision, based on displays with 100-to-1 luminance contrast, have limited ability to generalize to HDR environments. An important question is whether low-contrast visibility, potentially useful for titrating saliency for AR applications, is resilient to saccade-induced strong luminance dynamics. The authors developed an HDR display system with up to 100,000-to-1 contrast and assessed how strong luminance dynamics affect low-contrast visual acuity. They show that, immediately following flashes of 25× or 100× luminance, visual acuity is unaffected at 90% letter Weber contrast and only minimally affected at lower letter contrasts (up to +0.20 LogMAR for 10% contrast). The resilience of low-contrast acuity across luminance changes opens up research on divisive display AR (ddAR) to effectively titrate salience under naturalistic HDR luminance.
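Letter Weber contrast, the variable manipulated in the acuity task, is the luminance difference between letter and background divided by the background luminance. A short worked sketch (the luminance values are illustrative assumptions, not the study's display settings):

```python
# Weber contrast of a letter against its background:
#   C = (L_letter - L_background) / L_background
# Negative values correspond to dark letters on a lighter field.

def weber_contrast(l_letter: float, l_background: float) -> float:
    """Signed Weber contrast of a letter on a uniform background."""
    return (l_letter - l_background) / l_background

l_bg = 100.0                          # background luminance, cd/m^2 (assumed)
print(weber_contrast(10.0, l_bg))     # -0.9  -> a 90% letter contrast
print(weber_contrast(90.0, l_bg))     # -0.1  -> a 10% letter contrast
```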
{"title":"Low-contrast Acuity Under Strong Luminance Dynamics and Potential Benefits of Divisive Display Augmented Reality","authors":"C. Hung, Chloe Callahan-Flintoft, P. Fedele, Kim F. Fluitt, Barry D. Vaughan, Anthony J. Walker, Min Wei","doi":"10.2352/j.percept.imaging.2020.3.3.030501","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2020.3.3.030501","url":null,"abstract":"Abstract Understanding and predicting outdoor visual performance in augmented reality (AR) requires characterizing and modeling vision under strong luminance dynamics, including luminance differences of 10000-to-1 in a single image (high dynamic range, HDR). Classic models of vision, based on displays with 100-to-1 luminance contrast, have limited ability to generalize to HDR environments. An important question is whether low-contrast visibility, potentially useful for titrating saliency for AR applications, is resilient to saccade-induced strong luminance dynamics. The authors developed an HDR display system with up to 100,000-to-1 contrast and assessed how strong luminance dynamics affect low-contrast visual acuity. They show that, immediately following flashes of 25× or 100× luminance, visual acuity is unaffected at 90% letter Weber contrast and only minimally affected at lower letter contrasts (up to +0.20 LogMAR for 10% contrast). The resilience of low-contrast acuity across luminance changes opens up research on divisive display AR (ddAR) to effectively titrate salience under naturalistic HDR luminance.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"46 1","pages":"10501-1"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78481700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FP-Nets for Blind Image Quality Assessment
Pub Date: 2021-01-01 | DOI: 10.2352/J.PERCEPT.IMAGING.2021.4.1.010402
Philipp Grüning, E. Barth
Abstract Feature-Product networks (FP-nets) are a novel deep-network architecture inspired by principles of biological vision. These networks contain so-called FP-blocks that learn two different filters for each input feature map, the outputs of which are then multiplied. This architecture is inspired by models of end-stopped neurons, which are common in cortical areas V1 and, especially, V2. Here, the authors apply FP-nets to three blind image quality assessment (IQA) benchmarks. They show that FP-nets deliver state-of-the-art performance while being significantly more compact than competing models. A further improvement comes from a simple attention mechanism. The good results they report may be related to the use of these bio-inspired design principles.
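Based only on the description above (two learned filters per input feature map whose outputs are multiplied), an FP-block might be sketched as follows; the kernel size, depthwise grouping, and absence of normalization are assumptions, not details from the paper:

```python
# Minimal sketch of a Feature-Product block as described in the abstract:
# two filters are learned per input feature map and their outputs are
# multiplied elementwise. Architectural details here are assumptions.
import torch
import torch.nn as nn

class FPBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        # groups=channels makes each filter act on a single feature map.
        self.conv_a = nn.Conv2d(channels, channels, kernel_size,
                                padding=kernel_size // 2, groups=channels)
        self.conv_b = nn.Conv2d(channels, channels, kernel_size,
                                padding=kernel_size // 2, groups=channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The product nonlinearity models end-stopped neuron responses.
        return self.conv_a(x) * self.conv_b(x)

x = torch.randn(1, 16, 32, 32)
print(FPBlock(16)(x).shape)  # torch.Size([1, 16, 32, 32])
```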
{"title":"FP-Nets for Blind Image Quality Assessment","authors":"Philipp Grüning, E. Barth","doi":"10.2352/J.PERCEPT.IMAGING.2021.4.1.010402","DOIUrl":"https://doi.org/10.2352/J.PERCEPT.IMAGING.2021.4.1.010402","url":null,"abstract":"Abstract Feature-Product networks (FP-nets) are a novel deep-network architecture inspired by principles of biological vision. These networks contain the so-called FP-blocks that learn two different filters for each input feature map, the outputs of which are then multiplied. Such an architecture is inspired by models of end-stopped neurons, which are common in cortical areas V1 and especially in V2. The authors here use FP-nets on three image quality assessment (IQA) benchmarks for blind IQA. They show that by using FP-nets, they can obtain networks that deliver state-of-the-art performance while being significantly more compact than competing models. A further improvement that they obtain is due to a simple attention mechanism. The good results that they report may be related to the fact that they employ bio-inspired design principles.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68835362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cartography as Spatial Representation: A new assessment of the competing advantages and drawbacks across fields of science
Pub Date: 2021-01-01 | Epub Date: 2021-01-18 | DOI: 10.2352/ISSN.2470-1173.2021.11.HVEI-156
Christopher W Tyler
The history of cartography has been marked by the endless search for the perfect form for representing the information on a spherical surface manifold in the flat planar format of the printed page or computer screen. Dozens of cartographic formats have been proposed over the centuries, from ancient Greek times to the present. This is an issue not just for the mapping of the globe, but for all fields of science where spherical entities are found. The perceptual and representational advantages and drawbacks of many of these formats are considered, particularly in the tension between a unified representation, which is always distorted in some dimension, and a minimally distorted representation, which can only be obtained by segmentation into sectorial patches. The use of these same formats for the mapping of spherical manifolds is evaluated, from quantum physics through the mapping of the brain to the large-scale representation of the cosmos.
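The tension between unified and minimally distorted representations can be made concrete with the Mercator projection, whose local scale grows as the secant of latitude. An illustrative sketch (not drawn from the article):

```python
# Illustrative sketch: the Mercator projection maps longitude/latitude to
# the plane in a single unified chart, but stretches local scale by
# sec(latitude), so the flat map is necessarily distorted toward the poles.
import math

def mercator(lon_deg: float, lat_deg: float) -> tuple[float, float]:
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    return lon, math.log(math.tan(math.pi / 4 + lat / 2))

for lat in (0, 30, 60, 80):
    scale = 1 / math.cos(math.radians(lat))  # linear scale factor sec(lat)
    print(f"latitude {lat:2d} deg: local scale x{scale:.2f}")
```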
{"title":"Cartography as Spatial Representation: A new assessment of the competing advantages and drawbacks across fields of science.","authors":"Christopher W Tyler","doi":"10.2352/ISSN.2470-1173.2021.11.HVEI-156","DOIUrl":"https://doi.org/10.2352/ISSN.2470-1173.2021.11.HVEI-156","url":null,"abstract":"<p><p>The history of cartography has been marked by the endless search for the perfect form for the representation of the information on a spherical surface manifold into the flat planar format of the printed page or computer screen. Dozens of cartographic formats have been proposed over the centuries from ancient Greek times to the present. This is an issue not just for the mapping of the globe, but in all fields of science where spherical entities are found. The perceptual and representational advantages and drawbacks of many of these formats are considered, particularly in the tension between a unified representation, which is always distorted in some dimension, and a minimally distorted representation, which can only be obtained by segmentation into sectorial patches. The use of these same formats for the mapping of spherical manifolds are evaluated, from quantum physics through the mapping of the brain to the large-scale representation of the cosmos.</p>","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"Human Vision and Electronic Imaging 2021 ","pages":"1561-15610"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8562775/pdf/nihms-1716793.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39589029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From the Editors
Pub Date: 2021-01-01 | DOI: 10.2352/j.percept.imaging.2021.4.2.020101
B. Rogowitz, T. Pappas
{"title":"From the Editors","authors":"B. Rogowitz, T. Pappas","doi":"10.2352/j.percept.imaging.2021.4.2.020101","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2021.4.2.020101","url":null,"abstract":"<jats:p> </jats:p>","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"104 1","pages":"20101-1"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88554536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Difference in Impression between Genuine and Artificial Leather: Quantifying the Feeling of Authenticity
Pub Date: 2020-03-01 | DOI: 10.2352/j.percept.imaging.2020.3.2.020501
Shuhei Watanabe, S. Tominaga, T. Horiuchi
{"title":"The Difference in Impression between Genuine and Artificial Leather: Quantifying the Feeling of Authenticity","authors":"Shuhei Watanabe, S. Tominaga, T. Horiuchi","doi":"10.2352/j.percept.imaging.2020.3.2.020501","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2020.3.2.020501","url":null,"abstract":"","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"86 1","pages":"20501-1"},"PeriodicalIF":0.0,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85864829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Gaussian Spectra to Derive a Hue-linear Color Space
Pub Date: 2020-03-01 | DOI: 10.2352/j.percept.imaging.2020.3.2.020401
Luke Hellwig, M. Fairchild
A new color space, IGPGTG, was developed. IGPGTG uses the same structure as IPT, an established hue-uniform color space utilized in gamut-mapping applications. While IPT was fit to visual data on perceived hue, IGPGTG was optimized based on evidence linking the peak wavelength of Gaussian-shaped light spectra to their perceived hues. The performance of IGPGTG on perceived-hue data was compared to that of other established color spaces. Additionally, an experiment was run to directly compare the hue linearity of IGPGTG with that of other color spaces, using Case V of Thurstone's law of comparative judgment to generate hue-linearity scales. IGPGTG performed well in this experiment but poorly on extant visual data. The mixed results indicate that it is possible to derive a moderately hue-linear color space without visual data.
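Case V of Thurstone's law of comparative judgment, used here to generate hue-linearity scales, converts pairwise-comparison proportions into interval scale values through the inverse normal CDF. A minimal sketch with an invented proportion matrix:

```python
# Minimal sketch of Thurstone Case V scaling: P[i][j] is the proportion of
# judgments in which stimulus i was chosen over stimulus j. Each proportion
# is mapped through the inverse normal CDF, and row means give the scale
# values. The matrix below is invented for illustration.
import numpy as np
from scipy.stats import norm

P = np.array([[0.50, 0.70, 0.90],
              [0.30, 0.50, 0.75],
              [0.10, 0.25, 0.50]])   # diagonal is 0.5 by convention

Z = norm.ppf(P)              # unit-normal deviate for each proportion
scale = Z.mean(axis=1)       # Case V scale value per stimulus
print(scale.round(3))        # e.g. [ 0.602  0.05  -0.652]
```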
{"title":"Using Gaussian Spectra to Derive a Hue-linear Color Space","authors":"Luke Hellwig, M. Fairchild","doi":"10.2352/j.percept.imaging.2020.3.2.020401","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2020.3.2.020401","url":null,"abstract":". A new color space, IGPGTG, was developed. IGPGTG uses the same structure as IPT, an established hue-uniform color space utilized in gamut mapping applications. While IPT was fit to visual data on the perceived\u0000 hue, IGPGTG was optimized based on evidence linking the peak wavelength of Gaussian-shaped light spectra to their perceived hues. The performance of IGPGTG on perceived hue data was compared to the performance\u0000 of other established color spaces. Additionally, an experiment was run to directly compare the hue linearity of IGPGTG with those of other color spaces by using Case V of Thurstone's law of comparative judgment to generate hue-linearity scales. IGPGTG\u0000 performed well in this experiment but poorly on extant visual data. The mixed results indicate that it is possible to derive a moderately hue-linear color space without visual data.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"1 1","pages":"244-251"},"PeriodicalIF":0.0,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42717189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Contribution of Motion Parallax and Stereopsis to the Sense of Presence in Virtual Reality
Pub Date: 2020-03-01 | DOI: 10.2352/j.percept.imaging.2020.3.2.020502
Siavash Eftekharifar, A. Thaler, N. Troje
Abstract The sense of presence is defined as a subjective feeling of being situated in an environment and occupying a location therein, and it is a defining feature of virtual environments. In two experiments, we investigated the relative contribution of motion parallax and stereopsis to the sense of presence, using two versions of the classic pit room paradigm in virtual reality. In Experiment 1, participants were asked to cross a deep abyss between two platforms on a narrow plank. Participants completed the task under three experimental conditions: (1) when the lateral component of motion parallax was disabled, (2) when stereopsis was disabled, and (3) when both stereopsis and motion parallax were available. As a subjective measure of presence, participants completed a presence questionnaire after each condition. Additionally, electrodermal activity (EDA) was recorded as a measure of anxiety. In Experiment 1, EDA responses were significantly higher with restricted motion parallax than in the other two conditions. However, no difference was observed in the subjective presence scores across the three conditions. To test whether these results were due to the nature of the environment, participants in Experiment 2 experienced a slightly less stressful environment, in which they were asked to stand on a ledge and drop virtual balls onto specified targets in the abyss. The same experimental manipulations were used as in Experiment 1. Again, EDA responses were significantly higher when motion parallax was impaired than when stereopsis was disabled. The results of the presence questionnaire revealed a reduced sense of presence with impaired motion parallax compared to the normal viewing condition. Across the two experiments, our results unexpectedly demonstrate that presence in virtual environments is not necessarily linked to EDA responses elicited by affective situations, as has been implied by earlier studies.
{"title":"Contribution of Motion Parallax and Stereopsis to the Sense of Presence in Virtual Reality","authors":"Siavash Eftekharifar, A. Thaler, N. Troje","doi":"10.2352/j.percept.imaging.2020.3.2.020502","DOIUrl":"https://doi.org/10.2352/j.percept.imaging.2020.3.2.020502","url":null,"abstract":"Abstract The sense of presence is defined as a subjective feeling of being situated in an environment and occupying a location therein. The sense of presence is a defining feature of virtual environments. In two experiments, we aimed at investigating the relative contribution\u0000 of motion parallax and stereopsis to the sense of presence, using two versions of the classic pit room paradigm in virtual reality. In Experiment 1, participants were asked to cross a deep abyss between two platforms on a narrow plank. Participants completed the task under three experimental\u0000 conditions: (1) when the lateral component of motion parallax was disabled, (2) when stereopsis was disabled, and (3) when both stereopsis and motion parallax were available. As a subjective measure of presence, participants completed a presence questionnaire after each condition.\u0000 Additionally, electrodermal activity (EDA) was recorded as a measure of anxiety. In Experiment 1, EDA responses were significantly higher with restricted motion parallax as compared to the other two conditions. However, no difference was observed in terms of the subjective presence scores\u0000 across the three conditions. To test whether these results were due to the nature of the environment, participants in Experiment 2 experienced a slightly less stressful environment, where they were asked to stand on a ledge and drop virtual balls to specified targets into the abyss. The same\u0000 experimental manipulations were used as in Experiment 1. Again, the EDA responses were significantly higher when motion parallax was impaired as compared to when stereopsis was disabled. The results of the presence questionnaire revealed a reduced sense of presence with impaired motion parallax\u0000 compared to the normal viewing condition. Across the two experiments, our results unexpectedly demonstrate that presence in the virtual environments is not necessarily linked to EDA responses elicited by affective situations as has been implied by earlier studies.","PeriodicalId":73895,"journal":{"name":"Journal of perceptual imaging","volume":"3 1","pages":"20502-1"},"PeriodicalIF":0.0,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78336413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}