Pub Date: 2023-03-01 | DOI: 10.1177/20416695231165142
Nicholas J Wade
Pictorial portraits are viewed with two eyes even though they are mostly monocular: they have been produced from a single viewpoint (either by painters or photographers). The differences between the images in each eye are a consequence of the separation between the eyes rather than of differences between two pictorial images. Viewing with two eyes detracts from the monocular depth cues within the single portrait, because binocular vision also supplies information about the flatness of the pictorial surface. Binocular portraits, on the other hand, incorporate differences between two pictorial images, producing perceptual effects that cannot be seen with one eye alone. The differences can consist of small disparities that yield stereoscopic depth or large ones that produce binocular rivalry. Binocular portraits require viewing with a stereoscope, of which many varieties exist. Those shown here are anaglyphs, which can be viewed through red/cyan filters. They are not conventional stereoscopic portraits, in which the sitter is imaged from two slightly different locations. Rather, the binocular processes of cooperation (stereoscopic depth perception) and competition (binocular rivalry) are manipulated in the binocular portraits. The subjects shown in the anaglyphic portraits have been involved in the science and art of binocular vision.
Binocular portraiture. I-Perception, 14(2), 20416695231165142. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10116013/pdf/
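The red/cyan anaglyphs described above combine two views into a single picture by routing one view to the red channel and the other to the green and blue channels. A minimal NumPy sketch of this standard construction (not Wade's own procedure; the function name and the simple channel assignment are illustrative):

```python
import numpy as np

def make_anaglyph(left, right):
    """Red/cyan anaglyph from a left-eye and right-eye image.

    left, right: (H, W, 3) RGB arrays. The left view supplies the
    red channel and the right view supplies green and blue (cyan),
    so red/cyan filters deliver one view to each eye. This is the
    simplest channel-split scheme; practical anaglyphs often mix
    channels to reduce retinal rivalry.
    """
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]    # red   <- left eye
    out[..., 1] = right[..., 1]   # green <- right eye
    out[..., 2] = right[..., 2]   # blue  <- right eye
    return out
```

With identical left and right images the result is the original picture; disparities between the views become red/cyan fringes that the filters separate again.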
Pub Date: 2023-03-01 | DOI: 10.1177/20416695231162580
Justin A Chamberland, Charles A Collin
The Japanese and Caucasian Brief Affect Recognition Task (JACBART) has been proposed as a standardized method for measuring people's ability to accurately categorize briefly presented images of facial expressions. However, the factors that affect performance in this task are not fully understood. The current study explored the role of the forward mask's duration (fixed vs. variable) in brief affect categorization across expressions of the six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) and three presentation times (17, 67, and 500 ms). The current findings provide no evidence that a variable-duration forward mask negatively impacts brief affect categorization. However, efficiency and necessity thresholds varied across the expressions of emotion. Further exploration of the temporal dynamics of facial affect categorization will therefore require a consideration of these differences.
Effects of forward mask duration variability on the temporal dynamics of brief facial expression categorization. I-Perception, 14(2), 20416695231162580. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10031613/pdf/
Pub Date: 2023-03-01 | DOI: 10.1177/20416695231162010
Claus-Christian Carbon
When we attend to sculptures in museums, they may fascinate us through the mastery of the material, the inherent dynamics of body language, the contrapposto, or the sheer size of some statues, such as Michelangelo's David. What is less convincing, however, is the lifelikeness of the face. Most visitors experience dead faces, dead eyes, and static expressions. By merely adding paraphernalia to a face (e.g., a facemask or sunglasses), such unalive sculptures gain vividness and liveliness. This striking effect is demonstrated by applying a facemask and sunglasses to a sculpture on public display in Bamberg, but it can easily be demonstrated with any available sculpture. This simple method might help connect people with sculptures, or artworks in general, lowering the barrier between beholder and artwork and increasing their interaction.
Connecting the beholder with the artwork: Thoughts on gaining liveliness by the usage of paraphernalia. I-Perception, 14(2), 20416695231162010. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10009020/pdf/
Pub Date: 2023-03-01 | DOI: 10.1177/20416695231160420
Rika Oya, Akihiro Tanaka
Previous research has revealed that several emotions can be perceived via touch. What advantages does touch have over other nonverbal communication channels? In our study, we compared the perception of emotions from touch with that from voice to examine the advantages of each channel at the level of emotional valence. In our experiment, the encoder expressed 12 different emotions by touching the decoder's arm or by uttering the syllable /e/, and the decoder judged the emotion. The results showed that the categorical average accuracy for negative emotions was higher for voice than for touch, whereas that for positive emotions was marginally higher for touch than for voice. These results suggest that the two channels (touch and voice) have different advantages for the perception of positive and negative emotions.
Touch and voice have different advantages in perceiving positive and negative emotions. I-Perception, 14(2), 20416695231160420. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10031610/pdf/
Pub Date: 2023-03-01 | DOI: 10.1177/20416695231160402
Ryosuke Niimi
Humans perceive 3D shapes even from 2D images. A slant can be perceived from images of slanted rectangular objects, which include texture gradients and linear perspective contours. How does the visual system integrate and utilize these pictorial depth cues? A new visual illusion that provides some insights into this issue was examined. A box-like object with disk figures drawn on its upper surface was rendered in a linear perspective image. The length of the side line of the object's upper surface was overestimated, probably because the foreshortened disks served as slant cues. This illusory effect occurred even when observers estimated the line length on the image plane, suggesting that the slant perception from the disks was mandatory. Five experiments revealed that multiple depth cues were utilized for slant perception: the aspect ratio of the disks, texture gradients, trapezium/parallelogram contours, and the side surfaces of the box-like object. However, foreshortened disks outside the object were not utilized as depth cues. These results suggest that various depth cues belonging to the target object are integrated for slant perception.
The contributions of surface features and contour shapes to object slant perception. I-Perception, 14(2), 20416695231160402. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10009043/pdf/
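The foreshortened disks described above constrain slant geometrically: under orthographic projection a circle projects to an ellipse whose minor-to-major axis ratio equals the cosine of the surface slant, so the aspect ratio alone specifies the slant magnitude. A small sketch of that relation (an illustrative helper, not the article's analysis; the article measures perceived, not geometric, slant):

```python
import math

def slant_from_disk(minor_axis, major_axis):
    """Surface slant (degrees from frontoparallel) implied by a
    foreshortened disk.

    Under orthographic projection, a circle on a surface slanted by
    angle s projects to an ellipse with minor/major = cos(s), so
    the aspect ratio specifies the slant magnitude (not its sign).
    """
    return math.degrees(math.acos(minor_axis / major_axis))
```

For example, a disk drawn half as tall as it is wide is consistent with a 60-degree slant, which is the kind of cue the experiments show observers cannot ignore.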
Pub Date: 2023-03-01 | DOI: 10.1177/20416695231159182
Aravind Battaje, Oliver Brock, Martin Rolfs
We implement Adelson and Bergen's spatiotemporal energy model, extended to three dimensions (x-y-t), in an interactive tool. It offers an easy understanding of early (first-order) visual motion perception. We demonstrate its usefulness in explaining an assortment of phenomena, including some that are typically not associated with the spatiotemporal energy model.
An interactive motion perception tool for kindergarteners (and vision scientists). I-Perception, 14(2), 20416695231159182. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10064475/pdf/
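The spatiotemporal energy model underlying the tool computes opponent motion energy from quadrature pairs of space-time-oriented filters. A minimal x-t sketch (one spatial dimension plus time, rather than the tool's full x-y-t extension; the filter size and frequencies here are illustrative choices, not the tool's parameters):

```python
import numpy as np

def motion_energy(stimulus, fs=1.0, ft=1.0):
    """Opponent motion energy (cf. Adelson & Bergen, 1985) in x-t.

    stimulus: (9, 9) array, time by space, matching the filter
    support used here. Oriented quadrature Gabor pairs tuned to
    rightward and leftward drift are correlated with the stimulus;
    the quadrature outputs are squared and summed per direction,
    and the two directions subtracted. Positive output = rightward.
    """
    t = np.arange(-4, 5)[:, None]          # filter time axis
    x = np.arange(-4, 5)[None, :]          # filter space axis
    envelope = np.exp(-(x**2 + t**2) / 8.0)
    energies = []
    for direction in (+1, -1):             # rightward, leftward
        phase = 2 * np.pi * (direction * fs * x - ft * t) / 9.0
        even = envelope * np.cos(phase)
        odd = envelope * np.sin(phase)
        energies.append(np.sum(even * stimulus) ** 2
                        + np.sum(odd * stimulus) ** 2)
    return energies[0] - energies[1]
```

Feeding the model a rightward-drifting grating yields a positive opponent energy; reversing the stimulus in time flips the sign, which is the basic first-order direction computation the tool visualizes.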
Pub Date: 2023-03-01 | DOI: 10.1177/20416695231163473
Eleftheria Pistolas, Johan Wagemans
In recent years, awareness of the influence of different modalities on taste perception has grown. Although previous research in crossmodal taste perception has touched upon the bipolar distinction between softness/smoothness and roughness/angularity, ambiguity largely remains surrounding other crossmodal correspondences between taste and the specific textures we regularly use to describe our food, such as crispy or crunchy. Sweetness has previously been found to be associated with soft textures, but our current understanding does not go beyond the basic distinction between roughness and smoothness. In particular, the role of texture in taste perception remains relatively understudied. The current study consisted of two parts. First, because of the lack of clarity concerning specific associations between basic tastes and textures, an online questionnaire served to assess whether consistent associations between texture words and taste words exist and how these arise intuitively. The second part consisted of a taste experiment with factorial combinations of four tastes and four textures. The results of the questionnaire study showed that consistent associations are made between soft and sweet and between crispy and salty at the conceptual level. The results of the taste experiment largely supported these findings at the perceptual level. In addition, the experiment allowed a closer look into the complexity found regarding the associations between sour and crunchy, and between bitter and sandy.
Crossmodal correspondences and interactions between texture and taste perception. I-Perception, 14(2), 20416695231163473. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10069003/pdf/
Pub Date: 2023-03-01 | DOI: 10.1177/20416695231165182
Peter U Tse, Vincent Hayward
A novel haptic illusion is described where deformations of the fingertip skin lead to subsequent misperceptions of an object's shape.
The knobby ball illusion. I-Perception, 14(2), 20416695231165182. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10034292/pdf/
Pub Date: 2023-02-23 | eCollection Date: 2023-01-01 | DOI: 10.1177/20416695231157348
Yanna Ren, Hannan Li, Yan Li, Zhihan Xu, Rui Luo, Hang Ping, Xuan Ni, Jiajia Yang, Weiping Yang
Previous studies have shown that attention influences audiovisual integration (AVI) at multiple stages, but it remains unclear how AVI interacts with attentional load. In addition, while aging has been associated with sensory-functional decline, little is known about how older individuals integrate cross-modal information under attentional load. To investigate these issues, 20 older adults and 20 younger adults were recruited to perform a dual task comprising a multiple object tracking (MOT) task, which manipulated sustained visual attentional load, and an audiovisual discrimination task, which assessed AVI. The results showed that response times were shorter and hit rates were higher for audiovisual stimuli than for auditory or visual stimuli alone, and in younger adults than in older adults. The race model analysis showed that AVI was higher under the load_3 condition (monitoring two targets of the MOT task) than under any other load condition (no load [NL], or monitoring one or three targets). This effect was found regardless of age. However, AVI was lower in older adults than in younger adults under the NL condition. Moreover, the peak latency was longer and the time window of AVI was delayed in older adults compared with younger adults under all conditions. These results suggest that a light sustained visual attentional load increased AVI whereas a heavy load decreased it, which supports the claim that attentional resources are limited; we further propose that AVI is positively modulated by attentional resources. Finally, there were substantial effects of aging on AVI; AVI was delayed in older adults.
Sustained visual attentional load modulates audiovisual integration in older and younger adults. I-Perception, 14(1), 20416695231157348. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9950617/pdf/
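Race model analyses of the kind reported above typically test Miller's race-model inequality, which bounds the audiovisual response-time CDF by the sum of the two unimodal CDFs; exceeding the bound indicates integration beyond statistical facilitation. A generic sketch of that test on empirical CDFs (illustrative, not the authors' analysis code):

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Miller's (1982) race-model inequality at probe times t_grid.

    rt_a, rt_v, rt_av: 1-D arrays of response times (ms) from
    auditory-only, visual-only, and audiovisual trials. Returns
    F_AV(t) - min(F_A(t) + F_V(t), 1) at each probe time; positive
    values violate the inequality, i.e., responses are faster than
    any race between independent unimodal channels allows.
    """
    def ecdf(rts, t):
        # proportion of trials with RT <= each probe time
        return np.mean(rts[:, None] <= t[None, :], axis=0)
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound
```

Positive values over an early range of t are the usual signature of audiovisual integration; summarizing that positive area per load condition is one common way to compare AVI across conditions.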
Pub Date: 2023-02-09 | eCollection Date: 2023-01-01 | DOI: 10.1177/20416695231152144
Misa Kobayashi, Makoto Ichikawa
We examined the effects of emotional response, at different levels of valence and arousal, on the temporal resolution of visual processing by using photographs of various facial expressions. As an index of the temporal resolution of visual processing, we measured the minimum noticeable durations of desaturated photographs, using the method of constant stimuli, by switching colored facial-expression photographs to desaturated versions of the same photographs. Experiments 1 and 2 used facial photographs evoking various degrees of arousal and valence. These photographs were presented not only upright but also inverted, to reduce emotional response without changing the photographs' image properties. Results showed that the minimum duration needed to notice the monochrome photographs was shorter for anger, fear, and joy than for a neutral face when viewing upright photographs, but not when viewing inverted photographs. In Experiment 3, we used facial-expression photographs evoking various degrees of arousal. Results showed that the temporal resolution of visual processing increased with the degree of arousal. These results suggest that the arousal of emotional responses evoked by viewing facial expressions might increase the temporal resolution of visual processing.
Emotional response evoked by viewing facial expression pictures leads to higher temporal resolution. I-Perception, 14(1), 20416695231152144. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9943968/pdf/
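A minimum noticeable duration of the kind measured above can be read off constant-stimuli data by finding where the proportion of "noticed" responses crosses a criterion. A coarse illustrative sketch (linear interpolation as a stand-in for a full psychometric-function fit, which the authors would more plausibly have used):

```python
import numpy as np

def duration_threshold(durations, p_noticed, criterion=0.5):
    """Minimum noticeable duration from constant-stimuli data.

    durations: ascending stimulus durations (ms); p_noticed:
    proportion of trials on which the desaturated interval was
    noticed at each duration (assumed monotonically increasing).
    Returns the duration at which the psychometric function
    crosses `criterion`, found by linear interpolation.
    """
    return float(np.interp(criterion, p_noticed, durations))
```

A shorter threshold for, say, anger than for a neutral face would correspond to the higher temporal resolution reported in the abstract.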