Daria Kvasova, Llucia Coll, Travis Stewart, Salvador Soto-Faraco
{"title":"跨模态语义一致性引导真实场景中的自发定向。","authors":"Daria Kvasova, Llucia Coll, Travis Stewart, Salvador Soto-Faraco","doi":"10.1007/s00426-024-02018-8","DOIUrl":null,"url":null,"abstract":"<p><p>In real-world scenes, the different objects and events are often interconnected within a rich web of semantic relationships. These semantic links help parse information efficiently and make sense of the sensory environment. It has been shown that, during goal-directed search, hearing the characteristic sound of an everyday life object helps finding the affiliate objects in artificial visual search arrays as well as in naturalistic, real-life videoclips. However, whether crossmodal semantic congruence also triggers orienting during spontaneous, not goal-directed observation is unknown. Here, we investigated this question addressing whether crossmodal semantic congruence can attract spontaneous, overt visual attention when viewing naturalistic, dynamic scenes. We used eye-tracking whilst participants (N = 45) watched video clips presented alongside sounds of varying semantic relatedness with objects present within the scene. We found that characteristic sounds increased the probability of looking at, the number of fixations to, and the total dwell time on semantically corresponding visual objects, in comparison to when the same scenes were presented with semantically neutral sounds or just with background noise only. Interestingly, hearing object sounds not met with an object in the scene led to increased visual exploration. These results suggest that crossmodal semantic information has an impact on spontaneous gaze on realistic scenes, and therefore on how information is sampled. Our findings extend beyond known effects of object-based crossmodal interactions with simple stimuli arrays and shed new light on the role that audio-visual semantic relationships out in the perception of everyday life scenarios.</p>","PeriodicalId":48184,"journal":{"name":"Psychological Research-Psychologische Forschung","volume":" ","pages":"2138-2148"},"PeriodicalIF":2.2000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Crossmodal semantic congruence guides spontaneous orienting in real-life scenes.\",\"authors\":\"Daria Kvasova, Llucia Coll, Travis Stewart, Salvador Soto-Faraco\",\"doi\":\"10.1007/s00426-024-02018-8\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>In real-world scenes, the different objects and events are often interconnected within a rich web of semantic relationships. These semantic links help parse information efficiently and make sense of the sensory environment. It has been shown that, during goal-directed search, hearing the characteristic sound of an everyday life object helps finding the affiliate objects in artificial visual search arrays as well as in naturalistic, real-life videoclips. However, whether crossmodal semantic congruence also triggers orienting during spontaneous, not goal-directed observation is unknown. Here, we investigated this question addressing whether crossmodal semantic congruence can attract spontaneous, overt visual attention when viewing naturalistic, dynamic scenes. We used eye-tracking whilst participants (N = 45) watched video clips presented alongside sounds of varying semantic relatedness with objects present within the scene. 
We found that characteristic sounds increased the probability of looking at, the number of fixations to, and the total dwell time on semantically corresponding visual objects, in comparison to when the same scenes were presented with semantically neutral sounds or just with background noise only. Interestingly, hearing object sounds not met with an object in the scene led to increased visual exploration. These results suggest that crossmodal semantic information has an impact on spontaneous gaze on realistic scenes, and therefore on how information is sampled. Our findings extend beyond known effects of object-based crossmodal interactions with simple stimuli arrays and shed new light on the role that audio-visual semantic relationships out in the perception of everyday life scenarios.</p>\",\"PeriodicalId\":48184,\"journal\":{\"name\":\"Psychological Research-Psychologische Forschung\",\"volume\":\" \",\"pages\":\"2138-2148\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2024-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Psychological Research-Psychologische Forschung\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1007/s00426-024-02018-8\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/8/6 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q2\",\"JCRName\":\"PSYCHOLOGY, EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Psychological Research-Psychologische Forschung","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1007/s00426-024-02018-8","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/8/6 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Crossmodal semantic congruence guides spontaneous orienting in real-life scenes.
In real-world scenes, the different objects and events are often interconnected within a rich web of semantic relationships. These semantic links help parse information efficiently and make sense of the sensory environment. It has been shown that, during goal-directed search, hearing the characteristic sound of an everyday object helps find the affiliated object, both in artificial visual search arrays and in naturalistic, real-life video clips. However, whether crossmodal semantic congruence also triggers orienting during spontaneous, non-goal-directed observation is unknown. Here, we addressed this question by investigating whether crossmodal semantic congruence can attract spontaneous, overt visual attention when viewing naturalistic, dynamic scenes. We used eye-tracking whilst participants (N = 45) watched video clips presented alongside sounds of varying semantic relatedness to objects present within the scene. We found that characteristic sounds increased the probability of looking at, the number of fixations on, and the total dwell time on semantically corresponding visual objects, compared to when the same scenes were presented with semantically neutral sounds or with background noise alone. Interestingly, hearing object sounds that did not match any object in the scene led to increased visual exploration. These results suggest that crossmodal semantic information influences spontaneous gaze on realistic scenes, and therefore how information is sampled. Our findings extend beyond known effects of object-based crossmodal interactions with simple stimulus arrays and shed new light on the role that audio-visual semantic relationships play in the perception of everyday-life scenarios.
Journal introduction:
Psychological Research/Psychologische Forschung publishes articles that contribute to a basic understanding of human perception, attention, memory, and action. The Journal is devoted to the dissemination of knowledge based on firm experimental ground, but not to particular approaches or schools of thought. Theoretical and historical papers are welcome to the extent that they serve this general purpose; papers of an applied nature are acceptable if they contribute to basic understanding or serve to bridge the often felt gap between basic and applied research in the field covered by the Journal.