{"title":"单对象一致性促进多感官配对学习:统一的证据","authors":"Elan Barenholtz, D. Lewkowicz, Lauren Kogelschatz","doi":"10.1163/187847612X646343","DOIUrl":null,"url":null,"abstract":"Learning about objects often involves associating multisensory properties such as the taste and smell of a food or the face and voice of a person. Here, we report a novel phenomenon in associative learning in which pairs of multisensory attributes that are consistent with deriving from a single object are learned better than pairs that are not. In Experiment 1, we found superior learning of arbitrary pairs of human faces and voices when they were gender-congruent — and thus were consistent with belonging to a single personal identity — compared with gender-incongruent pairs. In Experiment 2, we found a similar advantage when the learned pair consisted of species-congruent animal pictures and vocalizations vs. species-incongruent pairs. In Experiment 3, we found that temporal synchrony — which provides a highly reliable alternative cue that properties derive from a single object — improved performance specifically for the incongruent pairs. Together, these findings demonstrate a novel principle in associative learning in which multisensory pairs that are consistent with having a single object as their source are learned more easily than multisensory pairs that are not. These results suggest that unitizing multisensory properties into a single representation may be a specialized learning mechanism.","PeriodicalId":49553,"journal":{"name":"Seeing and Perceiving","volume":"87 1","pages":"11-11"},"PeriodicalIF":0.0000,"publicationDate":"2012-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1163/187847612X646343","citationCount":"0","resultStr":"{\"title\":\"Single-object consistency facilitates multisensory pair learning: Evidence for unitization\",\"authors\":\"Elan Barenholtz, D. Lewkowicz, Lauren Kogelschatz\",\"doi\":\"10.1163/187847612X646343\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Learning about objects often involves associating multisensory properties such as the taste and smell of a food or the face and voice of a person. Here, we report a novel phenomenon in associative learning in which pairs of multisensory attributes that are consistent with deriving from a single object are learned better than pairs that are not. In Experiment 1, we found superior learning of arbitrary pairs of human faces and voices when they were gender-congruent — and thus were consistent with belonging to a single personal identity — compared with gender-incongruent pairs. In Experiment 2, we found a similar advantage when the learned pair consisted of species-congruent animal pictures and vocalizations vs. species-incongruent pairs. In Experiment 3, we found that temporal synchrony — which provides a highly reliable alternative cue that properties derive from a single object — improved performance specifically for the incongruent pairs. Together, these findings demonstrate a novel principle in associative learning in which multisensory pairs that are consistent with having a single object as their source are learned more easily than multisensory pairs that are not. 
These results suggest that unitizing multisensory properties into a single representation may be a specialized learning mechanism.\",\"PeriodicalId\":49553,\"journal\":{\"name\":\"Seeing and Perceiving\",\"volume\":\"87 1\",\"pages\":\"11-11\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1163/187847612X646343\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Seeing and Perceiving\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1163/187847612X646343\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Seeing and Perceiving","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1163/187847612X646343","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Single-object consistency facilitates multisensory pair learning: Evidence for unitization
Learning about objects often involves associating multisensory properties such as the taste and smell of a food or the face and voice of a person. Here, we report a novel phenomenon in associative learning in which pairs of multisensory attributes that are consistent with deriving from a single object are learned better than pairs that are not. In Experiment 1, we found superior learning of arbitrary pairs of human faces and voices when they were gender-congruent — and thus were consistent with belonging to a single personal identity — compared with gender-incongruent pairs. In Experiment 2, we found a similar advantage when the learned pair consisted of species-congruent animal pictures and vocalizations vs. species-incongruent pairs. In Experiment 3, we found that temporal synchrony — which provides a highly reliable alternative cue that properties derive from a single object — improved performance specifically for the incongruent pairs. Together, these findings demonstrate a novel principle in associative learning in which multisensory pairs that are consistent with having a single object as their source are learned more easily than multisensory pairs that are not. These results suggest that unitizing multisensory properties into a single representation may be a specialized learning mechanism.