Pub Date: 2025-12-22 | DOI: 10.1038/s41539-025-00392-5
Orel Levy, Tal Shadi, Adi Korisky, Martin G Bleichner, Elana Zion Golumbic
Attending a lecture requires remaining focused for extended periods, which is particularly difficult in noisy environments or when lecture content is less engaging. Yet little is known about how these external (noise) and internal (interest) factors affect learners' neurophysiology. We measured brain activity (electroencephalogram; EEG) and physiological responses (skin conductance) during video-based learning, and assessed how neurophysiological responses were modulated by the presence of realistic background noise and by varying levels of interest throughout the lecture. Interest level showed pronounced neurophysiological effects, with low-interest segments associated with reduced neural speech tracking, elevated alpha power, reduced beta power, and increased arousal, a pattern consistent with lower engagement and increased listening effort. Interestingly, background noise had comparatively limited effects on neurophysiological responses. This dissociation between internal and external factors underscores the profound impact of content engagement on neurophysiological measures associated with learners' attention, beyond the sensory burden of noise.
{"title":"Differential effects of external noise and situational interest on neurophysiological responses during video based learning.","authors":"Orel Levy, Tal Shadi, Adi Korisky, Martin G Bleichner, Elana Zion Golumbic","doi":"10.1038/s41539-025-00392-5","DOIUrl":"10.1038/s41539-025-00392-5","url":null,"abstract":"<p><p>Attending a lecture requires remaining focused for extended periods, which is particularly difficult in noisy environments or when lecture content is less engaging. Yet little is known about how these external (noise) and internal (interest) factors affect learners' neurophysiology. We measured brain activity (electroencephalogram; EEG) and physiological responses (skin conductance) during video-based learning, and assessed how neurophysiological responses were modulated by the presence of realistic background noise and by varying levels of interest throughout the lecture. Interest-level showed pronounced neurophysiological effects, with low-interest segments associated with reduced neural speech tracking, elevated alpha-power, reduced beta-power, and increased arousal, a pattern consistent with lower engagement and increased listening effort. Interestingly, background noise had comparatively limited effects on neurophysiological responses. These dissociated impacts of internal and external factors on speech processing during learning, emphasize the profound impact of content-engagement on neurophysiological measures associated with learner's attention, beyond the sensory burden of noise.</p>","PeriodicalId":48503,"journal":{"name":"npj Science of Learning","volume":" ","pages":"92"},"PeriodicalIF":3.0,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12748724/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145811888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-19 | DOI: 10.1038/s41539-025-00381-8
Ana Zappa, Mel Slater, Antoni Rodriguez-Fornells
Social interaction can play a crucial role in how a second language (L2) is learned. In this review, we examine theoretical frameworks and empirical studies demonstrating how social factors influence L2 learning, and we identify gaps in the existing literature. We propose using virtual reality (VR) as a methodology to fill these gaps with controlled, ecologically valid social simulations that can elucidate how social factors shape L2 learning.
{"title":"Social interaction shapes and boosts second language learning: virtual reality can show us how.","authors":"Ana Zappa, Mel Slater, Antoni Rodriguez-Fornells","doi":"10.1038/s41539-025-00381-8","DOIUrl":"10.1038/s41539-025-00381-8","url":null,"abstract":"<p><p>Social interaction can play a crucial role in how a second language (L2) is learned. In the current review, we examine theoretical frameworks and empirical studies demonstrating how social factors influence L2 learning, but we also identify gaps in the current literature. We propose using virtual reality (VR) as a methodology to fill these gaps with controlled, ecologically valid social simulations that can elucidate how social factors shape L2 learning.</p>","PeriodicalId":48503,"journal":{"name":"npj Science of Learning","volume":"10 1","pages":"90"},"PeriodicalIF":3.0,"publicationDate":"2025-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12715243/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145783401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-17 | DOI: 10.1038/s41539-025-00382-7
Angela Pasqualotto, Aaron Cochrane, Paola Venuti, Daphne Bavelier, Irene Altarelli
Audio-visual (AV) associations are central to many aspects of behavior, including the initial steps of learning to read. The acquisition of AV pairings has been explored in individuals with varying literacy skills, including children with developmental dyslexia. Most previous studies examined performance in AV associative tasks looking at the pairings between linguistic auditory material and visual stimuli, thus confounding AV learning with phonological and/or verbal abilities. In the present study, we introduce an AV learning paradigm relying on non-linguistic auditory stimuli and novel visual shapes. We fit trial-by-trial performance and compare the response patterns of 52 Italian-speaking children with developmental dyslexia (DD) with those of age-matched (N = 54) and of younger, reading-matched (N = 51) typically-developing children. All groups showed increasing accuracy across trials, but children with DD learned less efficiently than their peers. These findings suggest that difficulties in forming AV associations through repeated exposure may underlie dyslexia, even when linguistic demands are minimized.
Title: "Impaired audio-visual associations in dyslexia: evidence beyond linguistic processing." npj Science of Learning, 1 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12764848/pdf/).
Pub Date: 2025-12-16 | DOI: 10.1038/s41539-025-00384-5
Anne T Park, Joseph Colantonio, Lourdes Delgado Reyes, Sophie D S Sharp, Andrew E Koepp, Elizabeth Bonawitz, Allyson P Mackey
Children who are more curious learn more in school, but little is known about how to promote curiosity-driven behaviors. In a preregistered experiment, 103 children (54 boys, 49 girls, ages 5-7 years) were randomly assigned to a condition in which they were encouraged to ask questions, or to listen carefully, during eight one-on-one science lessons over 2 weeks. Children in the question-asking condition valued new science information significantly more than children in the listening condition (Wilcoxon r = 0.23). Children with less background knowledge, as measured by their baseline vocabulary and science achievement, showed greater curiosity and learning benefits from question-asking. These results suggest that practice with question-asking can boost some aspects of curiosity and learning in science domains.
{"title":"Question asking practice fosters aspects of curiosity in science content in young children.","authors":"Anne T Park, Joseph Colantonio, Lourdes Delgado Reyes, Sophie D S Sharp, Andrew E Koepp, Elizabeth Bonawitz, Allyson P Mackey","doi":"10.1038/s41539-025-00384-5","DOIUrl":"10.1038/s41539-025-00384-5","url":null,"abstract":"<p><p>Children who are more curious learn more in school, but little is known about how to promote curiosity-driven behaviors. In a preregistered experiment, 103 children (54 boys, 49 girls, ages 5-7 years) were randomly assigned to a condition in which they were encouraged to ask questions, or to listen carefully, during eight one-on-one science lessons over 2 weeks. Children in the question-asking condition valued new science information significantly more than children in the listening condition (Wilcoxon r = 0.23). Children with less background knowledge, as measured by their baseline vocabulary and science achievement, showed greater curiosity and learning benefits from question-asking. These results suggest that practice with question-asking can boost some aspects of curiosity and learning in science domains.</p>","PeriodicalId":48503,"journal":{"name":"npj Science of Learning","volume":" ","pages":"2"},"PeriodicalIF":3.0,"publicationDate":"2025-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12770468/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145764189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-16 | DOI: 10.1038/s41539-025-00379-2
Pamela Villavicencio, Jonathan S Tsay, Cristina de la Malla
Motor adaptation is essential for keeping our actions well-calibrated. However, the role of training context, specifically the configuration of targets, in shaping motor adaptation remains poorly understood. We tested this by exposing participants to a visuomotor gain perturbation under two contexts: the Extent Group trained with targets at two amplitudes in a fixed angular direction, and the Angular Group trained with targets at equal amplitude in two angular directions. Strikingly, the groups differed in how they learned: the Angular Group relied predominantly on implicit adaptation, whereas the Extent Group relied more on explicit strategies. Additionally, the two groups differed in what they learned: the Angular Group acquired a translation rule, whereas the Extent Group captured the true gain rule. These findings show that training context determines both the processes engaged and the representations formed, underscoring its importance in shaping both how and what we learn.
{"title":"Target configuration determines how and what we learn during sensorimotor adaptation.","authors":"Pamela Villavicencio, Jonathan S Tsay, Cristina de la Malla","doi":"10.1038/s41539-025-00379-2","DOIUrl":"10.1038/s41539-025-00379-2","url":null,"abstract":"<p><p>Motor adaptation is essential for keeping our actions well-calibrated. However, the role of training context-specifically, the configuration of targets-in shaping motor adaptation remains poorly understood. We tested this by exposing participants to a visuomotor gain perturbation under two contexts: The Extent Group, which trained with targets at two amplitudes in a fixed angular direction, and the Angular Group, which trained with targets at equal amplitude in two angular directions. Strikingly, the groups differed in how they learned: the Angular Group relied predominantly on implicit adaptation, whereas the Extent Group relied more on explicit strategies. Additionally, the two groups differed in what they learned: the Angular Group acquired a translation rule, whereas the Extent Group captured the true gain rule. These findings underscore that training context determines both the processes engaged and the representations formed, underscoring its importance in shaping both how and what we learn.</p>","PeriodicalId":48503,"journal":{"name":"npj Science of Learning","volume":"10 1","pages":"89"},"PeriodicalIF":3.0,"publicationDate":"2025-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12708642/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145769663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-16 | DOI: 10.1038/s41539-025-00387-2
Sean Devine, James Goulding, John Harvey, Anya Skatova, A Ross Otto
{"title":"Publisher Correction: How decoy options ferment choice biases in real-world consumer decision-making.","authors":"Sean Devine, James Goulding, John Harvey, Anya Skatova, A Ross Otto","doi":"10.1038/s41539-025-00387-2","DOIUrl":"10.1038/s41539-025-00387-2","url":null,"abstract":"","PeriodicalId":48503,"journal":{"name":"npj Science of Learning","volume":"10 1","pages":"88"},"PeriodicalIF":3.0,"publicationDate":"2025-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12708634/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145769668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-12 | DOI: 10.1038/s41539-025-00390-7
Ming Song, Luan Li, Danni He, Qing Cai
Language and reading experience foster interpersonal alignment in communication, yet their impact on shared and idiosyncratic neural patterns in language comprehension remains underexplored. In this study, we investigate how individual differences in reading experience influence neural similarity across readers. We used a topicalized Author Recognition Test to profile participants' reading experience across diverse topics and an fMRI task to measure neural activity while participants read narrative and expository texts. We found that greater print exposure was associated with enhanced alignment with others in bilateral semantic regions during narrative reading. In contrast, during expository comprehension, higher print exposure was related to more idiosyncratic patterns in the frontoparietal control network (FPN). Inter-subject representational similarity analysis further revealed shared brain-behavior patterns between distributed reading experience and activity in the default mode network (DMN). These findings highlight how cumulative reading experience is related to both shared and individualized neural dynamics during language comprehension.
{"title":"Reading experience reveals shared and idiosyncratic neural patterns during text comprehension.","authors":"Ming Song, Luan Li, Danni He, Qing Cai","doi":"10.1038/s41539-025-00390-7","DOIUrl":"10.1038/s41539-025-00390-7","url":null,"abstract":"<p><p>Language and reading experience foster interpersonal alignment in communication, yet their impact on shared and idiosyncratic neural patterns in language comprehension remains underexplored. In this study, we investigate how individual differences in reading experience influence neural similarity across readers. We used a topicalized Author Recognition Test to profile participants' reading experience across diverse topics and used an fMRI task to measure neural activity while participants read narrative and expository texts. We found that greater print exposure was associated with enhanced alignment with others in bilateral semantic regions during narrative reading. In contrast, during expository comprehension, higher print exposure was related to more idiosyncratic patterns in the frontoparietal control network (FPN). Inter-subject representational similarity analysis further revealed shared brain-behavior patterns between distributed reading experience and activities in the default mode network (DMN). These findings highlight how accumulative reading experience is related to both shared and individualized neural dynamics during language comprehension.</p>","PeriodicalId":48503,"journal":{"name":"npj Science of Learning","volume":" ","pages":"8"},"PeriodicalIF":3.0,"publicationDate":"2025-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12804784/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145745075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-09 | DOI: 10.1038/s41539-025-00388-1
Jing Teng, Xinuo Qiao, Kelong Lu, Tuo Liu, Xinyue Wang, Zhenni Gao, Tingting Yu, Ning Hao
Visual arts education has been linked to cognitive and neural benefits, yet the neural mechanisms associated with creativity remain unclear. This study examined how long-term engagement in design-related visual arts education relates to creative performance and brain function. Using a quasi-experimental design with propensity score matching, we compared design majors to matched non-design majors. Participants completed visual art creative tasks (product and book cover design) and divergent thinking tasks (AUT, TTCT-figural) during fNIRS recording. The design group outperformed peers across tasks and showed greater left dorsolateral prefrontal activation during early idea generation, while non-design peers relied more on sensory and motor regions. Functional connectivity revealed reduced coupling within task-relevant circuits, indicating greater neural efficiency. Dynamic network analysis showed design majors spent more time in efficient states and switched between states more flexibly. These findings suggest that design-related visual arts education may support creativity through efficient and flexible brain network engagement.
{"title":"Neural mechanisms underpinning the association between visual arts education and creativity.","authors":"Jing Teng, Xinuo Qiao, Kelong Lu, Tuo Liu, Xinyue Wang, Zhenni Gao, Tingting Yu, Ning Hao","doi":"10.1038/s41539-025-00388-1","DOIUrl":"10.1038/s41539-025-00388-1","url":null,"abstract":"<p><p>Visual arts education has been linked to cognitive and neural benefits, yet the neural mechanisms associated with creativity remain unclear. This study examined how long-term engagement in design-related visual arts education relates to creative performance and brain function. Using a quasi-experimental design with propensity score matching, we compared design majors to matched non-design majors. Participants completed visual art creative tasks (product and book cover design) and divergent thinking tasks (AUT, TTCT-figural) during fNIRS recording. The design group outperformed peers across tasks and showed greater left dorsolateral prefrontal activation during early idea generation, while non-design peers relied more on sensory and motor regions. Functional connectivity revealed reduced coupling within task-relevant circuits, indicating greater neural efficiency. Dynamic network analysis showed design majors spent more time in efficient states and switched between states more flexibly. These findings suggest that design-related visual arts education may support creativity through efficient and flexible brain network engagement.</p>","PeriodicalId":48503,"journal":{"name":"npj Science of Learning","volume":" ","pages":"6"},"PeriodicalIF":3.0,"publicationDate":"2025-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12804192/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145716342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-09 | DOI: 10.1038/s41539-025-00389-0
Hui Kou, Wei Luo, Xiaodong Li, Jia Wu, Qianguo Xiao, Taiyong Bi
The effect of mindfulness training on working memory is unclear. The current study sought to confirm the impact of mindfulness training on working memory for facial stimuli and to reveal the cognitive mechanisms underlying this effect using drift-diffusion modeling (DDM). Using a delayed match-to-sample task with facial stimuli, we measured memory performance across five emotional categories (happy, sad, fearful, angry, neutral). Sixty participants received five weeks of emotion-targeted mindfulness training and were compared with 60 waitlist controls. Assessments pre-training, post-training, and at one-month follow-up revealed significantly improved memory accuracy for all emotions except fear, with the effects persisting for one month. More importantly, drift-diffusion modeling showed increased drift rates across emotional categories post-training. Furthermore, accuracy improvements strongly correlated with drift-rate enhancements within each emotion category. These findings demonstrate that mindfulness training induces lasting improvements in both the accuracy and processing efficiency of visual working memory, independent of facial emotion, clarifying its cognitive mechanisms.
Title: "Mindfulness training enhances face working memory: evidence from the drift-diffusion model." npj Science of Learning, 7 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12804977/pdf/).
Pub Date: 2025-12-03 | DOI: 10.1038/s41539-025-00385-4
Asa Kucinkas, Chrysa Retsa, Peter B L Meijer, Mark T Wallace, Monica Gori, Micah M Murray
Visual-to-auditory sensory substitution devices (SSDs) translate images to sounds. One SSD, The vOICe, translates a pixel's vertical position into pitch and its horizontal position into time. This mapping is based primarily on technical considerations for preserving image content in human-audible sounds, without presupposing intuitiveness, although some literature also invokes crossmodal correspondences in perception, such as pitch for elevation. We investigated these presuppositions and the efficacy of learning a traditional algorithm (i.e., pitch indicating elevation and time indicating azimuth) versus a reversed algorithm (i.e., pitch indicating azimuth and time indicating elevation) or an arbitrary single-tone control mapping (i.e., each visual stimulus was represented by a single non-systematic pitch-time pairing without structured spatial correspondences). Sixty sighted adults participated and were randomly assigned to the Traditional, Reversed, or Control group. They completed learning and evaluation sessions using simplified black-and-white visual stimuli. Both the Traditional and Reversed groups learned their mappings within 30 minutes and successfully recognized novel stimuli, outperforming the Control group but not differing from each other. Structured mappings facilitate SSD learning, and mapping pixel position onto spectral-temporal acoustic axes appears flexible rather than anchored to crossmodal correspondences. These findings reveal how SSDs may be rendered bespoke across user, stimulus, and functionality levels.
{"title":"Learning visual to auditory sensory substitution reveals flexibility in image to sound mapping.","authors":"Asa Kucinkas, Chrysa Retsa, Peter B L Meijer, Mark T Wallace, Monica Gori, Micah M Murray","doi":"10.1038/s41539-025-00385-4","DOIUrl":"10.1038/s41539-025-00385-4","url":null,"abstract":"<p><p>Visual-to-auditory sensory substitution devices (SSDs) translate images to sounds. One SSD, The vOICe, translates a pixel's vertical position into pitch and horizontal position into time. This mapping is primarily based on technical considerations for preserving image content in human-audible sounds without presupposing intuitiveness, although some literature also invokes crossmodal correspondences in perception, such as pitch for elevation. We investigated these presuppositions and the efficacy of learning a traditional algorithm (i.e., pitch indicating elevation and time indicating azimuth) versus a reversed algorithm (i.e., pitch indicating azimuth and time indicating elevation), or an arbitrary single-tone control mapping (i.e., each visual stimulus was represented by a single non-systematic pitch-time pairing without structured spatial correspondences). Sixty sighted adults participated with random assignment to the Traditional, Reversed, or Control groups. They completed learning and evaluation sessions using simplified black-and-white visual stimuli. Both the Traditional and Reversed groups learned mappings within 30 minutes and demonstrated successful recognition of novel stimuli, outperforming the Control group but not differing between them. Structured mappings facilitate SSD learning. Mapping pixel position onto spectral-temporal acoustic axes appears flexible, rather than anchored to cross-modal correspondences. These findings reveal how SSDs may be rendered bespoke across user, stimuli, and functionality levels.</p>","PeriodicalId":48503,"journal":{"name":"npj Science of Learning","volume":" ","pages":"4"},"PeriodicalIF":3.0,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12783664/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145670348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}