Charlotte Pouw, Marianne de Heer Kloots, Afra Alishahi, Willem Zuidema
Title: Perception of Phonological Assimilation by Neural Speech Recognition Models
Journal: Computational Linguistics, volume 41, issue 1 (impact factor 9.3)
DOI: 10.1162/coli_a_00526
Published: 2024-07-30 (Journal Article)
Citations: 0
Abstract
Human listeners effortlessly compensate for phonological changes during speech perception, often unconsciously inferring the intended sounds. For example, listeners infer the underlying /n/ when hearing an utterance such as “clea[m] pan”, where [m] arises from place assimilation to the following labial [p]. This article explores how the neural speech recognition model Wav2Vec2 perceives assimilated sounds, and identifies the linguistic knowledge that is implemented by the model to compensate for assimilation during Automatic Speech Recognition (ASR). Using psycholinguistic stimuli, we systematically analyze how various linguistic context cues influence compensation patterns in the model’s output. Complementing these behavioral experiments, our probing experiments indicate that the model shifts its interpretation of assimilated sounds from their acoustic form to their underlying form in its final layers. Finally, our causal intervention experiments suggest that the model relies on minimal phonological context cues to accomplish this shift. These findings represent a step towards better understanding the similarities and differences in phonological processing between neural ASR models and humans.
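The probing experiments described above can be illustrated with a minimal sketch: a linear probe is trained per layer to predict the underlying phoneme (/n/ vs. /m/) from hidden-state vectors, and probe accuracy is compared across layers. This is not the paper's actual code; the data below is synthetic (the class separation is deliberately made to grow with depth, mimicking the reported shift from acoustic to underlying form in the final layers), and all names are illustrative. In the real setup, the feature vectors would be Wav2Vec2 activations extracted at the assimilated segment.

```python
# Hypothetical layer-wise probing sketch with synthetic representations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_layers, n_items, dim = 12, 200, 32  # Wav2Vec2-base has 12 transformer layers

def probe_accuracy(layer):
    # Simulate hidden states whose class separation grows with depth,
    # standing in for activations at the assimilated [m]/[n] segment.
    y = rng.integers(0, 2, n_items)            # 0 = surface /m/, 1 = underlying /n/
    X = rng.normal(0.0, 1.0, (n_items, dim))
    X[:, 0] += 0.2 * layer * (2 * y - 1)       # signal strength scales with layer
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    # A linear probe: if it can decode the underlying phoneme, the
    # information is linearly present in that layer's representation.
    return LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)

accs = [probe_accuracy(layer) for layer in range(n_layers)]
```

Under this construction, probe accuracy rises from chance level in early layers toward ceiling in the final layers, which is the qualitative pattern the abstract reports for the underlying form.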
About the journal
Computational Linguistics is the longest-running publication devoted exclusively to the computational and mathematical properties of language and the design and analysis of natural language processing systems. This highly regarded quarterly offers university and industry linguists, computational linguists, artificial intelligence and machine learning investigators, cognitive scientists, speech specialists, and philosophers the latest information about the computational aspects of all the facets of research on language.