{"title":"The role of long-distance phonological processes in spoken word recognition: A preliminary investigation","authors":"Phillip Burness, Kevin McMullin, T. Zamuner","doi":"10.33137/twpl.v41i1.32756","DOIUrl":null,"url":null,"abstract":"Previous work has demonstrated that during spoken word recognition, listeners can use a variety of cues to anticipate an upcoming sound before the sound is encountered. However, this vein of research has largely focused on local phenomena that hold between adjacent sounds. In order to fill this gap, we combine the Visual World Paradigm with an Artificial Language Learning methodology to investigate whether knowledge of a long-distance pattern of sibilant harmony can be utilized during spoken word recognition. The hypothesis was that participants trained on sibilant harmony could more quickly identify a target word from among a set of competitors when that target contained a prefix which had undergone regressive sibilant harmony. Participants tended to behave as expected for the subset of items that they saw during training, but the effect did not reach statistical significance and did not extend to novel items. This suggests that participants did not learn the rule of sibilant harmony and may have been memorizing which base went with which alternant. Failure to learn the pattern may have been due to certain aspects of the design, which will be addressed in future iterations of the experiment.","PeriodicalId":442006,"journal":{"name":"Toronto Working Papers in Linguistics","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Toronto Working Papers in Linguistics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.33137/twpl.v41i1.32756","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Previous work has demonstrated that during spoken word recognition, listeners can use a variety of cues to anticipate an upcoming sound before it is encountered. However, this vein of research has largely focused on local phenomena holding between adjacent sounds. To address this gap, we combine the Visual World Paradigm with an Artificial Language Learning methodology to investigate whether knowledge of a long-distance pattern of sibilant harmony can be used during spoken word recognition. The hypothesis was that participants trained on sibilant harmony would more quickly identify a target word from among a set of competitors when that target contained a prefix that had undergone regressive sibilant harmony. Participants tended to behave as expected for the subset of items they had seen during training, but the effect did not reach statistical significance and did not extend to novel items. This suggests that participants did not learn the rule of sibilant harmony and may instead have memorized which base went with which alternant. Failure to learn the pattern may have been due to certain aspects of the design, which will be addressed in future iterations of the experiment.