Learning orthographic and phonological representations in models of monosyllabic and bisyllabic naming

Daragh E. Sibley, C. Kello, Mark S. Seidenberg
The European Journal of Cognitive Psychology, 27(1), 650–668 (published online 18 February 2010)
DOI: 10.1080/09541440903080583
Most current models of word naming are restricted to processing monosyllabic words and pseudowords. This limitation stems from the difficulty of representing orthographic and phonological codes for words that vary substantially in length. Sibley, Kello, Plaut, and Elman (2008) described an extension of the simple recurrent network architecture, called the sequence encoder, that learned orthographic and phonological representations of variable-length words. The present research explored the use of sequence encoders in models of monosyllabic and bisyllabic word naming. These models perform comparably to existing models in word and pseudoword naming accuracy, and they account for several naming latency phenomena. Although the models do not address all naming phenomena, the results suggest that sequence encoders can learn orthographic and phonological representations, making it easier to create models that scale up to larger vocabularies while accounting for behavioural data.
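To make the core idea concrete, here is a minimal, hypothetical sketch of the key property the abstract attributes to a sequence encoder: an Elman-style simple recurrent network reads a word letter by letter, and its final hidden state serves as a fixed-width representation regardless of word length. All layer sizes, weights, and function names below are illustrative assumptions, not details from the paper (which involves trained encoder/decoder networks).

```python
import numpy as np

# Illustrative sketch only: an untrained Elman-style recurrent layer that
# maps a variable-length letter string onto a fixed-width vector, the kind
# of "orthographic representation" a sequence encoder would learn.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
VOCAB = len(ALPHABET)
HIDDEN = 32  # width of the fixed-size code (arbitrary choice)

rng = np.random.default_rng(0)
W_in = rng.normal(0.0, 0.1, (HIDDEN, VOCAB))    # input -> hidden weights
W_rec = rng.normal(0.0, 0.1, (HIDDEN, HIDDEN))  # hidden -> hidden weights

def one_hot(ch):
    """Encode a single lowercase letter as a one-hot vector."""
    v = np.zeros(VOCAB)
    v[ALPHABET.index(ch)] = 1.0
    return v

def encode(word):
    """Run the recurrent layer over the letters; the final hidden state
    is a fixed-width code, whatever the word's length."""
    h = np.zeros(HIDDEN)
    for ch in word:
        h = np.tanh(W_in @ one_hot(ch) + W_rec @ h)
    return h

# Monosyllabic and bisyllabic words yield codes of identical shape,
# which is what lets downstream naming layers stay fixed-size.
code_cat = encode("cat")        # 3 letters, one syllable
code_rabbit = encode("rabbit")  # 6 letters, two syllables
print(code_cat.shape, code_rabbit.shape)
```

In the actual sequence-encoder architecture this fixed-width state would be trained (together with a decoder that unrolls it back into a letter or phoneme sequence) so that the code supports accurate reconstruction; the sketch shows only the length-invariance property that makes such models scale across word lengths.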