Bringing It All Together
Vsevolod Kapatsinski
Pub Date: 2018-07-06 | DOI: 10.7551/mitpress/9780262037860.003.0011
This chapter reviews the hypotheses about learning, processing, and mental representation advanced in the rest of this book, and brings them together to explain some recurrent patterns in language change, including changes involving phonetics, semantics, and morphology. It also discusses some general principles that recur throughout the book, including the functional value of redundancy (degeneracy), the ubiquity of evolution (variation and selection) as a mechanism of change, and domain-general learning mechanisms. Promising future directions and gaps in the literature are outlined. The chapter concludes that domain-general learning mechanisms provide valuable insights into the central issues of language acquisition and explanations for recurrent patterns in language change, which in turn explain why languages are the way they are, including not only language universals but also the emergence of specific typological rarities.
{"title":"Bringing It All Together","authors":"Vsevolod Kapatsinski","doi":"10.7551/mitpress/9780262037860.003.0011","DOIUrl":"https://doi.org/10.7551/mitpress/9780262037860.003.0011","url":null,"abstract":"This chapter reviews the hypotheses about learning, processing, and mental representation advanced in the rest of this book, and brings them together to explain some recurrent patterns in language change, including changes involving phonetics, semantics, and morphology. It also discusses some general principles that recur throughout the book, including the functional value of redundancy (degeneracy), the ubiquity of evolution (variation and selection) as a mechanism of change, and domain-general learning mechanisms. Promising future directions and gaps in the literature are outlined. The chapter concluded that domain-general learning mechanisms provide valuable insights into the central issues of language acquisition and explanations for recurrent patterns in language change, which in turn explain why languages are the way they are, including not only language universals but also the emergence of specific typological rarities.","PeriodicalId":142675,"journal":{"name":"Changing Minds Changing Tools","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117230180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From Associative Learning to Language Structure
Vsevolod Kapatsinski
Pub Date: 2018-07-06 | DOI: 10.7551/mitpress/9780262037860.003.0003
This chapter reviews sources of regularity in language, including maximizing (vs. probability matching) in decision making and positive feedback (rich-get-richer) loops within and between individuals. It argues that gradual learning can manifest itself in abrupt changes in behaviour, and that languages can look somewhat regular and systematic in everyday use despite being represented as networks of competing associations. The chapter then reviews the kinds of structures found in language, distinguishing between syntagmatic structure (sequencing, serial order), schematic structure (form-meaning mappings, constructions), and paradigmatic structure, which is argued to be necessary only for learning morphological paradigms. Two controversial issues are discussed. First, it is argued that associations in language are ‘bidirectional by default’ in that an experienced language learner tries to form associations in both directions but may fail in doing so. Second, learning is argued to often proceed in the general-to-specific direction, especially at the level of cues (predictors) as opposed to outputs (behaviours).
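The contrast between probability matching and maximizing is easy to make concrete. The sketch below is illustrative rather than drawn from the chapter; the 70/30 variant frequencies and the bare-bones decision rules are hypothetical stand-ins.

```python
import random

random.seed(1)
p_a = 0.7  # hypothetical input: variant "A" occurs 70% of the time

def probability_matcher():
    # Reproduce variants at their experienced rates: variation persists.
    return "A" if random.random() < p_a else "B"

def maximizer():
    # Always pick the most frequent variant: the output becomes categorical.
    return "A" if p_a >= 0.5 else "B"

n = 10_000
matched = sum(probability_matcher() == "A" for _ in range(n)) / n
maximized = sum(maximizer() == "A" for _ in range(n)) / n
print(f"matching:   ~{matched:.2f} 'A' responses")   # ~0.70
print(f"maximizing:  {maximized:.2f} 'A' responses")  # 1.00
```

A population of maximizers regularizes a variable input within a single generation; rich-get-richer feedback loops can push probability matchers in the same direction more gradually.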
{"title":"From Associative Learning to Language Structure","authors":"Vsevolod Kapatsinski","doi":"10.7551/mitpress/9780262037860.003.0003","DOIUrl":"https://doi.org/10.7551/mitpress/9780262037860.003.0003","url":null,"abstract":"This chapter reviews sources of regularity in language, including maximizing (vs. probability matching) in decision making and positive feedback (rich-get-richer) loops within and between individuals. It argues that gradual learning can manifest itself in abrupt changes in behaviour, and languages can look somewhat regular and systematic in everyday use despite being represented as networks of competing associations. The chapter then reviews the kinds of structures found in language, distinguishing between syntagmatic structure (sequencing, serial order), schematic structure (form-meaning mappings, constructions) and paradigmatic structure, which is argued to be necessary only for learning morphological paradigms. Two controversial issues are discussed. First, it is argued that associations in language are ‘bidirectional by default’ in that an experienced language learner tries to form associations in both directions but may fail in doing so. Second, learning is argued to often proceed in the general-to-specific directions, especially at the level of cues (predictors) as opposed to outputs (behaviours).","PeriodicalId":142675,"journal":{"name":"Changing Minds Changing Tools","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117243260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Schematic Structure, Hebbian Learning, and Semantic Change
Vsevolod Kapatsinski
Pub Date: 2018-07-06 | DOI: 10.7551/mitpress/9780262037860.003.0007
This chapter aims to explain some trends in semantic change with Hebbian learning. Semantic broadening observed in grammaticalization is argued to be seeded by speakers when they select frequent forms for production over less accessible competitors, even though the meaning they are trying to express is merely similar to the meanings the frequent form was experienced in. Extension of frequent forms in production co-exists with entrenchment (the suspicious coincidence effect) in comprehension. The entrenchment effect in comprehension rules out a habituation account of the semantic change. The form a speaker is most likely to extend to a new meaning in production is often the form they are least likely to map onto that meaning in comprehension. A range of Hebbian models of these processes is developed. All such models are shown to predict the comprehension-production dissociation under default assumptions regarding salience differences between absent and present cues. Certain aspects of the results are shown to be problematic for error-driven models (Rescorla-Wagner), at least if learning rate is fast enough to give rise to their signature blocking effect. Finally, an account of accessibility in an associative framework is developed.
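A minimal Hebbian sketch of the production/comprehension dissociation described above. This is not one of the chapter's models: the two-form lexicon and the co-occurrence counts are hypothetical, and the decision rules are deliberately bare.

```python
import numpy as np

forms = ["frequent", "rare"]
meanings = ["m1", "m2", "m3"]

# Hypothetical co-occurrence counts: the frequent form appears often, across
# two meanings; the rare form appears seldom, and only with m3.
counts = np.array([[20, 20, 0],   # frequent
                   [0,  0,  5]])  # rare

W = 0.01 * counts  # Hebbian weights: strengthened on each co-occurrence

# Production: a form's accessibility pools its associations across meanings,
# so when semantic fit underdetermines the choice, the frequent form gets
# extended to new, merely similar meanings.
print(dict(zip(forms, W.sum(axis=1))))  # frequent >> rare

# Comprehension: meaning probabilities are normalized within each form, so
# the rare form maps onto m3 with certainty (entrenchment), while the
# frequent form is ambiguous between m1 and m2.
print(dict(zip(forms, (W / W.sum(axis=1, keepdims=True)).round(2))))
```

The same weight matrix thus makes the frequent form the likeliest to be extended in production and the least reliable cue to any single meaning in comprehension.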
{"title":"Schematic Structure, Hebbian Learning, and Semantic Change","authors":"Vsevolod Kapatsinski","doi":"10.7551/mitpress/9780262037860.003.0007","DOIUrl":"https://doi.org/10.7551/mitpress/9780262037860.003.0007","url":null,"abstract":"This chapter aims to explain some trends in semantic change with Hebbian learning. Semantic broadening observed in grammaticalization is argued to be seeded by speakers when they select frequent forms for production over less accessible competitors, even though the meaning they are trying to express is merely similar to the meanings the frequent form was experienced in. Extension of frequent forms in production co-exists with entrenchment (the suspicious coincidence effect) in comprehension. The entrenchment effect in comprehension rules out a habituation account of the semantic change. The form a speaker is most likely to extend to a new meaning in production is often the form they are least likely to map onto that meaning in comprehension. A range of Hebbian models of these processes is developed. All such models are shown to predict the comprehension-production dissociation under default assumptions regarding salience differences between absent and present cues. Certain aspects of the results are shown to be problematic for error-driven models (Rescorla-Wagner), at least if learning rate is fast enough to give rise to their signature blocking effect. Finally, an account of accessibility in an associative framework is developed.","PeriodicalId":142675,"journal":{"name":"Changing Minds Changing Tools","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128158021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What Are the Nodes? Unitization and Configural Learning vs. Selective Attention
Vsevolod Kapatsinski
Pub Date: 2018-07-06 | DOI: 10.7551/mitpress/9780262037860.003.0004
This chapter introduces the debate between elemental and configural learning models. Configural models represent both a whole pattern and its parts as separate nodes, which are then both associable, i.e. available for wiring with other nodes. This necessitates a kind of hierarchical inference at the timescale of learning and motivates a dual-route approach at the timescale of processing. Some patterns of language change (semanticization and frequency-in-a-favourable-context effects) are argued to be attributable to hierarchical inference. The most prominent configural pattern in language is argued to be a superadditive interaction. However, such interactions are argued to often be unstable in comprehension due to selective attention and incremental processing. Selective attention causes the learner to focus on one part of a configuration over others. Incremental processing favors the initial part, which can then overshadow other parts and drive the recognition decision. Only with extensive experience can one learn to integrate multiple cues. When cues are integrated, the weaker cue can cue the outcome directly or can serve as an occasion-setter for the relationship between the outcome and the primary cue. The conditions under which occasion-setting arises in language acquisition are a promising area for future research.
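The superadditivity point can be stated in a few lines. A hypothetical illustration (the weights are arbitrary), contrasting purely elemental coding with a configural node for the compound AB:

```python
# Elemental coding: the compound AB inherits only the summed strength of its parts.
w = {"A": 0.3, "B": 0.3}
elemental_AB = w["A"] + w["B"]  # 0.6, strictly additive

# Configural coding: the whole pattern AB is also a node with its own
# associable weight, so the compound can exceed the sum of its parts.
w["AB"] = 0.5
configural_AB = w["A"] + w["B"] + w["AB"]  # 1.1, superadditive
print(elemental_AB, configural_AB)
```

Selective attention and incremental processing then determine whether the configural AB node or a single salient part ends up driving the response.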
{"title":"What Are the Nodes? Unitization and Configural Learning vs. Selective Attention","authors":"Vsevolod Kapatsinski","doi":"10.7551/mitpress/9780262037860.003.0004","DOIUrl":"https://doi.org/10.7551/mitpress/9780262037860.003.0004","url":null,"abstract":"This chapter introduces the debate between elemental and configural learning models. Configural models represent both a whole pattern and its parts as separate nodes, which are then both associable, i.e. available for wiring with other nodes. This necessitates a kind of hierarchical inference at the timescale of learning and motivates a dual-route approach at the timescale of processing. Some patterns of language change (semanticization and frequency-in-a-favourable-context effects) are argued to be attributable to hierarchical inference. The most prominent configural pattern in language is argued to be a superadditive interaction. However, such interactions are argued to often be unstable in comprehension due to selective attention and incremental processing. Selective attention causes the learner to focus on one part of a configuration over others. Incremental processing favors the initial part, which can then overshadow other parts and drive the recognition decision. Only with extensive experience, can one can learn to integrate multiple cues. When cues are integrated, the weaker cue can cue the outcome directly or can serve as an occasion-setter to the relationship between the outcome and the primary cue. The conditions under which occasion-setting arises in language acquisition is a promising area for future research.","PeriodicalId":142675,"journal":{"name":"Changing Minds Changing Tools","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129159372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Interplay of Syntagmatic, Schematic, and Paradigmatic Structure
Vsevolod Kapatsinski
Pub Date: 2018-07-06 | DOI: 10.7551/mitpress/9780262037860.003.0009
This chapter is a step towards developing an associationist account of productive morphology. Specifically, the aim is to address the paradigm cell filling problem: how speakers produce novel forms of words they know, a problem often studied using elicited production. Learning is assumed to follow the Rescorla-Wagner rule. The model is applied to miniature artificial language learning data from several experiments by the author. Paradigmatic and syntagmatic associations, together with an operation of copying an activated memory representation into the production plan, are argued to be necessary to account for the full pattern of results. Furthermore, the learning rate must be low enough for the model not to fall prey to accidentally exceptionless generalizations. At these learning rates, an error-driven model closely resembles a Hebbian model. Limitations of the model are identified, including the use of the strict teacher signal in the Rescorla-Wagner learning rule.
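The Rescorla-Wagner rule itself is standard: on each trial, every present cue's weight changes by ΔV = αβ(λ − ΣV), where ΣV sums the weights of the cues present on the trial and λ is the 'strict teacher' (1 when the outcome occurs, 0 when it does not). The simulation below is a generic illustration with a hypothetical blocking design, not the chapter's morphology simulations.

```python
def rescorla_wagner(trials, lr=0.1, lam=1.0, n_epochs=50):
    """Train cue->outcome weights with the Rescorla-Wagner rule.

    trials: list of (present_cues, outcome_occurred) pairs.
    lr: combined salience/learning-rate parameter (alpha * beta).
    lam: the asymptote, i.e. the strict teacher signal.
    """
    V = {}
    for _ in range(n_epochs):
        for cues, outcome in trials:
            prediction = sum(V.get(c, 0.0) for c in cues)
            error = (lam if outcome else 0.0) - prediction
            for c in cues:  # only cues present on the trial are updated
                V[c] = V.get(c, 0.0) + lr * error
    return V

# Hypothetical forward-blocking design: A->outcome, then AB->outcome.
trials = [(["A"], True)] * 10 + [(["A", "B"], True)] * 10
V = rescorla_wagner(trials)
print({c: round(v, 2) for c, v in V.items()})  # A near 1.0; B blocked near 0.0
```

With a high learning rate the model races to asymptote and shows this signature blocking effect; at the low learning rates the chapter argues for, weights track co-occurrence more gradually and the model's behaviour approaches that of a Hebbian learner.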
{"title":"The Interplay of Syntagmatic, Schematic, and Paradigmatic Structure","authors":"Vsevolod Kapatsinski","doi":"10.7551/mitpress/9780262037860.003.0009","DOIUrl":"https://doi.org/10.7551/mitpress/9780262037860.003.0009","url":null,"abstract":"This chapter is a step towards developing an associationist framework for an account of productive morphology. Specifically, the aim is to address the paradigm cell filling problem, how speakers produce novel forms of words they know, often studied using elicited production. Learning is assumed to follow the Rescorla-Wagner rule. The model is applied to miniature artificial language learning data from several experiments by the author. Paradigmatic and syntagmatic associations and an operation, copying of an activated memory representation into the production plan, are argued to be necessary to account for the full pattern of results. Furthermore, learning rate must be low enough for the model not to fall prey to accidentally exceptionless generalizations. At these learning rates, an error-driven model closely resembles a Hebbian model. Limitations of the model are identified, including the use of the strict teacher signal in the Rescorla-Wagner learning rule.","PeriodicalId":142675,"journal":{"name":"Changing Minds Changing Tools","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128967506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Web in the Spider: Associative Learning Theory
Vsevolod Kapatsinski
Pub Date: 2018-07-06 | DOI: 10.7551/mitpress/9780262037860.003.0002
This chapter provides an overview of basic learning mechanisms proposed within associationist learning theory: error-driven learning, Hebbian learning, and chunking. It takes the complementary learning systems perspective, which is contrasted with a Bayesian perspective in which the learner is an ‘ideal observer’. The discussion focuses on two issues. First, what is a learning mechanism? It is argued that two brain areas implement two different learning mechanisms if they would learn different things from the same input. The available data from neuroscience suggest that the brain contains multiple learning mechanisms in this sense, but that each learning mechanism is domain-general in applying to many different types of input. Second, what are the sources of bias that influence what a learner acquires from a given experience? Bayesian theorists have distinguished between inductive bias, implemented in prior beliefs, and channel bias, implemented in the translation from input to intake and from output to behaviour. Given the intake and prior beliefs, belief updating in Bayesian models is unbiased, following Bayes' theorem. However, biased belief updating may be another source of bias in biological learning mechanisms.
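One concrete way to see the associationist/Bayesian contrast: a Bayesian learner carries a full belief distribution, and therefore confidence, while a simple error-driven learner carries only a point estimate. A beta-binomial sketch with hypothetical observations (not from the chapter):

```python
# Estimating the probability of a binary outcome from the same data, two ways.
a, b = 1.0, 1.0          # Bayesian learner: uniform Beta(1, 1) prior
estimate, lr = 0.5, 0.1  # error-driven learner: point estimate + delta rule

for outcome in [1, 1, 0, 1, 1, 1, 0, 1]:   # hypothetical observations
    a, b = a + outcome, b + (1 - outcome)  # exact conjugate Bayesian update
    estimate += lr * (outcome - estimate)  # error-driven nudge, no confidence

mean = a / (a + b)
var = (a * b) / ((a + b) ** 2 * (a + b + 1))  # variance shrinks with data
print(f"Bayes: mean={mean:.2f}, var={var:.4f} | delta rule: {estimate:.2f}")
```

In these terms, channel bias acts before the update, on what counts as the observation, whereas biased belief updating would replace the exact conjugate step with a distorted one.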
{"title":"The Web in the Spider: Associative Learning Theory","authors":"Vsevolod Kapatsinski","doi":"10.7551/mitpress/9780262037860.003.0002","DOIUrl":"https://doi.org/10.7551/mitpress/9780262037860.003.0002","url":null,"abstract":"This chapter provides an overview of basic learning mechanisms proposed within associationist learning theory: error-driven learning, Hebbian learning, and chunking. It takes the complementary learning systems perspective, which is contrasted with a Bayesian perspective in which the learner is an ‘ideal observer’. The discussion focuses on two issues. First, what is a learning mechanism? It is argued that two brain areas implement two different learning mechanisms if they would learn different things from the same input. The available data from neuroscience suggests that the brain contains multiple learning mechanisms in this sense but each learning mechanism is domain-general in applying to many different types of input. Second, what are the sources of bias that influence what a learner acquires from a certain experience? Bayesian theorists have distinguished between inductive bias implemented in prior beliefs and channel bias implemented in the translation from input to intake and output to behaviour. Given the intake and prior beliefs, belief updating in Bayesian models is unbiased, following Bayes Theorem. However, biased belief updating may be another source of bias in biological learning mechanisms.","PeriodicalId":142675,"journal":{"name":"Changing Minds Changing Tools","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126058706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning Paradigmatic Structure
Vsevolod Kapatsinski
Pub Date: 2018-07-06 | DOI: 10.7551/mitpress/9780262037860.003.0008
This chapter reviews research on the acquisition of paradigmatic structure (including research on canonical antonyms, morphological paradigms, associative inference, grammatical gender, and noun classes). It discusses the second-order schema hypothesis, which views paradigmatic structure as mappings between constructions. New evidence from miniature artificial language learning of morphology is reported, which suggests that paradigmatic mappings involve paradigmatic associations between corresponding structures as well as an operation of copying an activated representation into the production plan. Producing a novel form of a known word is argued to involve selecting a prosodic template and filling it out with segmental material using form-meaning connections, syntagmatic and paradigmatic form-form connections, and copying, which is itself an outcome cued by both semantics and phonology.
{"title":"Learning Paradigmatic Structure","authors":"Vsevolod Kapatsinski","doi":"10.7551/mitpress/9780262037860.003.0008","DOIUrl":"https://doi.org/10.7551/mitpress/9780262037860.003.0008","url":null,"abstract":"This chapter reviews research on the acquisition of paradigmatic structure (including research on canonical antonyms, morphological paradigms, associative inference, grammatical gender and noun classes). It discusses the second-order schema hypothesis, which views paradigmatic structure as mappings between constructions. New evidence from miniature artificial language learning of morphology is reported, which suggests that paradigmatic mappings involve paradigmatic associations between corresponding structures as well as an operation, copying an activated representation into the production plan. Producing a novel form of a known word is argued to involve selecting a prosodic template and filling it out with segmental material using form-meaning connections, syntagmatic and paradigmatic form-form connections and copying, which is itself an outcome cued by both semantics and phonology.","PeriodicalId":142675,"journal":{"name":"Changing Minds Changing Tools","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130624904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayes, Rationality, and Rashionality
Vsevolod Kapatsinski
Pub Date: 2018-07-06 | DOI: 10.7551/mitpress/9780262037860.003.0005
This chapter reviews the main ideas of Bayesian approaches to learning and compares them to associationist approaches. It reviews and discusses Bayesian criticisms of associationist learning theory. In particular, Bayesian theorists have argued that associative models fail to represent confidence in a belief and to update confidence with experience. The chapter discusses whether updating confidence is necessary to capture entrenchment, suspicious coincidence, and category variability effects. The evidence is argued to be somewhat inconclusive at present, as simulated annealing can often suffice. Furthermore, when the data do suggest confidence updating, the updating they suggest may be non-normative, contrary to the Bayesian notion of the learner as an ideal observer. Following Kruschke, learned selective attention is argued to explain many ways in which human learning departs from that of the ideal observer, most crucially including the weakness of backward relative to forward blocking. Other departures from the ideal observer may be due to biological organisms taking into account factors other than belief accuracy. Finally, generative and discriminative learning models are compared. Generative models are argued to be particularly likely when active learning is a possibility and when reversing the observed mappings may be required.
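The simulated-annealing point lends itself to a small illustration: an associative learner whose learning rate decays with experience becomes progressively harder to move, mimicking growing confidence without representing it. A hypothetical sketch (the evidence stream and the schedule are arbitrary):

```python
# Annealed delta rule: with a 1/n learning-rate schedule the estimate tracks
# the running mean, so late conflicting evidence moves it only slightly --
# an entrenchment-like effect with no explicit confidence representation.
estimate, n = 0.5, 0
stream = [1] * 20 + [0] * 5  # consistent evidence, then conflicting evidence

for outcome in stream:
    n += 1
    estimate += (1.0 / n) * (outcome - estimate)

print(round(estimate, 2))  # 0.8: the 20 early observations dominate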
{"title":"Bayes, Rationality, and Rashionality","authors":"Vsevolod Kapatsinski","doi":"10.7551/mitpress/9780262037860.003.0005","DOIUrl":"https://doi.org/10.7551/mitpress/9780262037860.003.0005","url":null,"abstract":"This chapter reviews the main ideas of Bayesian approaches to learning, compared to associationist approaches. It reviews and discusses Bayesian criticisms of associationist learning theory. In particular, Bayesian theorists have argued that associative models fail to represent confidence in belief and update confidence with experience. The chapter discusses whether updating confidence is necessary to capture entrenchment, suspicious coincidence, and category variability effects. The evidence is argued to be somewhat inconclusive at present, as simulated annealing can often suffice. Furthermore, when confidence updating is suggested by the data, the updating suggested by the data may be non-normative, contrary to the Bayesian notion of the learner as an ideal observer. Following Kruschke, learned selective attention is argued to explain many ways in which human learning departs from that of the ideal observer, most crucially including the weakness of backward relative to forward blocking. Other departures from the ideal observer may be due to biological organisms taking into account factors other than belief accuracy. Finally, generative and discriminative learning models are compared. Generative models are argued to be particularly likely when active learning is a possibility and when reversing the observed mappings may be required.","PeriodicalId":142675,"journal":{"name":"Changing Minds Changing Tools","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125352474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatization and Sound Change
Vsevolod Kapatsinski
Pub Date: 2018-07-06 | DOI: 10.7551/mitpress/9780262037860.003.0010
This chapter reviews research on automatization, both in the domain of action execution and in the domain of perception/comprehension. In comprehension, automatization is argued to lead to an inability to direct conscious attention to frequently used intermediate steps on the way from sound to meaning (leading to findings such as the missing letter effect). As a result, the cues we use to access meaning may be the cues we are least aware of. Chain and hierarchical representations of action sequences are compared. The chain model is argued to be under-appreciated as an execution-level representation for well-practiced sequences. Automatization of a sequence repeated in a fixed order is argued to turn a hierarchy into a chain. Execution-level representations for familiar words are argued to be networks of interlinked chains (connected through propagation filters) rather than hierarchies. Much of sound change is argued to result from the automatization of word execution throughout life, tempered by reinforcement learning (selection by consequences).
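The chain idea reduces to a simple data structure: each unit directly cues its successor, so execution traverses links rather than descending a plan hierarchy. A hypothetical sketch (the units are arbitrary stand-ins for execution-level gestures, and propagation filters are omitted):

```python
# A chained execution-level plan: each unit points at the next one.
chain = {"k": "ae", "ae": "t", "t": None}  # a hypothetical plan for "cat"

def execute(chain, start):
    unit = start
    while unit is not None:
        yield unit          # produce the current unit
        unit = chain[unit]  # the link itself selects what comes next

print(list(execute(chain, "k")))  # ['k', 'ae', 't']
```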
{"title":"Automatization and Sound Change","authors":"Vsevolod Kapatsinski","doi":"10.7551/mitpress/9780262037860.003.0010","DOIUrl":"https://doi.org/10.7551/mitpress/9780262037860.003.0010","url":null,"abstract":"This chapter reviews research on automatization, both in the domain of action execution and in the domain of perception / comprehension. In comprehension, automatization is argued to lead to inability to direct conscious attention towards frequently used intermediate steps on the way from sound to meaning (leading to findings such as the missing letter effect). As a result, the cues we use to access meaning may be the cues we are least aware of. Chain and hierarchical representations of action sequences are compared. The chain model is argued to be under-appreciated as an execution-level representation for well-practiced sequences. Automatization of a sequence repeated in a fixed order is argued to turn a hierarchy into a chain. Execution-level representations for familiar words are argued to be networks of interlinked chains (connected through propagation filters) rather than hierarchies. Much of sound change is argued to be the result of automatization of word execution, throughout life, tempered by reinforcement learning (selection by consequences).","PeriodicalId":142675,"journal":{"name":"Changing Minds Changing Tools","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124039791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Continuous Dimensions and Distributional Learning
Vsevolod Kapatsinski
Pub Date: 2018-07-06 | DOI: 10.7551/mitpress/9780262037860.003.0006
This chapter describes the evidence for the existence of dimensions, focusing on the difference in difficulty between attention shifts to a previously relevant vs. a previously irrelevant dimension. It discusses the representation of continuous dimensions in the associationist framework, including population coding and thermometer coding, as well as the idea that learning can adjust the breadth of receptive fields. In phonetics, continuous dimensions have been argued to be split into categories via distributional learning. This chapter reviews what we know about distributional learning and argues that it relies on several distinct learning mechanisms, including error-driven learning at two distinct levels and the building of a generative model of the speaker. The emergence of perceptual equivalence regions from error-driven learning is discussed, and implications for language change are briefly noted with an iterated learning simulation.
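A minimal distributional-learning sketch in the spirit of the above, though not the chapter's multi-mechanism model: two category centers compete for tokens drawn from a bimodal distribution along a single continuous dimension (a hypothetical VOT-like axis), and an error-driven update pulls each winner toward its cluster.

```python
import random

random.seed(0)

# Hypothetical bimodal input along a VOT-like continuum (two phonetic categories).
tokens = ([random.gauss(10, 5) for _ in range(200)] +
          [random.gauss(60, 5) for _ in range(200)])
random.shuffle(tokens)

centers = [30.0, 40.0]  # arbitrary initial guesses for two categories
lr = 0.05

for x in tokens:
    winner = min((0, 1), key=lambda i: abs(x - centers[i]))  # closest center
    centers[winner] += lr * (x - centers[winner])            # error-driven pull

print([round(c) for c in centers])  # ends near the two modes, ~10 and ~60
```

Running such a learner's output through a new learner, generation after generation, is the essence of the iterated learning setup the chapter uses to connect distributional learning to language change.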
{"title":"Continuous Dimensions and Distributional Learning","authors":"Vsevolod Kapatsinski","doi":"10.7551/mitpress/9780262037860.003.0006","DOIUrl":"https://doi.org/10.7551/mitpress/9780262037860.003.0006","url":null,"abstract":"This chapter describes the evidence for the existence of dimensions, focusing on the difference between the difficulty of attention shifts to a previously relevant vs. irrelevant dimension. It discusses the representation of continuous dimensions in the associationist framework. including population coding and thermometer coding, as well as the idea that learning can adjust the breadth of adjustable receptive fields. In phonetics, continuous dimensions have been argued to be split into categories via distributional learning. This chapter reviews what we know about distributional learning and argues that it relies on several distinct learning mechanisms, including error-driven learning at two distinct levels and building a generative model of the speaker. The emergence of perceptual equivalence regions from error-driven learning is discussed, and implications for language change briefly noted with an iterated learning simulation.","PeriodicalId":142675,"journal":{"name":"Changing Minds Changing Tools","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130224361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}