
Changing Minds Changing Tools: Latest Publications

Bringing It All Together
Pub Date : 2018-07-06 DOI: 10.7551/mitpress/9780262037860.003.0011
Vsevolod Kapatsinski
This chapter reviews the hypotheses about learning, processing, and mental representation advanced in the rest of this book, and brings them together to explain some recurrent patterns in language change, including changes involving phonetics, semantics, and morphology. It also discusses some general principles that recur throughout the book, including the functional value of redundancy (degeneracy), the ubiquity of evolution (variation and selection) as a mechanism of change, and domain-general learning mechanisms. Promising future directions and gaps in the literature are outlined. The chapter concludes that domain-general learning mechanisms provide valuable insights into the central issues of language acquisition and explanations for recurrent patterns in language change, which in turn explain why languages are the way they are, including not only language universals but also the emergence of specific typological rarities.
Citations: 0
From Associative Learning to Language Structure
Pub Date : 2018-07-06 DOI: 10.7551/mitpress/9780262037860.003.0003
Vsevolod Kapatsinski
This chapter reviews sources of regularity in language, including maximizing (vs. probability matching) in decision making and positive feedback (rich-get-richer) loops within and between individuals. It argues that gradual learning can manifest itself in abrupt changes in behaviour, and languages can look somewhat regular and systematic in everyday use despite being represented as networks of competing associations. The chapter then reviews the kinds of structures found in language, distinguishing between syntagmatic structure (sequencing, serial order), schematic structure (form-meaning mappings, constructions) and paradigmatic structure, which is argued to be necessary only for learning morphological paradigms. Two controversial issues are discussed. First, it is argued that associations in language are ‘bidirectional by default’ in that an experienced language learner tries to form associations in both directions but may fail in doing so. Second, learning is argued to often proceed in the general-to-specific direction, especially at the level of cues (predictors) as opposed to outputs (behaviours).
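The maximizing-versus-probability-matching contrast and the rich-get-richer loop described in this abstract can be sketched with a toy two-variant urn simulation (a hypothetical illustration under simple assumptions, not a model from the chapter):

```python
import random

def variant_competition(steps, maximize, seed=0):
    """Rich-get-richer sketch: each use of a variant adds to its count,
    making it more likely (or certain, under maximizing) to be reused."""
    rng = random.Random(seed)
    counts = {"A": 1, "B": 1}  # two competing variants, initially tied
    for _ in range(steps):
        if maximize:
            # maximizing: always produce the currently dominant variant
            choice = max(counts, key=counts.get)
        else:
            # probability matching: produce variants in proportion to experience
            total = counts["A"] + counts["B"]
            choice = "A" if rng.random() < counts["A"] / total else "B"
        counts[choice] += 1
    return counts
```

Under maximizing, the first tie-break snowballs and one variant takes over completely; under probability matching, variation persists. This is a minimal illustration of how a gradual positive-feedback loop can yield abruptly categorical behaviour.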
Citations: 0
Schematic Structure, Hebbian Learning, and Semantic Change
Pub Date : 2018-07-06 DOI: 10.7551/mitpress/9780262037860.003.0007
Vsevolod Kapatsinski
This chapter aims to explain some trends in semantic change with Hebbian learning. Semantic broadening observed in grammaticalization is argued to be seeded by speakers when they select frequent forms for production over less accessible competitors, even though the meaning they are trying to express is merely similar to the meanings the frequent form was experienced in. Extension of frequent forms in production co-exists with entrenchment (the suspicious coincidence effect) in comprehension. The entrenchment effect in comprehension rules out a habituation account of the semantic change. The form a speaker is most likely to extend to a new meaning in production is often the form they are least likely to map onto that meaning in comprehension. A range of Hebbian models of these processes is developed. All such models are shown to predict the comprehension-production dissociation under default assumptions regarding salience differences between absent and present cues. Certain aspects of the results are shown to be problematic for error-driven models (Rescorla-Wagner), at least if learning rate is fast enough to give rise to their signature blocking effect. Finally, an account of accessibility in an associative framework is developed.
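The contrast between Hebbian and error-driven (Rescorla-Wagner) updating that this abstract invokes, including the signature blocking effect, can be sketched as follows (a generic textbook implementation, assumed rather than taken from the chapter):

```python
def rescorla_wagner(trials, n_cues, alpha=0.3):
    """Error-driven update: the cues present on a trial share one
    prediction error, so they compete for associative strength."""
    w = [0.0] * n_cues
    for cues, outcome in trials:
        error = outcome - sum(w[i] for i in cues)
        for i in cues:
            w[i] += alpha * error
    return w

def hebbian(trials, n_cues, alpha=0.3):
    """Hebbian update: each present cue is strengthened independently
    whenever it co-occurs with the outcome; no competition, no blocking."""
    w = [0.0] * n_cues
    for cues, outcome in trials:
        for i in cues:
            w[i] += alpha * outcome
    return w

# Blocking design: pretrain cue 0 alone, then pair cues 0 and 1 in compound.
trials = [([0], 1)] * 50 + [([0, 1], 1)] * 50
```

With a fast learning rate, the Rescorla-Wagner learner leaves the redundant cue 1 near zero (blocking), whereas the Hebbian learner strengthens cue 1 on every co-occurrence regardless.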
Citations: 0
What Are the Nodes? Unitization and Configural Learning vs. Selective Attention
Pub Date : 2018-07-06 DOI: 10.7551/mitpress/9780262037860.003.0004
Vsevolod Kapatsinski
This chapter introduces the debate between elemental and configural learning models. Configural models represent both a whole pattern and its parts as separate nodes, which are then both associable, i.e. available for wiring with other nodes. This necessitates a kind of hierarchical inference at the timescale of learning and motivates a dual-route approach at the timescale of processing. Some patterns of language change (semanticization and frequency-in-a-favourable-context effects) are argued to be attributable to hierarchical inference. The most prominent configural pattern in language is argued to be a superadditive interaction. However, such interactions are argued to often be unstable in comprehension due to selective attention and incremental processing. Selective attention causes the learner to focus on one part of a configuration over others. Incremental processing favors the initial part, which can then overshadow other parts and drive the recognition decision. Only with extensive experience can one learn to integrate multiple cues. When cues are integrated, the weaker cue can cue the outcome directly or can serve as an occasion-setter to the relationship between the outcome and the primary cue. The conditions under which occasion-setting arises in language acquisition are a promising area for future research.
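The elemental-versus-configural contrast can be made concrete with negative patterning, an XOR-like design (this is a generic illustration; the delta-rule trainer and trial structure are assumptions, not the chapter's own code):

```python
def train_lms(trials, n_cues, alpha=0.1, epochs=500):
    """Simple error-driven (delta-rule) trainer over the listed trials."""
    w = [0.0] * n_cues
    for _ in range(epochs):
        for cues, outcome in trials:
            error = outcome - sum(w[i] for i in cues)
            for i in cues:
                w[i] += alpha * error
    return w

# Negative patterning: A -> 1, B -> 1, but the compound AB -> 0.
elemental = [([0], 1), ([1], 1), ([0, 1], 0)]
# Same design with a configural node (index 2) active only for the compound:
configural = [([0], 1), ([1], 1), ([0, 1, 2], 0)]

w_e = train_lms(elemental, 2)
w_c = train_lms(configural, 3)
# Elementally, the compound prediction stalls well above the trained 0;
# the configural node absorbs the discrepancy and the pattern is fit exactly.
```

Treating the compound as a node of its own is what makes the superadditive (here, sign-reversing) interaction learnable at all.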
Citations: 0
The Interplay of Syntagmatic, Schematic, and Paradigmatic Structure
Pub Date : 2018-07-06 DOI: 10.7551/mitpress/9780262037860.003.0009
Vsevolod Kapatsinski
This chapter is a step towards developing an associationist framework for an account of productive morphology. Specifically, the aim is to address the paradigm cell filling problem: how speakers produce novel forms of words they know, often studied using elicited production. Learning is assumed to follow the Rescorla-Wagner rule. The model is applied to miniature artificial language learning data from several experiments by the author. Paradigmatic and syntagmatic associations and an operation, copying of an activated memory representation into the production plan, are argued to be necessary to account for the full pattern of results. Furthermore, learning rate must be low enough for the model not to fall prey to accidentally exceptionless generalizations. At these learning rates, an error-driven model closely resembles a Hebbian model. Limitations of the model are identified, including the use of the strict teacher signal in the Rescorla-Wagner learning rule.
Citations: 0
The Web in the Spider: Associative Learning Theory
Pub Date : 2018-07-06 DOI: 10.7551/mitpress/9780262037860.003.0002
Vsevolod Kapatsinski
This chapter provides an overview of basic learning mechanisms proposed within associationist learning theory: error-driven learning, Hebbian learning, and chunking. It takes the complementary learning systems perspective, which is contrasted with a Bayesian perspective in which the learner is an ‘ideal observer’. The discussion focuses on two issues. First, what is a learning mechanism? It is argued that two brain areas implement two different learning mechanisms if they would learn different things from the same input. The available data from neuroscience suggest that the brain contains multiple learning mechanisms in this sense, but each learning mechanism is domain-general in applying to many different types of input. Second, what are the sources of bias that influence what a learner acquires from a certain experience? Bayesian theorists have distinguished between inductive bias implemented in prior beliefs and channel bias implemented in the translation from input to intake and output to behaviour. Given the intake and prior beliefs, belief updating in Bayesian models is unbiased, following Bayes' theorem.
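The ideal-observer view of belief updating, in which beliefs carry confidence as well as strength, can be illustrated with a conjugate Beta-Binomial update (a standard textbook example, offered here as an assumed illustration rather than the chapter's own model):

```python
def beta_update(a, b, successes, failures):
    """Bayes' theorem for a Beta(a, b) prior over a binomial rate:
    with a conjugate prior, updating reduces to adding observed
    counts to the prior pseudo-counts."""
    return a + successes, b + failures

def beta_mean(a, b):
    """Posterior mean of the rate."""
    return a / (a + b)

def beta_variance(a, b):
    """Posterior variance: shrinks as evidence accumulates,
    i.e. confidence rises with experience."""
    n = a + b
    return a * b / (n * n * (n + 1))

# Weak uniform prior Beta(1, 1), then 8 'successes' and 2 'failures':
a, b = beta_update(1, 1, 8, 2)
```

The posterior mean moves toward the data (0.75 here) while the variance drops below the prior's: the learner tracks confidence as well as strength, which is the dimension a single point-valued associative weight omits.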
Citations: 0
Learning Paradigmatic Structure
Pub Date : 2018-07-06 DOI: 10.7551/mitpress/9780262037860.003.0008
Vsevolod Kapatsinski
This chapter reviews research on the acquisition of paradigmatic structure (including research on canonical antonyms, morphological paradigms, associative inference, grammatical gender and noun classes). It discusses the second-order schema hypothesis, which views paradigmatic structure as mappings between constructions. New evidence from miniature artificial language learning of morphology is reported, which suggests that paradigmatic mappings involve paradigmatic associations between corresponding structures as well as an operation, copying an activated representation into the production plan. Producing a novel form of a known word is argued to involve selecting a prosodic template and filling it out with segmental material using form-meaning connections, syntagmatic and paradigmatic form-form connections and copying, which is itself an outcome cued by both semantics and phonology.
Citations: 0
Bayes, Rationality, and Rashionality
Pub Date : 2018-07-06 DOI: 10.7551/mitpress/9780262037860.003.0005
Vsevolod Kapatsinski
This chapter reviews the main ideas of Bayesian approaches to learning, compared to associationist approaches. It reviews and discusses Bayesian criticisms of associationist learning theory. In particular, Bayesian theorists have argued that associative models fail to represent confidence in belief and update confidence with experience. The chapter discusses whether updating confidence is necessary to capture entrenchment, suspicious coincidence, and category variability effects. The evidence is argued to be somewhat inconclusive at present, as simulated annealing can often suffice. Furthermore, when confidence updating is suggested by the data, the updating suggested by the data may be non-normative, contrary to the Bayesian notion of the learner as an ideal observer. Following Kruschke, learned selective attention is argued to explain many ways in which human learning departs from that of the ideal observer, most crucially including the weakness of backward relative to forward blocking. Other departures from the ideal observer may be due to biological organisms taking into account factors other than belief accuracy. Finally, generative and discriminative learning models are compared. Generative models are argued to be particularly likely when active learning is a possibility and when reversing the observed mappings may be required.
Citations: 0
Automatization and Sound Change
Pub Date : 2018-07-06 DOI: 10.7551/mitpress/9780262037860.003.0010
Vsevolod Kapatsinski
This chapter reviews research on automatization, both in the domain of action execution and in the domain of perception/comprehension. In comprehension, automatization is argued to lead to an inability to direct conscious attention towards frequently used intermediate steps on the way from sound to meaning (leading to findings such as the missing letter effect). As a result, the cues we use to access meaning may be the cues we are least aware of. Chain and hierarchical representations of action sequences are compared. The chain model is argued to be under-appreciated as an execution-level representation for well-practiced sequences. Automatization of a sequence repeated in a fixed order is argued to turn a hierarchy into a chain. Execution-level representations for familiar words are argued to be networks of interlinked chains (connected through propagation filters) rather than hierarchies. Much of sound change is argued to be the result of automatization of word execution, throughout life, tempered by reinforcement learning (selection by consequences).
Citations: 0
Continuous Dimensions and Distributional Learning
Pub Date : 2018-07-06 DOI: 10.7551/mitpress/9780262037860.003.0006
Vsevolod Kapatsinski
This chapter describes the evidence for the existence of dimensions, focusing on the difference between the difficulty of attention shifts to a previously relevant vs. irrelevant dimension. It discusses the representation of continuous dimensions in the associationist framework, including population coding and thermometer coding, as well as the idea that learning can adjust the breadth of adjustable receptive fields. In phonetics, continuous dimensions have been argued to be split into categories via distributional learning. This chapter reviews what we know about distributional learning and argues that it relies on several distinct learning mechanisms, including error-driven learning at two distinct levels and building a generative model of the speaker. The emergence of perceptual equivalence regions from error-driven learning is discussed, and implications for language change are briefly noted with an iterated learning simulation.
{"title":"Continuous Dimensions and Distributional Learning","authors":"Vsevolod Kapatsinski","doi":"10.7551/mitpress/9780262037860.003.0006","DOIUrl":"https://doi.org/10.7551/mitpress/9780262037860.003.0006","url":null,"abstract":"This chapter describes the evidence for the existence of dimensions, focusing on the difference between the difficulty of attention shifts to a previously relevant vs. irrelevant dimension. It discusses the representation of continuous dimensions in the associationist framework. including population coding and thermometer coding, as well as the idea that learning can adjust the breadth of adjustable receptive fields. In phonetics, continuous dimensions have been argued to be split into categories via distributional learning. This chapter reviews what we know about distributional learning and argues that it relies on several distinct learning mechanisms, including error-driven learning at two distinct levels and building a generative model of the speaker. The emergence of perceptual equivalence regions from error-driven learning is discussed, and implications for language change briefly noted with an iterated learning simulation.","PeriodicalId":142675,"journal":{"name":"Changing Minds Changing Tools","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130224361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
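The abstract's combination of distributional learning (splitting a continuous phonetic dimension into categories) and iterated learning (transmission across generations) can be illustrated with a toy simulation. This is a hedged sketch, not the chapter's actual model: 1-D k-means stands in for distributional category learning, the dimension values are arbitrary VOT-like numbers, and `noise` is an assumed production-noise parameter.

```python
import random
import statistics

random.seed(1)

def kmeans_1d(data, iters=20):
    # Two-category distributional learning, approximated by 1-D k-means:
    # the learner infers category centres from the token distribution alone,
    # with no category labels on the input.
    c1, c2 = min(data), max(data)
    for _ in range(iters):
        g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in data if abs(x - c1) > abs(x - c2)]
        if g1:
            c1 = statistics.fmean(g1)
        if g2:
            c2 = statistics.fmean(g2)
    return sorted((c1, c2))

def produce(centres, n_tokens=200, noise=0.5):
    # Each generation produces noisy tokens around its category centres.
    return [random.gauss(c, noise) for c in centres for _ in range(n_tokens)]

# Generation 0 starts with two well-separated categories on a
# continuous dimension (hypothetical VOT-like values 10 and 40).
centres = (10.0, 40.0)
for generation in range(10):
    tokens = produce(centres)        # previous generation's output
    centres = kmeans_1d(tokens)      # next generation learns from it

# Across iterated transmission, the two categories remain distinct;
# their centres drift only slightly, as a random walk over generations.
print(round(centres[0], 1), round(centres[1], 1))
```

With small production noise the categories are stable across generations; increasing `noise` relative to the distance between centres lets drift accumulate, which is one way an iterated learning setup can model gradual change along a phonetic dimension.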