{"title":"分类学习中选择性学习的计算模型","authors":"Lingyun Zhang, G. Cottrell","doi":"10.1109/DEVLRN.2005.1490981","DOIUrl":null,"url":null,"abstract":"Shepard et al. (1961) made empirical and theoretical investigation of the difficulties of different kinds of classifications using both learning and memory tasks. As the difficulty rank mirrors the number of feature dimensions relevant to the category, later researchers took it as evidence that category learning includes learning how to selectively attend to only useful features, i.e. learning to optimally allocate the attention to those dimensions relative to the category (Rosch and Mervis, 1975). We built a recurrent neural network model that sequentially attended to individual features. Only one feature is explicitly available at one time (as in Rehder and Hoffman's eye tracking settings (Render and Hoffman, 2003)) and previous information is represented implicitly in the network. The probabilities of eye movement from one feature to the next is kept as a fixation transition table. The fixations started randomly without much bias on any particular feature or any movement. The network learned the relevant feature(s) and did the classification by sequentially attending to these features. The rank of the learning time qualitatively matched the difficulty of the categories","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"A Computational Model which Learns to Selectively Attend in Category Learning\",\"authors\":\"Lingyun Zhang, G. Cottrell\",\"doi\":\"10.1109/DEVLRN.2005.1490981\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Shepard et al. (1961) made empirical and theoretical investigation of the difficulties of different kinds of classifications using both learning and memory tasks. As the difficulty rank mirrors the number of feature dimensions relevant to the category, later researchers took it as evidence that category learning includes learning how to selectively attend to only useful features, i.e. learning to optimally allocate the attention to those dimensions relative to the category (Rosch and Mervis, 1975). We built a recurrent neural network model that sequentially attended to individual features. Only one feature is explicitly available at one time (as in Rehder and Hoffman's eye tracking settings (Render and Hoffman, 2003)) and previous information is represented implicitly in the network. The probabilities of eye movement from one feature to the next is kept as a fixation transition table. The fixations started randomly without much bias on any particular feature or any movement. The network learned the relevant feature(s) and did the classification by sequentially attending to these features. The rank of the learning time qualitatively matched the difficulty of the categories\",\"PeriodicalId\":297121,\"journal\":{\"name\":\"Proceedings. The 4nd International Conference on Development and Learning, 2005.\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2005-07-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings. 
The 4nd International Conference on Development and Learning, 2005.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DEVLRN.2005.1490981\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DEVLRN.2005.1490981","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Shepard et al. (1961) made empirical and theoretical investigations of the difficulty of different kinds of classifications, using both learning and memory tasks. Because the difficulty ranking mirrors the number of feature dimensions relevant to the category, later researchers took it as evidence that category learning includes learning to selectively attend to only the useful features, i.e. learning to optimally allocate attention to the dimensions relevant to the category (Rosch and Mervis, 1975). We built a recurrent neural network model that sequentially attends to individual features. Only one feature is explicitly available at a time (as in Rehder and Hoffman's eye-tracking setting (Rehder and Hoffman, 2003)), and previous information is represented implicitly in the network. The probabilities of eye movement from one feature to the next are kept in a fixation transition table. Fixations start at random, with no strong bias toward any particular feature or movement. The network learned the relevant feature(s) and performed the classification by sequentially attending to those features. The ranking of learning times qualitatively matched the difficulty ordering of the categories.
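The abstract describes three mechanical pieces: a fixation transition table holding the probability of moving from one feature to the next, a recurrent state that implicitly carries what earlier fixations revealed, and a classifier read out from that state. The sketch below is a minimal illustration of that data flow, not the authors' implementation: it assumes a Shepard Type I problem (three binary dimensions, category determined by the first), fixed random recurrent weights with only a logistic readout trained, and a transition table left uniform, so the paper's key result (fixations becoming biased toward the relevant dimension) is deliberately not reproduced here.

```python
# Minimal sketch (illustrative assumptions, not the authors' model):
# one feature is "fixated" at a time, fixations are sampled from a
# row-stochastic fixation transition table, and a recurrent state
# implicitly accumulates what earlier fixations revealed.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 3      # three binary stimulus dimensions
HIDDEN = 16         # recurrent state size
N_FIXATIONS = 4     # fixations per trial

# All 8 stimuli; Shepard Type I: category depends only on feature 0.
stimuli = np.array([[int(b) for b in f"{i:03b}"] for i in range(8)])
labels = stimuli[:, 0]

# Fixation transition table: P(next fixated feature | current fixated feature).
# Initialised uniformly, i.e. no bias toward any feature or movement.
transition = np.full((N_FEATURES, N_FEATURES), 1.0 / N_FEATURES)
start_probs = np.full(N_FEATURES, 1.0 / N_FEATURES)

# Fixed random recurrent encoder (a reservoir-style stand-in for the trained RNN).
W_in = rng.normal(scale=0.5, size=(HIDDEN, 2 * N_FEATURES))
W_h = rng.normal(scale=0.5, size=(HIDDEN, HIDDEN)) / np.sqrt(HIDDEN)
w_out = np.zeros(HIDDEN)   # logistic readout, trained below

def fixation_sequence():
    """Sample a sequence of fixated feature indices from the transition table."""
    seq = [rng.choice(N_FEATURES, p=start_probs)]
    for _ in range(N_FIXATIONS - 1):
        seq.append(rng.choice(N_FEATURES, p=transition[seq[-1]]))
    return seq

def encode(stimulus, seq):
    """Roll the recurrent state over the fixations; only the fixated feature is visible."""
    h = np.zeros(HIDDEN)
    for f in seq:
        x = np.zeros(2 * N_FEATURES)
        x[f] = 1.0                        # which feature is currently fixated
        x[N_FEATURES + f] = stimulus[f]   # its (binary) value
        h = np.tanh(W_in @ x + W_h @ h)
    return h

# Train only the logistic readout by plain gradient descent on the final state.
lr = 0.1
for epoch in range(200):
    for s, y in zip(stimuli, labels):
        h = encode(s, fixation_sequence())
        p = 1.0 / (1.0 + np.exp(-w_out @ h))
        w_out += lr * (y - p) * h

# Rough accuracy estimate with fixations still sampled uniformly.
correct = sum(
    ((1.0 / (1.0 + np.exp(-w_out @ encode(s, fixation_sequence())))) > 0.5) == y
    for s, y in zip(stimuli, labels)
)
print(f"Type I accuracy with uniform fixations: {correct}/8")
```

In this sketch the transition table never changes, so on roughly a fifth of trials the relevant feature is never fixated and the classifier cannot do better than chance; the model described in the abstract additionally learns to update that table so fixations concentrate on the relevant dimension(s), which is what produces the reported match between learning times and category difficulty.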