
Latest publications in Implicit Learning

Implicit learning under attentional load
Pub Date : 2019-03-20 DOI: 10.4324/9781315628905-11
M. Wierzchoń, M. Derda
{"title":"Implicit learning under attentional load","authors":"M. Wierzchoń, M. Derda","doi":"10.4324/9781315628905-11","DOIUrl":"https://doi.org/10.4324/9781315628905-11","url":null,"abstract":"","PeriodicalId":186117,"journal":{"name":"Implicit Learning","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133193018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Abstraction in sequence learning
Pub Date : 2019-03-20 DOI: 10.4324/9781315628905-8
Ferenc Kemény, Á. Lukács
{"title":"Abstraction in sequence learning","authors":"Ferenc Kemény, Á. Lukács","doi":"10.4324/9781315628905-8","DOIUrl":"https://doi.org/10.4324/9781315628905-8","url":null,"abstract":"","PeriodicalId":186117,"journal":{"name":"Implicit Learning","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124083399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
The verbalization effect on implicit learning
Pub Date : 2019-03-20 DOI: 10.4324/9781315628905-9
N. Moroshkina, I. Ivanchei, A. D. Karpov, I. Ovchinnikova
{"title":"The verbalization effect on implicit learning","authors":"N. Moroshkina, I. Ivanchei, A. D. Karpov, I. Ovchinnikova","doi":"10.4324/9781315628905-9","DOIUrl":"https://doi.org/10.4324/9781315628905-9","url":null,"abstract":"","PeriodicalId":186117,"journal":{"name":"Implicit Learning","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116515591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Can unconscious structural knowledge be strategically controlled?
Pub Date : 2019-03-20 DOI: 10.4324/9781315628905-7
Elisabeth Norman, R. Scott, Mark C. Price, Emma Jones, Z. Dienes
{"title":"Can unconscious structural knowledge be strategically controlled?","authors":"Elisabeth Norman, R. Scott, Mark C. Price, Emma Jones, Z. Dienes","doi":"10.4324/9781315628905-7","DOIUrl":"https://doi.org/10.4324/9781315628905-7","url":null,"abstract":"","PeriodicalId":186117,"journal":{"name":"Implicit Learning","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122788994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Implicit learning from one’s mistakes
Pub Date : 2019-03-20 DOI: 10.4324/9781315628905-5
Maria Kuvaldina, A. Chetverikov, Alexandr S. Odainic, M. Filippova, N. Andriyanova
{"title":"Implicit learning from one’s mistakes","authors":"Maria Kuvaldina, A. Chetverikov, Alexandr S. Odainic, M. Filippova, N. Andriyanova","doi":"10.4324/9781315628905-5","DOIUrl":"https://doi.org/10.4324/9781315628905-5","url":null,"abstract":"","PeriodicalId":186117,"journal":{"name":"Implicit Learning","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131871969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
The mind is deep
Pub Date : 2019-03-20 DOI: 10.4324/9781315628905-3
Axel Cleeremans
{"title":"The mind is deep","authors":"Axel Cleeremans","doi":"10.4324/9781315628905-3","DOIUrl":"https://doi.org/10.4324/9781315628905-3","url":null,"abstract":"","PeriodicalId":186117,"journal":{"name":"Implicit Learning","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125514013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Consciousness, learning, and control
Pub Date : 2019-03-20 DOI: 10.4324/9781315628905-4
Viktor Allakhverdov, M. Filippova, V. Gershkovich, V. Y. Karpinskaia, T. Scott, N. Vladykina
{"title":"Consciousness, learning, and control","authors":"Viktor Allakhverdov, M. Filippova, V. Gershkovich, V. Y. Karpinskaia, T. Scott, N. Vladykina","doi":"10.4324/9781315628905-4","DOIUrl":"https://doi.org/10.4324/9781315628905-4","url":null,"abstract":"","PeriodicalId":186117,"journal":{"name":"Implicit Learning","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114923193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Implicit Concept Formation
Pub Date : 1900-01-01 DOI: 10.4324/9781315791227-3
Z. Dienes
This thesis provides a conceptual and empirical analysis of implicit concept formation. A review of concept formation studies highlights the need for improving existing methodology in establishing the claim for implicit concept formation. Eight experiments are reported that address this aim. A review of theoretical issues highlights the need for computational modelling to elucidate the nature of implicit learning. Two chapters address the feasibility of different exemplar and Connectionist models in accounting for how subjects perform on tasks typically employed in the implicit learning literature. The first five experiments use a concept formation task that involves classifying "computer people" as belonging to a particular town or income category. A number of manipulations are made of the underlying rule to be learned and of the cover task given subjects. In all cases, the knowledge underlying classification performance can be elicited both by free recall and by forced choice tasks. The final three experiments employ Reber's (e.g., 1989) grammar learning paradigm. More rigorous methods for eliciting the knowledge underlying classification performance are employed than have been used previously by Reber. The knowledge underlying classification performance is not elicited by free recall, but is elicited by a forced-choice measure. The robustness of the learning in this paradigm is investigated by using a secondary task methodology. Concurrent random number generation interferes with all knowledge measures. A number of parameter-free Connectionist and exemplar models of artificial grammar learning are tested against the experimental data. The importance of different assumptions regarding the coding of features and the learning rule used is investigated by determining the performance of the model with and without each assumption. Only one class of Connectionist model passes all the tests. Further, this class of model can simulate subject performance in a different task domain. The relevance of these empirical and theoretical results for understanding implicit learning is discussed, and suggestions are made for future research.
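The abstract contrasts exemplar and Connectionist accounts of artificial grammar learning. As a rough illustration of the exemplar side of that comparison only, and not of any model actually tested in the thesis, the sketch below classifies test strings as grammatical or ungrammatical by their bigram overlap with memorised training strings; the training strings, the overlap measure, and the decision threshold are all assumptions chosen for brevity.

```python
# Illustrative sketch only: a toy exemplar-similarity classifier for a
# Reber-style artificial grammar task. This is NOT one of the models tested
# in the thesis; the strings, the bigram-overlap score, and the threshold
# are assumptions chosen to keep the example short.

from collections import Counter


def bigrams(s: str) -> Counter:
    """Count the overlapping letter bigrams in a string."""
    return Counter(s[i:i + 2] for i in range(len(s) - 1))


def similarity(test: str, exemplar: str) -> int:
    """Number of bigram tokens shared between a test string and one exemplar."""
    return sum((bigrams(test) & bigrams(exemplar)).values())


def classify(test: str, training: list[str], threshold: float = 2.0) -> bool:
    """Call a test string 'grammatical' when its mean bigram overlap with the
    memorised training exemplars reaches a fixed threshold."""
    mean_overlap = sum(similarity(test, ex) for ex in training) / len(training)
    return mean_overlap >= threshold


if __name__ == "__main__":
    # Hypothetical training strings standing in for grammatical exemplars.
    training = ["TSXS", "TSSXXVV", "PTVPS", "PVV"]
    for probe in ["TSXXVV", "QQQQ"]:
        verdict = "grammatical" if classify(probe, training) else "ungrammatical"
        print(probe, "->", verdict)
```

A Connectionist account would instead learn weights from the same training strings rather than storing them; the thesis compares such model classes directly against the experimental data.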
{"title":"Implicit Concept Formation","authors":"Z. Dienes","doi":"10.4324/9781315791227-3","DOIUrl":"https://doi.org/10.4324/9781315791227-3","url":null,"abstract":" This thesis provides a conceptual and empirical analysis of implicit concept formation. A review of concept formation studies highlights the need for improving existing methodology in establish- ing the claim for implicit concept formation. Eight experiments are reported that address this aim. A review of theoretical issues highlights the need for computational modelling to elucidate the nature of implicit learning. Two chapters address the feasibility of different exemplar and Connectionist models in accounting for how subjects perform on tasks typically employed in the implicit learn- ing literature. The first five experiments use a concept formation task that involves classifying \"computer people\" as belonging to a particular town or income category. A number of manipulations are made of the underlying rule to be learned and of the cover task given subjects. In all cases, the knowledge underlying classification performance can be elicited both by free recall and by forced choice tasks. The final three experiments employ Reber's (e.g., 1989) grammar learning paradigm. More rigorous methods for eliciting the knowledge underlying classification performance are employed than have been used previously by Reber. The knowledge underlying clas- sification performance is not elicited by free recall, but is elicited by a forced-choice measure. The robustness of the learning in this paradigm is investigated by using a secondary task methodol- ogy. Concurrent random number generation interferes with all knowledge measures. A number of parameter-free Connectionist and exemplar models of artificial grammar learning are tested against the experimental data. The importance of different assumptions regarding the coding of features and the learning rule used is investigated by determin- ing the performance of the model with and without each assumption. Only one class of Connectionist model passes all the tests. Fur- ther, this class of model can simulate subject performance in a different task domain. The relevance of these empirical and theoretical results for understanding implicit learning is discussed, and suggestions are made for future research.","PeriodicalId":186117,"journal":{"name":"Implicit Learning","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126481265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1