Latent Relations at Steady-state with Associative Nets

Cognitive Science. Pub Date: 2024-09-16. DOI: 10.1111/cogs.13494 (https://onlinelibrary.wiley.com/doi/10.1111/cogs.13494)
Kevin D. Shabahang, Hyungwook Yim, Simon J. Dennis
{"title":"关联网络稳态下的潜在关系","authors":"Kevin D. Shabahang,&nbsp;Hyungwook Yim,&nbsp;Simon J. Dennis","doi":"10.1111/cogs.13494","DOIUrl":null,"url":null,"abstract":"<p>Models of word meaning that exploit patterns of word usage across large text corpora to capture semantic relations, like the topic model and word2vec, condense word-by-context co-occurrence statistics to induce representations that organize words along semantically relevant dimensions (e.g., synonymy, antonymy, hyponymy, etc.). However, their reliance on latent representations leaves them vulnerable to interference, makes them slow learners, and commits to a dual-systems account of episodic and semantic memory. We show how it is possible to construct the meaning of words online during retrieval to avoid these limitations. We implement a spreading activation account of word meaning in an associative net, a one-layer highly recurrent network of associations, called a Dynamic-Eigen-Net, that we developed to address the limitations of earlier variants of associative nets when scaling up to deal with unstructured input domains like natural language text. We show that spreading activation using a one-hot coded Dynamic-Eigen-Net outperforms the topic model and reaches similar levels of performance as word2vec when predicting human free associations and word similarity ratings. Latent Semantic Analysis vectors reached similar levels of performance when constructed by applying dimensionality reduction to the Shifted Positive Pointwise Mutual Information but showed poorer predictability for free associations when using an entropy-based normalization. An analysis of the rate at which the Dynamic-Eigen-Net reaches asymptotic performance shows that it learns faster than word2vec. We argue in favor of the Dynamic-Eigen-Net as a fast learner, with a single-store, that is not subject to catastrophic interference. We present it as an alternative to instance models when delegating the induction of latent relationships to process assumptions instead of assumptions about representation.</p>","PeriodicalId":2,"journal":{"name":"ACS Applied Bio Materials","volume":null,"pages":null},"PeriodicalIF":4.6000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13494","citationCount":"0","resultStr":"{\"title\":\"Latent Relations at Steady-state with Associative Nets\",\"authors\":\"Kevin D. Shabahang,&nbsp;Hyungwook Yim,&nbsp;Simon J. Dennis\",\"doi\":\"10.1111/cogs.13494\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Models of word meaning that exploit patterns of word usage across large text corpora to capture semantic relations, like the topic model and word2vec, condense word-by-context co-occurrence statistics to induce representations that organize words along semantically relevant dimensions (e.g., synonymy, antonymy, hyponymy, etc.). However, their reliance on latent representations leaves them vulnerable to interference, makes them slow learners, and commits to a dual-systems account of episodic and semantic memory. We show how it is possible to construct the meaning of words online during retrieval to avoid these limitations. 
We implement a spreading activation account of word meaning in an associative net, a one-layer highly recurrent network of associations, called a Dynamic-Eigen-Net, that we developed to address the limitations of earlier variants of associative nets when scaling up to deal with unstructured input domains like natural language text. We show that spreading activation using a one-hot coded Dynamic-Eigen-Net outperforms the topic model and reaches similar levels of performance as word2vec when predicting human free associations and word similarity ratings. Latent Semantic Analysis vectors reached similar levels of performance when constructed by applying dimensionality reduction to the Shifted Positive Pointwise Mutual Information but showed poorer predictability for free associations when using an entropy-based normalization. An analysis of the rate at which the Dynamic-Eigen-Net reaches asymptotic performance shows that it learns faster than word2vec. We argue in favor of the Dynamic-Eigen-Net as a fast learner, with a single-store, that is not subject to catastrophic interference. We present it as an alternative to instance models when delegating the induction of latent relationships to process assumptions instead of assumptions about representation.</p>\",\"PeriodicalId\":2,\"journal\":{\"name\":\"ACS Applied Bio Materials\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2024-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13494\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACS Applied Bio Materials\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/cogs.13494\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATERIALS SCIENCE, BIOMATERIALS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Bio Materials","FirstCategoryId":"102","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/cogs.13494","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATERIALS SCIENCE, BIOMATERIALS","Score":null,"Total":0}
Citations: 0

Abstract

Models of word meaning that exploit patterns of word usage across large text corpora to capture semantic relations, like the topic model and word2vec, condense word-by-context co-occurrence statistics to induce representations that organize words along semantically relevant dimensions (e.g., synonymy, antonymy, hyponymy, etc.). However, their reliance on latent representations leaves them vulnerable to interference, makes them slow learners, and commits them to a dual-systems account of episodic and semantic memory. We show how it is possible to construct the meaning of words online during retrieval to avoid these limitations. We implement a spreading activation account of word meaning in an associative net, a one-layer highly recurrent network of associations, called a Dynamic-Eigen-Net, that we developed to address the limitations of earlier variants of associative nets when scaling up to deal with unstructured input domains like natural language text. We show that spreading activation using a one-hot coded Dynamic-Eigen-Net outperforms the topic model and reaches similar levels of performance as word2vec when predicting human free associations and word similarity ratings. Latent Semantic Analysis vectors reached similar levels of performance when constructed by applying dimensionality reduction to the Shifted Positive Pointwise Mutual Information matrix, but showed poorer predictability for free associations when using an entropy-based normalization. An analysis of the rate at which the Dynamic-Eigen-Net reaches asymptotic performance shows that it learns faster than word2vec. We argue in favor of the Dynamic-Eigen-Net as a fast, single-store learner that is not subject to catastrophic interference. We present it as an alternative to instance models when delegating the induction of latent relationships to process assumptions instead of assumptions about representation.
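To make the retrieval-time construction of meaning concrete, here is a minimal sketch of spreading activation to steady state in a one-layer recurrent associative net. It assumes a simple Hebbian co-occurrence matrix over a toy corpus and re-injects a one-hot cue on every iteration; the corpus, the row normalization, and the convergence test are all illustrative choices, and the authors' Dynamic-Eigen-Net departs from this basic scheme in how it adjusts the network for each cue.

```python
import numpy as np

# Toy corpus; the paper trains on large unstructured natural-language text.
sentences = [["dog", "barks"], ["dog", "bites"], ["cat", "meows"], ["cat", "bites"]]
vocab = sorted({w for s in sentences for w in s})
idx = {w: i for i, w in enumerate(vocab)}

# Hebbian-style association matrix: within-sentence co-occurrence counts,
# row-normalized so that spreading activation stays bounded.
W = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    for u in s:
        for v in s:
            if u != v:
                W[idx[u], idx[v]] += 1.0
W /= W.sum(axis=1, keepdims=True)

def steady_state(W, cue_idx, max_iter=500, tol=1e-9):
    """Clamp a one-hot cue and spread activation until a fixed point."""
    cue = np.zeros(W.shape[0])
    cue[cue_idx] = 1.0
    a = cue.copy()
    for _ in range(max_iter):
        nxt = W @ a + cue              # recurrent spread plus the re-injected cue
        nxt /= np.linalg.norm(nxt)     # keep the pattern on the unit sphere
        if np.linalg.norm(nxt - a) < tol:
            return nxt
        a = nxt
    return a

activation = steady_state(W, idx["dog"])
# Rank words by steady-state activation: an online, retrieval-time
# "meaning" of the cue, with no latent vectors stored anywhere.
print(sorted(zip(vocab, activation), key=lambda t: -t[1]))
```

Because the association matrix itself is the only store, new text updates W directly with one-shot Hebbian increments, which is the sense in which such a net can be a fast, single-store learner.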
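The Latent Semantic Analysis comparison follows the standard Shifted Positive PMI construction of Levy and Goldberg: shift PMI down by log k, clip at zero, then apply a truncated SVD. A sketch is below; the shift k, the target dimensionality, and the square-root singular-value weighting are conventional choices, not necessarily the paper's exact settings.

```python
import numpy as np

def sppmi_vectors(C, k=1.0, dim=50):
    """Word vectors from the Shifted Positive PMI matrix.

    C   : (word x context) co-occurrence count matrix
    k   : PMI shift, SPPMI = max(PMI - log k, 0)
    dim : number of latent dimensions kept after the SVD
    """
    total = C.sum()
    p_w = C.sum(axis=1, keepdims=True) / total   # marginal P(word)
    p_c = C.sum(axis=0, keepdims=True) / total   # marginal P(context)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((C / total) / (p_w @ p_c))  # log P(w,c) / (P(w) P(c))
    sppmi = np.maximum(pmi - np.log(k), 0.0)     # shift, then clip negatives to 0
    sppmi = np.nan_to_num(sppmi)                 # zero counts contribute zero weight
    u, s, _ = np.linalg.svd(sppmi, full_matrices=False)
    d = min(dim, s.size)
    return u[:, :d] * np.sqrt(s[:d])             # symmetrically weighted word vectors
```

Cosine similarity between the resulting rows is then the usual predictor of human word-similarity ratings.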
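The entropy-based normalization that showed poorer free-association predictability is, in classic LSA, the log-entropy weighting applied before dimensionality reduction. A sketch of that standard scheme follows; the paper's exact normalization may differ in detail.

```python
import numpy as np

def log_entropy(C):
    """Classic LSA log-entropy weighting of a (term x document) count matrix.

    Local weight: log(1 + count). Global weight: 1 minus the term's
    normalized entropy over documents, so terms spread evenly across
    contexts are down-weighted toward zero.
    """
    row_sums = np.maximum(C.sum(axis=1, keepdims=True), 1e-12)
    p = C / row_sums                              # P(document | term)
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = p * np.log(p)                     # 0 * log 0 -> nan, dropped below
    entropy = -np.nansum(plogp, axis=1, keepdims=True)
    global_w = 1.0 - entropy / np.log(C.shape[1])
    return np.log(C + 1.0) * global_w
```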
