
Computational Brain & Behavior: Latest Publications

Neural habituation enhances novelty detection: an EEG study of rapidly presented words.
Pub Date: 2020-06-01 Epub Date: 2019-12-18 DOI: 10.1007/s42113-019-00071-w
Len P L Jacob, David E Huber

Huber and O'Reilly (2003) proposed that neural habituation aids perceptual processing, separating neural responses to currently viewed objects from recently viewed objects. However, synaptic depression has costs, producing repetition deficits. Prior work confirmed the transition from repetition benefits to deficits with increasing duration of a prime object, but the prediction of enhanced novelty detection was not tested. The current study examined this prediction with a same/different word priming task, using support vector machine (SVM) classification of EEG data, ERP analyses focused on the N400, and dynamic neural network simulations fit to behavioral data to provide a priori predictions of the ERP effects. Subjects made same/different judgments to a response word in relation to an immediately preceding brief target word; prime durations were short (50 ms) or long (400 ms), and long durations decreased P100/N170 responses to the target word, suggesting that this manipulation increased habituation. Following long duration primes, correct "different" judgments of primed response words increased, evidencing enhanced novelty detection. An SVM classifier predicted trial-by-trial behavior with 66.34% accuracy on held-out data, with greatest predictive power at a time pattern consistent with the N400. The habituation model was augmented with a maintained semantics layer (i.e., working memory) to generate behavior and N400 predictions. A second experiment used response-locked ERPs, confirming the model's assumption that residual activation in working memory is the basis of novelty decisions. These results support the theory that neural habituation enhances novelty detection, and the model assumption that the N400 reflects updating of semantic information in working memory.
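The listing includes no analysis code, but the mechanism the abstract builds on, habituation through synaptic depression, can be illustrated with a short Python sketch. This is a minimal, generic resource-depletion model rather than the authors' published dynamics: the single detector unit, the 1 ms time step, and the depletion/recovery rates are illustrative assumptions, chosen only to show why a longer prime weakens the response to an immediately repeated word.

```python
def repeated_target_response(prime_ms, dt=1.0, depletion=0.005, recovery=0.003):
    """Toy synaptic-depression dynamics (illustrative parameters, not the
    published model): effective output = activation * available resources;
    resources are consumed by output and recover slowly toward 1.0.
    Returns the peak output to a brief target word that repeats the prime."""
    resources = 1.0
    activation = 1.0  # the word's detector is driven while the word is on screen

    # Prime presentation: resources are depleted for its whole duration.
    for _ in range(int(prime_ms / dt)):
        output = activation * resources
        resources += dt * (recovery * (1.0 - resources) - depletion * output)

    # Target presentation (same word, so the same depleted detector is driven).
    peak = 0.0
    for _ in range(int(50 / dt)):  # brief 50 ms target, as in the experiment
        output = activation * resources
        peak = max(peak, output)
        resources += dt * (recovery * (1.0 - resources) - depletion * output)
    return peak

for prime_ms in (50, 400):
    print(f"{prime_ms} ms prime -> response to repeated target: "
          f"{repeated_target_response(prime_ms):.2f}")
```

In this toy run the 400 ms prime roughly halves the response to an immediately repeated word relative to the 50 ms prime; a weakened "old" signal of that kind is the sense in which habituation is argued to make genuinely novel words easier to detect.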

{"title":"Neural habituation enhances novelty detection: an EEG study of rapidly presented words.","authors":"Len P L Jacob, David E Huber","doi":"10.1007/s42113-019-00071-w","DOIUrl":"10.1007/s42113-019-00071-w","url":null,"abstract":"<p><p>Huber and O'Reilly (2003) proposed that neural habituation aids perceptual processing, separating neural responses to currently viewed objects from recently viewed objects. However, synaptic depression has costs, producing repetition deficits. Prior work confirmed the transition from repetition benefits to deficits with increasing duration of a prime object, but the prediction of enhanced novelty detection was not tested. The current study examined this prediction with a same/different word priming task, using support vector machine (SVM) classification of EEG data, ERP analyses focused on the N400, and dynamic neural network simulations fit to behavioral data to provide a priori predictions of the ERP effects. Subjects made same/different judgements to a response word in relation to an immediately preceding brief target word; prime durations were short (50ms) or long (400ms), and long durations decreased P100/N170 responses to the target word, suggesting that this manipulation increased habituation. Following long duration primes, correct \"different\" judgments of primed response words increased, evidencing enhanced novelty detection. An SVM classifier predicted trial-by-trial behavior with 66.34% accuracy on held-out data, with greatest predictive power at a time pattern consistent with the N400. The habituation model was augmented with a maintained semantics layer (i.e., working memory) to generate behavior and N400 predictions. A second experiment used response-locked ERPs, confirming the model's assumption that residual activation in working memory is the basis of novelty decisions. These results support the theory that neural habituation enhances novelty detection, and the model assumption that the N400 reflects updating of semantic information in working memory.</p>","PeriodicalId":72660,"journal":{"name":"Computational brain & behavior","volume":"3 2","pages":"208-227"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7447193/pdf/nihms-1546975.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38414587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Explanation or Modeling: a Reply to Kellen and Klauer
Pub Date: 2020-04-15 DOI: 10.1007/s42113-020-00077-9
Marco Ragni, P. Johnson-Laird
{"title":"Explanation or Modeling: a Reply to Kellen and Klauer","authors":"Marco Ragni, P. Johnson-Laird","doi":"10.1007/s42113-020-00077-9","DOIUrl":"https://doi.org/10.1007/s42113-020-00077-9","url":null,"abstract":"","PeriodicalId":72660,"journal":{"name":"Computational brain & behavior","volume":"4 1","pages":"354 - 361"},"PeriodicalIF":0.0,"publicationDate":"2020-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74798285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Beyond Rescorla–Wagner: the Ups and Downs of Learning
Pub Date: 2020-04-10 DOI: 10.1007/s42113-021-00103-4
G. Calcagni, Justin A. Harris, R. Pellón
{"title":"Beyond Rescorla–Wagner: the Ups and Downs of Learning","authors":"G. Calcagni, Justin A. Harris, R. Pellón","doi":"10.1007/s42113-021-00103-4","DOIUrl":"https://doi.org/10.1007/s42113-021-00103-4","url":null,"abstract":"","PeriodicalId":72660,"journal":{"name":"Computational brain & behavior","volume":"94 1","pages":"355 - 379"},"PeriodicalIF":0.0,"publicationDate":"2020-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74241732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Real-time Adaptive Design Optimization Within Functional MRI Experiments
Pub Date: 2020-04-02 DOI: 10.1007/s42113-020-00079-7
Giwon Bahg, P. Sederberg, Jay I. Myung, Xiangrui Li, M. Pitt, Zhong-Lin Lu, Brandon M. Turner
{"title":"Real-time Adaptive Design Optimization Within Functional MRI Experiments","authors":"Giwon Bahg, P. Sederberg, Jay I. Myung, Xiangrui Li, M. Pitt, Zhong-Lin Lu, Brandon M. Turner","doi":"10.1007/s42113-020-00079-7","DOIUrl":"https://doi.org/10.1007/s42113-020-00079-7","url":null,"abstract":"","PeriodicalId":72660,"journal":{"name":"Computational brain & behavior","volume":"9 1","pages":"400 - 429"},"PeriodicalIF":0.0,"publicationDate":"2020-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73140436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Modeling the Wason Selection Task: a Response to Ragni and Johnson-Laird (2020)
Pub Date: 2020-04-01 DOI: 10.1007/s42113-020-00086-8
David Kellen, K. C. Klauer
{"title":"Modeling the Wason Selection Task: a Response to Ragni and Johnson-Laird (2020)","authors":"David Kellen, K. C. Klauer","doi":"10.1007/s42113-020-00086-8","DOIUrl":"https://doi.org/10.1007/s42113-020-00086-8","url":null,"abstract":"","PeriodicalId":72660,"journal":{"name":"Computational brain & behavior","volume":"25 1","pages":"362 - 367"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83226915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
A Cautionary Note on Evidence-Accumulation Models of Response Inhibition in the Stop-Signal Paradigm
Pub Date: 2020-03-30 DOI: 10.1007/s42113-020-00075-x
D. Matzke, G. Logan, A. Heathcote
{"title":"A Cautionary Note on Evidence-Accumulation Models of Response Inhibition in the Stop-Signal Paradigm","authors":"D. Matzke, G. Logan, A. Heathcote","doi":"10.1007/s42113-020-00075-x","DOIUrl":"https://doi.org/10.1007/s42113-020-00075-x","url":null,"abstract":"","PeriodicalId":72660,"journal":{"name":"Computational brain & behavior","volume":"40 1","pages":"269 - 288"},"PeriodicalIF":0.0,"publicationDate":"2020-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75736645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
Modeling Preference Reversals in Context Effects over Time
Pub Date: 2020-03-27 DOI: 10.1007/s42113-020-00078-8
Andrea M. Cataldo, A. Cohen
{"title":"Modeling Preference Reversals in Context Effects over Time","authors":"Andrea M. Cataldo, A. Cohen","doi":"10.1007/s42113-020-00078-8","DOIUrl":"https://doi.org/10.1007/s42113-020-00078-8","url":null,"abstract":"","PeriodicalId":72660,"journal":{"name":"Computational brain & behavior","volume":"46 1","pages":"101 - 123"},"PeriodicalIF":0.0,"publicationDate":"2020-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80400723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Hierarchical Hidden Markov Models for Response Time Data
Pub Date: 2020-03-26 DOI: 10.1007/s42113-020-00076-w
D. Kunkel, Zhifei Yan, P. Craigmile, M. Peruggia, T. Van Zandt
{"title":"Hierarchical Hidden Markov Models for Response Time Data","authors":"D. Kunkel, Zhifei Yan, P. Craigmile, M. Peruggia, T. Van Zandt","doi":"10.1007/s42113-020-00076-w","DOIUrl":"https://doi.org/10.1007/s42113-020-00076-w","url":null,"abstract":"","PeriodicalId":72660,"journal":{"name":"Computational brain & behavior","volume":"419 1","pages":"70 - 86"},"PeriodicalIF":0.0,"publicationDate":"2020-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76629596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Generalization at Retrieval Using Associative Networks with Transient Weight Changes
Pub Date: 2020-03-21 DOI: 10.31234/osf.io/3nzgh
Kevin D. Shabahang, H. Yim, S. Dennis
Without having seen a bigram like “her buffalo”, you can easily tell that it is congruent because “buffalo” can be aligned with more common nouns like “cat” or “dog” that have been seen in contexts like “her cat” or “her dog”—the novel bigram structurally aligns with representations in memory. We present a new class of associative nets we call Dynamic-Eigen-Nets, and provide simulations that show how they generalize to patterns that are structurally aligned with the training domain. Linear-Associative-Nets respond with the same pattern regardless of input, motivating the introduction of saturation to facilitate other response states. However, models using saturation cannot readily generalize to novel, but structurally aligned patterns. Dynamic-Eigen-Nets address this problem by dynamically biasing the eigenspectrum towards external input using temporary weight changes. We demonstrate how a two-slot Dynamic-Eigen-Net trained on a text corpus provides an account of bigram judgment-of-grammaticality and lexical decision tasks, showing it can better capture syntactic regularities from the corpus compared to the Brain-State-in-a-Box and the Linear-Associative-Net. We end with a simulation showing how a Dynamic-Eigen-Net is sensitive to syntactic violations introduced in bigrams, even after the associations that encode those bigrams are deleted from memory. Over all simulations, the Dynamic-Eigen-Net reliably outperforms the Brain-State-in-a-Box and the Linear-Associative-Net. We propose Dynamic-Eigen-Nets as associative nets that generalize at retrieval, instead of encoding, through recurrent feedback.
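As a rough companion to this abstract, the Python/NumPy sketch below contrasts a fixed Hebbian auto-associator with one given a temporary, probe-aligned weight change at retrieval. It is not the published Dynamic-Eigen-Net: the vector dimensionality, the three studied bigrams, and the transient scaling factor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v)

dim = 64  # illustrative dimensionality for the random word vectors
words = {w: unit(rng.standard_normal(dim))
         for w in ["her", "his", "cat", "dog", "buffalo"]}

def bigram(w1, w2):
    # Two-slot pattern: first-word slot concatenated with second-word slot.
    return unit(np.concatenate([words[w1], words[w2]]))

# Hebbian (outer-product) storage of a few studied bigrams.
studied = [("her", "cat"), ("her", "dog"), ("his", "cat")]
W = sum(np.outer(p, p) for p in (bigram(a, b) for a, b in studied))

def settle(probe, transient=0.0, steps=20):
    """Iterate the net to a response state. `transient` adds a temporary,
    probe-aligned weight change that is discarded after retrieval (a loose
    stand-in for biasing the net toward the current input; illustrative only)."""
    W_eff = W + transient * np.outer(probe, probe)
    x = probe.copy()
    for _ in range(steps):
        x = unit(W_eff @ x)
    return x

for a, b in [("her", "buffalo"), ("buffalo", "her")]:
    p = bigram(a, b)
    support = float(p @ W @ p)           # how strongly memory resonates with the probe
    static = float(p @ settle(p))        # fixed net settles into its dominant stored pattern
    dynamic = float(p @ settle(p, 2.0))  # transient change keeps the state near the probe
    print(f"{a} {b}: support={support:.2f}  static={static:.2f}  dynamic={dynamic:.2f}")
```

In this toy run, memory resonates more with the structurally aligned novel bigram ("her buffalo", whose slots overlap with studied pairs like "her cat") than with the reversed "buffalo her"; the fixed net drifts toward the same dominant pattern for every probe, while the temporary probe-aligned change keeps the settled state close to the current input. That is the qualitative contrast the abstract draws, not a reproduction of its reported simulations.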
{"title":"Generalization at Retrieval Using Associative Networks with Transient Weight Changes","authors":"Kevin D. Shabahang, H. Yim, S. Dennis","doi":"10.31234/osf.io/3nzgh","DOIUrl":"https://doi.org/10.31234/osf.io/3nzgh","url":null,"abstract":"Without having seen a bigram like “her buffalo”, you can easily tell that it is congruent because “buffalo” can be aligned with more common nouns like “cat” or “dog” that have been seen in contexts like “her cat” or “her dog”—the novel bigram structurally aligns with representations in memory. We present a new class of associative nets we call Dynamic-Eigen-Nets , and provide simulations that show how they generalize to patterns that are structurally aligned with the training domain. Linear-Associative-Nets respond with the same pattern regardless of input, motivating the introduction of saturation to facilitate other response states. However, models using saturation cannot readily generalize to novel, but structurally aligned patterns. Dynamic-Eigen-Nets address this problem by dynamically biasing the eigenspectrum towards external input using temporary weight changes. We demonstrate how a two-slot Dynamic-Eigen-Net trained on a text corpus provides an account of bigram judgment-of-grammaticality and lexical decision tasks, showing it can better capture syntactic regularities from the corpus compared to the Brain-State-in-a-Box and the Linear-Associative-Net. We end with a simulation showing how a Dynamic-Eigen-Net is sensitive to syntactic violations introduced in bigrams, even after the associations that encode those bigrams are deleted from memory. Over all simulations, the Dynamic-Eigen-Net reliably outperforms the Brain-State-in-a-Box and the Linear-Associative-Net. We propose Dynamic-Eigen-Nets as associative nets that generalize at retrieval, instead of encoding, through recurrent feedback.","PeriodicalId":72660,"journal":{"name":"Computational brain & behavior","volume":"6 1","pages":"124-155"},"PeriodicalIF":0.0,"publicationDate":"2020-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88739560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
A Hierarchical Latent Space Network Model for Population Studies of Functional Connectivity
Pub Date: 2020-03-19 DOI: 10.1007/s42113-020-00080-0
James D. Wilson, S. Cranmer, Zhonglin Lu
{"title":"A Hierarchical Latent Space Network Model for Population Studies of Functional Connectivity","authors":"James D. Wilson, S. Cranmer, Zhonglin Lu","doi":"10.1007/s42113-020-00080-0","DOIUrl":"https://doi.org/10.1007/s42113-020-00080-0","url":null,"abstract":"","PeriodicalId":72660,"journal":{"name":"Computational brain & behavior","volume":"70 1","pages":"384 - 399"},"PeriodicalIF":0.0,"publicationDate":"2020-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86748658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3