
Latest publications in arXiv: Learning

Sampled Weighted Min-Hashing for Large-Scale Topic Mining
Pub Date : 2015-06-24 DOI: 10.1007/978-3-319-19264-2_20
Gibran Fuentes-Pineda, Ivan Vladimir Meza Ruiz
Citations: 1
Elicitation complexity of statistical properties
Pub Date : 2015-06-23 DOI: 10.1093/biomet/asaa093
Rafael M. Frongillo, Ian A. Kash
A property, or statistical functional, is said to be elicitable if it minimizes expected loss for some loss function. The study of which properties are elicitable sheds light on the capabilities and limitations of point estimation and empirical risk minimization. While recent work asks which properties are elicitable, we instead advocate for a more nuanced question: how many dimensions are required to indirectly elicit a given property? This number is called the elicitation complexity of the property. We lay the foundation for a general theory of elicitation complexity, including several basic results about how elicitation complexity behaves, and the complexity of standard properties of interest. Building on this foundation, our main result gives tight complexity bounds for the broad class of Bayes risks. We apply these results to several properties of interest, including variance, entropy, norms, and several classes of financial risk measures. We conclude with discussion and open directions.
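Two standard facts behind this abstract can be checked numerically: the mean is directly elicitable (it minimizes expected squared loss), while the variance is a textbook example of a property with elicitation complexity 2, indirectly elicited through the pair (E[X], E[X²]) and a link function. The sketch below illustrates both on sampled data; the grid search and sample sizes are illustrative choices, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=100_000)

# The mean is directly elicitable: it minimizes expected squared loss,
# so the empirical risk minimizer sits at the sample mean.
candidates = np.linspace(0.0, 6.0, 601)
losses = [np.mean((x - r) ** 2) for r in candidates]
best = candidates[int(np.argmin(losses))]
print(best, x.mean())

# Variance is not directly elicitable, but it has elicitation complexity 2:
# elicit the 2-dimensional property (E[X], E[X^2]) with squared loss on each
# coordinate, then apply the link g(m1, m2) = m2 - m1^2.
m1 = x.mean()         # minimizes E[(X - r)^2]
m2 = np.mean(x ** 2)  # minimizes E[(X^2 - r)^2]
print(m2 - m1 ** 2, x.var())
```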
Citations: 28
Multi-View Factorization Machines
Pub Date : 2015-06-03 DOI: 10.1145/2835776.2835777
Bokai Cao, Hucheng Zhou, Guoqiang Li, Philip S. Yu
For a learning task, data can usually be collected from different sources or be represented from multiple views. For example, laboratory results from different medical examinations are available for disease diagnosis, and each of them can only reflect the health state of a person from a particular aspect/view. Therefore, different views provide complementary information for learning tasks. An effective integration of the multi-view information is expected to facilitate the learning performance. In this paper, we propose a general predictor, named multi-view machines (MVMs), that can effectively include all the possible interactions between features from multiple views. A joint factorization is embedded for the full-order interaction parameters which allows parameter estimation under sparsity. Moreover, MVMs can work in conjunction with different loss functions for a variety of machine learning tasks. A stochastic gradient descent method is presented to learn the MVM model. We further illustrate the advantages of MVMs through comparison with other methods for multi-view classification, including support vector machines (SVMs), support tensor machines (STMs) and factorization machines (FMs).
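The core idea of the predictor can be sketched in a few lines: each view's feature vector is augmented with a constant 1 (so lower-order interactions are included, as in factorization machines), projected through a per-view factor matrix, and the per-factor scores are multiplied across views and summed. This is a simplified illustration written from the abstract, not the paper's exact formulation, and the SGD training step is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def mvm_predict(views, factors):
    """Factorized full-order interaction across views (a simplified sketch).

    views:   list of feature vectors x_v, one per view
    factors: list of matrices U_v of shape (1 + len(x_v), k); each view is
             augmented with a constant 1 so lower-order interactions are
             captured alongside the full cross-view interaction.
    """
    k = factors[0].shape[1]
    score = np.ones(k)
    for x, U in zip(views, factors):
        z = np.concatenate(([1.0], x))  # augment with a bias feature
        score *= z @ U                  # per-factor projection of this view
    return score.sum()

# Two views (e.g. lab results and demographics), rank-4 factorization.
x1, x2 = rng.normal(size=5), rng.normal(size=3)
U1, U2 = rng.normal(size=(6, 4)), rng.normal(size=(4, 4))
print(mvm_predict([x1, x2], [U1, U2]))
```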
Citations: 31
Proficiency Comparison of LADTree and REPTree Classifiers for Credit Risk Forecast
Pub Date : 2015-02-28 DOI: 10.5121/IJCSA.2015.5104
L. Devasena
Predicting credit defaulters is a perilous task for financial industries such as banks. Ascertaining non-payers before granting a loan is a significant and conflict-ridden task for the banker. Classification techniques are a better choice for predictive analysis, such as determining whether a claimant is an honest customer or a cheat. Identifying the outstanding classifier is a risky assignment for any practitioner such as a banker. This allows computer science researchers to pursue efficient research by evaluating different classifiers and finding the best classifier for such predictive problems. This work investigates the productivity of the LADTree and REPTree classifiers for credit risk prediction and compares their fitness through various measures. The German credit dataset has been used to predict credit risk with the help of an open-source machine learning tool.
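The comparison workflow described here (train several classifiers, score them with cross-validation, pick the fittest) can be sketched as follows. LADTree and REPTree are Weka learners with no scikit-learn equivalents, so two stand-in tree learners are compared on synthetic data shaped like a credit-scoring task; the dataset, models, and metrics are illustrative assumptions, not the paper's exact setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier  # boosted trees, in the spirit of LADTree
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier          # pruned tree, in the spirit of REPTree

# Imbalanced binary task standing in for the German credit dataset.
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.7, 0.3], random_state=0)

results = {}
for name, clf in [
    ("boosted trees", GradientBoostingClassifier(random_state=0)),
    ("pruned tree", DecisionTreeClassifier(ccp_alpha=0.01, random_state=0)),
]:
    scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validated accuracy
    results[name] = scores.mean()
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```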
Citations: 4
Fast, Robust and Non-convex Subspace Recovery
Pub Date : 2014-06-24 DOI: 10.1093/imaiai/iax012
Gilad Lerman, Tyler Maunu
This work presents a fast and non-convex algorithm for robust subspace recovery. The data sets considered include inliers drawn around a low-dimensional subspace of a higher dimensional ambient space, and a possibly large portion of outliers that do not lie nearby this subspace. The proposed algorithm, which we refer to as Fast Median Subspace (FMS), is designed to robustly determine the underlying subspace of such data sets, while having lower computational complexity than existing methods. We prove convergence of the FMS iterates to a stationary point. Further, under a special model of data, FMS converges to a point which is near to the global minimum with overwhelming probability. Under this model, we show that the iteration complexity is globally bounded and locally $r$-linear. The latter theorem holds for any fixed fraction of outliers (less than 1) and any fixed positive distance between the limit point and the global minimum. Numerical experiments on synthetic and real data demonstrate its competitive speed and accuracy.
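The kind of iteration the abstract describes can be sketched as iteratively reweighted PCA: points far from the current subspace estimate are down-weighted, pulling the estimate toward a "median" subspace that ignores outliers. This is a minimal sketch written from the abstract's description; the regularization constant `delta`, the fixed iteration count, and the toy data are assumptions, not the paper's exact algorithm or experiments.

```python
import numpy as np

def fms_sketch(X, d, n_iter=50, delta=1e-10):
    """Estimate a d-dimensional subspace robust to outliers.

    X: (n, D) data matrix; returns a (D, d) orthonormal basis.
    """
    # Initialize with ordinary PCA (top-d right singular vectors).
    V = np.linalg.svd(X, full_matrices=False)[2][:d].T
    for _ in range(n_iter):
        resid = X - (X @ V) @ V.T  # residual to the current subspace
        w = 1.0 / np.maximum(np.linalg.norm(resid, axis=1), delta)
        # Weighted PCA: top-d eigenvectors of sum_i w_i x_i x_i^T,
        # computed via an SVD of the row-weighted data matrix.
        V = np.linalg.svd(np.sqrt(w)[:, None] * X, full_matrices=False)[2][:d].T
    return V

rng = np.random.default_rng(2)
# 200 inliers near the line spanned by (1, 0, 0), plus 50 gross outliers.
inliers = rng.normal(size=(200, 1)) @ np.array([[1.0, 0.0, 0.0]]) \
    + 0.01 * rng.normal(size=(200, 3))
outliers = rng.normal(size=(50, 3))
V = fms_sketch(np.vstack([inliers, outliers]), d=1)
print(np.abs(V[:, 0]))  # should align with the true direction (1, 0, 0)
```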
Citations: 67
Cross-situational and supervised learning in the emergence of communication
Pub Date : 2009-01-26 DOI: 10.1075/is.12.1.05fon
J. Fontanari, A. Cangelosi
Scenarios for the emergence or bootstrap of a lexicon involve the repeated interaction between at least two agents who must reach a consensus on how to name N objects using H words. Here we consider minimal models of two types of learning algorithms: cross-situational learning, in which the individuals determine the meaning of a word by looking for something in common across all observed uses of that word, and supervised operant conditioning learning, in which there is strong feedback between individuals about the intended meaning of the words. Despite the stark differences between these learning schemes, we show that they yield the same communication accuracy in the realistic limits of large N and H, which coincides with the result of the classical occupancy problem of randomly assigning N objects to H words.
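The cross-situational half of this setup can be illustrated with a toy learner that keeps, for each word, the set of objects seen in every context where that word was used, and intersects that set across exposures. The context size, number of exposures, and lexicon shape below are assumptions for illustration, not the paper's model parameters.

```python
import random

random.seed(3)
N_OBJECTS, CONTEXT = 10, 4
lexicon = {f"word{i}": i for i in range(N_OBJECTS)}  # the speaker's hidden mapping

# Each word starts out ambiguous over all N objects.
candidates = {w: set(range(N_OBJECTS)) for w in lexicon}
for _ in range(60):
    target = random.randrange(N_OBJECTS)
    word = f"word{target}"
    # The context contains the named object plus random distractors.
    context = {target} | set(random.sample(range(N_OBJECTS), CONTEXT - 1))
    candidates[word] &= context  # keep only referents common to every use

learned = sum(candidates[w] == {m} for w, m in lexicon.items())
print(f"{learned}/{N_OBJECTS} words resolved to a unique referent")
```

Since the true referent appears in every context where its word is used, it can never be eliminated; learning succeeds once the distractors have been intersected away.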
Citations: 10
Entropy, Perception, and Relativity
Pub Date : 2006-04-01 DOI: 10.1037/e645982007-001
S. Jaegar
In this paper, I expand Shannon's definition of entropy into a new form of entropy that allows integration of information from different random events. Shannon's notion of entropy is a special case of my more general definition of entropy. I define probability using a so-called performance function, which is de facto an exponential distribution. Assuming that my general notion of entropy reflects the true uncertainty about a probabilistic event, I understand that our perceived uncertainty differs. I claim that our perception is the result of two opposing forces similar to the two famous antagonists in Chinese philosophy: Yin and Yang. Based on this idea, I show that our perceived uncertainty matches the true uncertainty in points determined by the golden ratio. I demonstrate that the well-known sigmoid function, which we typically employ in artificial neural networks as a non-linear threshold function, describes the actual performance. Furthermore, I provide a motivation for the time dilation in Einstein's Special Relativity, basically claiming that although time dilation conforms with our perception, it does not correspond to reality. At the end of the paper, I show how to apply this theoretical framework to practical applications. I present recognition rates for a pattern recognition problem, and also propose a network architecture that can take advantage of general entropy to solve complex decision problems.
Citations: 0