
Journal of Machine Learning Research: Latest Publications

Pathfinder: Parallel quasi-Newton variational inference.
IF 5.2 · CAS Tier 3, Computer Science · Q1, Automation & Control Systems · Pub Date: 2022-01-01
Lu Zhang, Bob Carpenter, Andrew Gelman, Aki Vehtari

We propose Pathfinder, a variational method for approximately sampling from differentiable probability densities. Starting from a random initialization, Pathfinder locates normal approximations to the target density along a quasi-Newton optimization path, with local covariance estimated using the inverse Hessian estimates produced by the optimizer. Pathfinder returns draws from the approximation with the lowest estimated Kullback-Leibler (KL) divergence to the target distribution. We evaluate Pathfinder on a wide range of posterior distributions, demonstrating that its approximate draws are better than those from automatic differentiation variational inference (ADVI) and comparable to those produced by short chains of dynamic Hamiltonian Monte Carlo (HMC), as measured by 1-Wasserstein distance. Compared to ADVI and short dynamic HMC runs, Pathfinder requires one to two orders of magnitude fewer log density and gradient evaluations, with greater reductions for more challenging posteriors. Importance resampling over multiple runs of Pathfinder improves the diversity of approximate draws, reducing 1-Wasserstein distance further and providing a measure of robustness to optimization failures on plateaus, saddle points, or in minor modes. The Monte Carlo KL divergence estimates are embarrassingly parallelizable in the core Pathfinder algorithm, as are multiple runs in the resampling version, further increasing Pathfinder's speed advantage with multiple cores.
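
The core loop can be sketched in a few lines. The sketch below is illustrative only: it replaces L-BFGS with plain gradient ascent, uses a toy 2-D Gaussian target (so the local covariance is exactly recoverable), and the step size, path length, and Monte Carlo sample count are made-up values, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: a correlated 2-D Gaussian, so the ideal local normal
# approximation is known and the sketch is easy to check.
Sigma = np.array([[2.0, 0.9], [0.9, 1.0]])
Prec = np.linalg.inv(Sigma)

def logp(x):                       # log density up to an additive constant
    return -0.5 * x @ Prec @ x

def grad(x):
    return -Prec @ x

def num_hess(x, eps=1e-4):         # central-difference Hessian of logp
    d = len(x)
    H = np.zeros((d, d))
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        H[:, i] = (grad(x + e) - grad(x - e)) / (2 * eps)
    return H

def mc_kl(mu, cov, n=200):         # Monte Carlo KL(q || p), up to p's constant
    L = np.linalg.cholesky(cov)
    z = rng.standard_normal((n, len(mu)))
    xs = mu + z @ L.T
    logq = -0.5 * np.sum(z ** 2, axis=1) - np.log(np.diag(L)).sum()
    return np.mean(logq - np.array([logp(xi) for xi in xs]))

# Follow an optimization path from a random start; at every iterate build a
# normal approximation N(x_k, -H_k^{-1}).  (Pathfinder proper uses L-BFGS and
# its inverse-Hessian estimates; plain gradient ascent stands in for it here.)
x = 5.0 * rng.standard_normal(2)
path = []
for _ in range(30):
    cov = np.linalg.inv(-num_hess(x))
    path.append((mc_kl(x, cov), x.copy(), cov))
    x = x + 0.2 * grad(x)

# Return draws from the path point with the lowest estimated KL divergence.
kl_best, mu, cov = min(path, key=lambda t: t[0])
draws = rng.multivariate_normal(mu, cov, size=500)
```

Because the toy target is Gaussian, the recovered covariance matches `Sigma` and the selected mean lies near the mode; the multiple-run, importance-resampled variant of the algorithm would repeat this whole loop from independent initializations in parallel.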

Journal of Machine Learning Research, vol. 23, 2022. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12987689/pdf/
Citations: 0
The Importance of Being Correlated: Implications of Dependence in Joint Spectral Inference across Multiple Networks.
IF 6 · CAS Tier 3, Computer Science · Q1, Automation & Control Systems · Pub Date: 2022-01-01
Konstantinos Pantazis, Avanti Athreya, Jesús Arroyo, William N Frost, Evan S Hill, Vince Lyzinski

Spectral inference on multiple networks is a rapidly-developing subfield of graph statistics. Recent work has demonstrated that joint, or simultaneous, spectral embedding of multiple independent networks can deliver more accurate estimation than individual spectral decompositions of those same networks. Such inference procedures typically rely heavily on independence assumptions across the multiple network realizations, and even in this case, little attention has been paid to the induced network correlation that can be a consequence of such joint embeddings. In this paper, we present a generalized omnibus embedding methodology and we provide a detailed analysis of this embedding across both independent and correlated networks, the latter of which significantly extends the reach of such procedures, and we describe how this omnibus embedding can itself induce correlation. This leads us to distinguish between inherent correlation-that is, the correlation that arises naturally in multisample network data-and induced correlation, which is an artifice of the joint embedding methodology. We show that the generalized omnibus embedding procedure is flexible and robust, and we prove both consistency and a central limit theorem for the embedded points. We examine how induced and inherent correlation can impact inference for network time series data, and we provide network analogues of classical questions such as the effective sample size for more generally correlated data. Further, we show how an appropriately calibrated generalized omnibus embedding can detect changes in real biological networks that previous embedding procedures could not discern, confirming that the effect of inherent and induced correlation can be subtle and transformative. By allowing for and deconstructing both forms of correlation, our methodology widens the scope of spectral techniques for network inference, with import in theory and practice.
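
The classical omnibus construction behind this line of work can be sketched directly: place the adjacency matrices on the diagonal of a block matrix, their pairwise averages off the diagonal, and take a scaled spectral embedding. The sketch below is a minimal illustration, assuming a toy random dot product graph model; the node count, dimension, and latent-position distribution are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two graphs drawn from the same random dot product model, so their joint
# embedding should place corresponding nodes close together.
n, d = 60, 2
X = rng.uniform(0.2, 0.8, size=(n, d)) / np.sqrt(d)   # latent positions
P = np.clip(X @ X.T, 0.0, 1.0)                        # edge probabilities

def sample_graph(P):
    U = rng.uniform(size=P.shape)
    A = (np.triu(U, 1) < np.triu(P, 1)).astype(float)
    return A + A.T                                    # symmetric, hollow

A1, A2 = sample_graph(P), sample_graph(P)

# Omnibus matrix: graphs on the diagonal, pairwise averages off the
# diagonal; embed with the top-d scaled eigenvectors (spectral embedding).
avg = (A1 + A2) / 2.0
M = np.block([[A1, avg], [avg, A2]])

vals, vecs = np.linalg.eigh(M)
top = np.argsort(vals)[::-1][:d]
Z = vecs[:, top] * np.sqrt(np.abs(vals[top]))   # (2n, d) joint embedding

Z1, Z2 = Z[:n], Z[n:]   # aligned per-graph embeddings of the same nodes
```

The off-diagonal averaging is exactly what induces correlation between `Z1` and `Z2`, which is the paper's point of departure: its generalized construction varies those off-diagonal blocks to control induced correlation rather than fixing them at the pairwise mean.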

Journal of Machine Learning Research, vol. 23, 2022. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10465120/pdf/
Citations: 0
Generalized Sparse Additive Models.
IF 6 · CAS Tier 3, Computer Science · Q1, Automation & Control Systems · Pub Date: 2022-01-01
Asad Haris, Noah Simon, Ali Shojaie

We present a unified framework for estimation and analysis of generalized additive models in high dimensions. The framework defines a large class of penalized regression estimators, encompassing many existing methods. An efficient computational algorithm for this class is presented that easily scales to thousands of observations and features. We prove minimax optimal convergence bounds for this class under a weak compatibility condition. In addition, we characterize the rate of convergence when this compatibility condition is not met. Finally, we also show that the optimal penalty parameters for structure and sparsity penalties in our framework are linked, allowing cross-validation to be conducted over only a single tuning parameter. We complement our theoretical results with empirical studies comparing some existing methods within this framework.
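
A minimal member of this estimator class can be sketched with a proximal gradient loop: expand each feature in a basis and apply a group-lasso penalty so that entire component functions are zeroed out. The sketch assumes a polynomial basis and hand-picked `lam`/`step` values, which stand in for the spline bases and tuned penalties the framework actually covers.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: only the first of five features matters, through sin(2x).
n, p, k = 200, 5, 4                       # samples, features, basis size
X = rng.uniform(-1, 1, size=(n, p))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(n)

# Polynomial basis per feature (a stand-in for penalized spline bases).
B = [np.column_stack([X[:, j] ** m for m in range(1, k + 1)])
     for j in range(p)]

# Proximal gradient with a group-lasso penalty lam * sum_j ||beta_j||_2,
# which zeroes out entire component functions at once.
beta = [np.zeros(k) for _ in range(p)]
lam, step = 0.15, 0.05

def fitted():
    return sum(Bj @ bj for Bj, bj in zip(B, beta))

for _ in range(500):
    r = fitted() - y                      # residual for the smooth-loss grad
    for j in range(p):
        b = beta[j] - step * (B[j].T @ r) / n
        nrm = np.linalg.norm(b)
        beta[j] = max(0.0, 1 - step * lam / (nrm + 1e-12)) * b

active = [j for j in range(p) if np.linalg.norm(beta[j]) > 1e-6]
```

On this toy problem the group soft-thresholding keeps only the first component active; the paper's linked structure/sparsity penalties would add a second, smoothness-type term to the same proximal step, tuned jointly through a single parameter.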

Journal of Machine Learning Research, vol. 23, 2022. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10593424/pdf/
Citations: 0
Estimation and inference on high-dimensional individualized treatment rule in observational data using split-and-pooled de-correlated score.
IF 6 · CAS Tier 3, Computer Science · Q1, Automation & Control Systems · Pub Date: 2022-01-01
Muxuan Liang, Young-Geun Choi, Yang Ning, Maureen A Smith, Ying-Qi Zhao

With the increasing adoption of electronic health records, there is an increasing interest in developing individualized treatment rules, which recommend treatments according to patients' characteristics, from large observational data. However, there is a lack of valid inference procedures for such rules developed from this type of data in the presence of high-dimensional covariates. In this work, we develop a penalized doubly robust method to estimate the optimal individualized treatment rule from high-dimensional data. We propose a split-and-pooled de-correlated score to construct hypothesis tests and confidence intervals. Our proposal adopts the data splitting to conquer the slow convergence rate of nuisance parameter estimations, such as non-parametric methods for outcome regression or propensity models. We establish the limiting distributions of the split-and-pooled de-correlated score test and the corresponding one-step estimator in high-dimensional setting. Simulation and real data analysis are conducted to demonstrate the superiority of the proposed method.
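
The split-and-pool idea, stripped to its essentials, is cross-fitting: nuisance models (outcome regression and propensity) are fit on one fold, doubly robust scores are evaluated on the other, and the scores are pooled for inference. The sketch below illustrates this with low-dimensional parametric nuisances and a simulated average treatment effect; it is not the paper's high-dimensional, penalized, ITR-specific procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated observational data: confounded treatment, linear outcome model.
n, p = 800, 3
X = rng.standard_normal((n, p))
ps = 1 / (1 + np.exp(-0.5 * X[:, 0]))            # true propensity
A = (rng.uniform(size=n) < ps).astype(float)
tau = 1.0                                        # true treatment effect
y = tau * A + X @ np.array([1.0, -0.5, 0.25]) + rng.standard_normal(n)

def fit_outcome(Xa, Aa, ya):                     # linear outcome regression
    Z = np.column_stack([np.ones(len(ya)), Aa, Xa])
    b, *_ = np.linalg.lstsq(Z, ya, rcond=None)
    return b

def fit_propensity(Xa, Aa, iters=500, lr=0.1):   # logistic fit by gradient ascent
    Z = np.column_stack([np.ones(len(Aa)), Xa])
    w = np.zeros(Z.shape[1])
    for _ in range(iters):
        w += lr * Z.T @ (Aa - 1 / (1 + np.exp(-Z @ w))) / len(Aa)
    return w

# Split-and-pool: nuisances on one fold, doubly robust scores on the other.
folds = np.array_split(rng.permutation(n), 2)
scores = []
for k in range(2):
    tr, te = folds[1 - k], folds[k]
    b = fit_outcome(X[tr], A[tr], y[tr])
    w = fit_propensity(X[tr], A[tr])
    e = 1 / (1 + np.exp(-np.column_stack([np.ones(len(te)), X[te]]) @ w))

    def mu(a):                                   # predicted outcome under A = a
        Za = np.column_stack([np.ones(len(te)), np.full(len(te), a), X[te]])
        return Za @ b

    s = (mu(1) - mu(0)
         + A[te] * (y[te] - mu(1)) / e
         - (1 - A[te]) * (y[te] - mu(0)) / (1 - e))
    scores.append(s)

scores = np.concatenate(scores)                  # pooled over both splits
tau_hat = scores.mean()
se = scores.std(ddof=1) / np.sqrt(len(scores))
ci = (tau_hat - 1.96 * se, tau_hat + 1.96 * se)
```

The pooled scores behave like i.i.d. influence-function values, which is what licenses the normal confidence interval even though the nuisances were estimated; the paper extends this logic to de-correlated scores for a penalized treatment-rule parameter.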

Journal of Machine Learning Research, vol. 23, 2022. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10720606/pdf/
Citations: 0
Prior Adaptive Semi-supervised Learning with Application to EHR Phenotyping.
IF 6 · CAS Tier 3, Computer Science · Q1, Automation & Control Systems · Pub Date: 2022-01-01
Yichi Zhang, Molei Liu, Matey Neykov, Tianxi Cai

Electronic Health Record (EHR) data, a rich source for biomedical research, have been successfully used to gain novel insight into a wide range of diseases. Despite its potential, EHR is currently underutilized for discovery research due to its major limitation in the lack of precise phenotype information. To overcome such difficulties, recent efforts have been devoted to developing supervised algorithms to accurately predict phenotypes based on relatively small training datasets with gold standard labels extracted via chart review. However, supervised methods typically require a sizable training set to yield generalizable algorithms, especially when the number of candidate features, p, is large. In this paper, we propose a semi-supervised (SS) EHR phenotyping method that borrows information from both a small, labeled dataset (where both the label Y and the feature set X are observed) and a much larger, weakly-labeled dataset in which the feature set X is accompanied only by a surrogate label S that is available to all patients. Under a working prior assumption that S is related to X only through Y and allowing it to hold approximately, we propose a prior adaptive semi-supervised (PASS) estimator that incorporates the prior knowledge by shrinking the estimator towards a direction derived under the prior. We derive asymptotic theory for the proposed estimator and justify its efficiency and robustness to prior information of poor quality. We also demonstrate its superiority over existing estimators under various scenarios via simulation studies and on three real-world EHR phenotyping studies at a large tertiary hospital.
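
The shrink-toward-a-prior-direction idea can be illustrated very simply: fit one classifier on the small labeled set, another on the large surrogate-labeled set, and blend them with a weight chosen on held-out labeled data. This is a heavily simplified stand-in for the PASS estimator (which uses an adaptive prior, not a grid search), with all sample sizes and the 85% surrogate accuracy invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(4)

# Phenotype Y follows a logistic-style model; surrogate S (think: a billing
# code) agrees with Y 85% of the time and is observed for every patient.
p, n_lab, n_unlab = 10, 80, 2000
beta_true = np.zeros(p)
beta_true[:3] = [1.0, -1.0, 0.5]

def make(n):
    X = rng.standard_normal((n, p))
    y = (X @ beta_true + 0.5 * rng.standard_normal(n) > 0).astype(float)
    s = np.where(rng.uniform(size=n) < 0.85, y, 1 - y)
    return X, y, s

Xl, yl, _ = make(n_lab)       # small labeled set: Y observed
Xu, _, su = make(n_unlab)     # large set: only the surrogate S observed

def logistic_fit(X, y, lam=1e-2, iters=2000, lr=0.5):
    w = np.zeros(X.shape[1])
    for _ in range(iters):          # ridge-penalized gradient ascent
        w += lr * (X.T @ (y - 1 / (1 + np.exp(-X @ w))) / len(y) - lam * w)
    return w

w_sup = logistic_fit(Xl[:60], yl[:60])   # supervised fit, high variance
w_prior = logistic_fit(Xu, su)           # surrogate-trained prior direction

# Shrink the supervised estimator toward the prior direction, choosing the
# shrinkage weight on held-out labeled data.
def loglik(w, X, y):
    eta = X @ w
    return np.mean(y * eta - np.log1p(np.exp(eta)))

grid = np.linspace(0.0, 1.0, 21)
w_pass = max(((1 - a) * w_sup + a * w_prior for a in grid),
             key=lambda w: loglik(w, Xl[60:], yl[60:]))
```

Because the surrogate is informative here, the blended estimator's direction is at least as well aligned with the truth as the worse of its two ingredients; the paper's theory makes this robustness precise even when the prior assumption (S depends on X only through Y) holds only approximately.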

Journal of Machine Learning Research, vol. 23, 2022. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10653017/pdf/
Citations: 0
Rethinking Nonlinear Instrumental Variable Models through Prediction Validity.
IF 4.3 · CAS Tier 3, Computer Science · Q1, Automation & Control Systems · Pub Date: 2022-01-01
Chunxiao Li, Cynthia Rudin, Tyler H McCormick

Instrumental variables (IV) are widely used in the social and health sciences in situations where a researcher would like to measure a causal effect but cannot perform an experiment. For valid causal inference in an IV model, there must be external (exogenous) variation that (i) has a sufficiently large impact on the variable of interest (called the relevance assumption) and where (ii) the only pathway through which the external variation impacts the outcome is via the variable of interest (called the exclusion restriction). For statistical inference, researchers must also make assumptions about the functional form of the relationship between the three variables. Current practice assumes (i) and (ii) are met, then postulates a functional form with limited input from the data. In this paper, we describe a framework that leverages machine learning to validate these typically unchecked but consequential assumptions in the IV framework, providing the researcher empirical evidence about the quality of the instrument given the data at hand. Central to the proposed approach is the idea of prediction validity. Prediction validity checks that error terms - which should be independent from the instrument - cannot be modeled with machine learning any better than a model that is identically zero. We use prediction validity to develop both one-stage and two-stage approaches for IV, and demonstrate their performance on an example relevant to climate change policy.
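
A prediction-validity check can be sketched with two-stage least squares plus a held-out comparison: if the IV residuals can be predicted from the instrument better than the identically-zero model, the exclusion restriction or functional form is suspect. The sketch below uses a simulated linear IV model and a polynomial-in-z predictor as the "machine learning" stand-in; the 0.95 tolerance is an invented threshold, not the paper's test.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated IV setup: instrument z, unobserved confounder u, treatment x.
n = 1000
z = rng.standard_normal(n)
u = rng.standard_normal(n)
x = 0.8 * z + u + 0.3 * rng.standard_normal(n)
y = 1.5 * x + u + 0.3 * rng.standard_normal(n)

# Two-stage least squares: regress x on z, then y on the fitted x.
x_hat = z * (z @ x) / (z @ z)
beta_iv = (x_hat @ y) / (x_hat @ x_hat)
resid = y - beta_iv * x

# Prediction-validity check (simplified): the residuals should not be
# predictable from the instrument any better than the zero model. Compare
# held-out MSE of a polynomial fit in z against the zero model's MSE.
half = n // 2
Z_tr = np.column_stack([z[:half] ** m for m in range(1, 4)])
Z_te = np.column_stack([z[half:] ** m for m in range(1, 4)])
coef, *_ = np.linalg.lstsq(Z_tr, resid[:half], rcond=None)
mse_model = np.mean((resid[half:] - Z_te @ coef) ** 2)
mse_zero = np.mean(resid[half:] ** 2)
valid = mse_model >= 0.95 * mse_zero      # no real predictive gain over zero
```

Here the instrument is valid by construction, so the polynomial model gains nothing over the zero model; rerunning the check after adding a direct `z` term to `y` (violating the exclusion restriction) makes `valid` flip, which is the behavior the framework exploits as empirical evidence about instrument quality.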

Journal of Machine Learning Research, vol. 23, 2022. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11539950/pdf/
Citations: 0
Bayesian Covariate-Dependent Gaussian Graphical Models with Varying Structure.
IF 6 · CAS Tier 3, Computer Science · Q1, Automation & Control Systems · Pub Date: 2022-01-01
Yang Ni, Francesco C Stingo, Veerabhadran Baladandayuthapani

We introduce Bayesian Gaussian graphical models with covariates (GGMx), a class of multivariate Gaussian distributions with covariate-dependent sparse precision matrix. We propose a general construction of a functional mapping from the covariate space to the cone of sparse positive definite matrices, which encompasses many existing graphical models for heterogeneous settings. Our methodology is based on a novel mixture prior for precision matrices with a non-local component that admits attractive theoretical and empirical properties. The flexible formulation of GGMx allows both the strength and the sparsity pattern of the precision matrix (hence the graph structure) change with the covariates. Posterior inference is carried out with a carefully designed Markov chain Monte Carlo algorithm, which ensures the positive definiteness of sparse precision matrices at any given covariates' values. Extensive simulations and a case study in cancer genomics demonstrate the utility of the proposed model.
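
The key object, a covariate-to-precision-matrix map where both edge strength and edge presence change with the covariate, can be illustrated in a few lines. This is a hand-built toy map (one edge, diagonal dominance for positive definiteness), not the paper's mixture-prior construction or its MCMC; the `tanh` form and all constants are invented for the example.

```python
import numpy as np

# Toy functional map from a scalar covariate c to a sparse precision matrix:
# the (0,1) edge strengthens with c and switches off entirely for c <= 0,
# so both the strength and the sparsity pattern vary with the covariate.
def precision(c, p=4):
    omega = np.eye(p) * 2.0
    w = 1.2 * np.tanh(c) if c > 0 else 0.0
    omega[0, 1] = omega[1, 0] = -w       # |w| < 2 keeps omega diagonally
    return omega                         # dominant, hence positive definite

for c in (-1.0, 0.5, 2.0):
    assert np.all(np.linalg.eigvalsh(precision(c)) > 0)   # PD at every c

# Partial correlation between variables 0 and 1 implied at covariate c.
def partial_corr(om):
    return -om[0, 1] / np.sqrt(om[0, 0] * om[1, 1])
```

The positive-definiteness check at every covariate value mirrors the constraint the paper's MCMC enforces: the sampled map must land in the cone of sparse positive definite matrices for all covariates, not just on average.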

Journal of Machine Learning Research, 23(242), 2022. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10552903/pdf/
Citations: 0
Non-asymptotic Properties of Individualized Treatment Rules from Sequentially Rule-Adaptive Trials.
IF 6 · CAS Tier 3, Computer Science · Q1, Automation & Control Systems · Pub Date: 2022-01-01
Daiqi Gao, Yufeng Liu, Donglin Zeng

Learning optimal individualized treatment rules (ITRs) has become increasingly important in the modern era of precision medicine. Many statistical and machine learning methods for learning optimal ITRs have been developed in the literature. However, most existing methods are based on data collected from traditional randomized controlled trials and thus cannot take advantage of the accumulative evidence when patients enter the trials sequentially. It is also ethically important that future patients should have a high probability to be treated optimally based on the updated knowledge so far. In this work, we propose a new design called sequentially rule-adaptive trials to learn optimal ITRs based on the contextual bandit framework, in contrast to the response-adaptive design in traditional adaptive trials. In our design, each entering patient will be allocated with a high probability to the current best treatment for this patient, which is estimated using the past data based on some machine learning algorithm (for example, outcome weighted learning in our implementation). We explore the tradeoff between training and test values of the estimated ITR in single-stage problems by proving theoretically that for a higher probability of following the estimated ITR, the training value converges to the optimal value at a faster rate, while the test value converges at a slower rate. This problem is different from traditional decision problems in the sense that the training data are generated sequentially and are dependent. We also develop a tool that combines martingale with empirical process to tackle the problem that cannot be solved by previous techniques for i.i.d. data. We show by numerical examples that without much loss of the test value, our proposed algorithm can improve the training value significantly as compared to existing methods. Finally, we use a real data study to illustrate the performance of the proposed method.
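
The allocation scheme can be sketched as a contextual bandit with decaying exploration: each arriving patient is given the currently estimated best treatment with high probability, and the rule is refit as data accumulate. The sketch uses per-arm least squares on a one-dimensional covariate with an epsilon-greedy schedule; the outcome model, horizon, and exploration rate are all toy choices, and the paper's design uses outcome weighted learning rather than this regression.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy sequential trial: the treatment effect is a * x, so the optimal
# individualized rule is "treat (a = 1) iff x > 0".
def outcome(x, a):
    return a * x + 0.5 * rng.standard_normal()

T, eps0 = 2000, 0.5
data = {0: [], 1: []}        # (x, y) pairs accumulated per arm
coef = {0: 0.0, 1: 0.0}      # fitted slope of E[y | x, a] per arm

for t in range(1, T + 1):
    x = rng.uniform(-1, 1)   # entering patient's covariate
    best = 1 if coef[1] * x > coef[0] * x else 0
    eps = eps0 / np.sqrt(t)                  # exploration decays over time
    a = best if rng.uniform() > eps else 1 - best
    data[a].append((x, outcome(x, a)))
    xs, ys = map(np.array, zip(*data[a]))
    coef[a] = (xs @ ys) / (xs @ xs)          # least-squares slope, no intercept

# Learned rule: treat when the predicted benefit (coef[1] - coef[0]) * x > 0.
def rule(x):
    return int((coef[1] - coef[0]) * x > 0)
```

The tension the paper analyzes is visible here: shrinking `eps` faster raises the in-trial (training) value, because more patients get the estimated-best arm, but slows how quickly `coef` converges, which is what governs the rule's test value on future patients; the sequentially generated, dependent `data` is also why the paper needs martingale rather than i.i.d. tools.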

{"title":"Non-asymptotic Properties of Individualized Treatment Rules from Sequentially Rule-Adaptive Trials.","authors":"Daiqi Gao,&nbsp;Yufeng Liu,&nbsp;Donglin Zeng","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Learning optimal individualized treatment rules (ITRs) has become increasingly important in the modern era of precision medicine. Many statistical and machine learning methods for learning optimal ITRs have been developed in the literature. However, most existing methods are based on data collected from traditional randomized controlled trials and thus cannot take advantage of the accumulative evidence when patients enter the trials sequentially. It is also ethically important that future patients should have a high probability to be treated optimally based on the updated knowledge so far. In this work, we propose a new design called sequentially rule-adaptive trials to learn optimal ITRs based on the contextual bandit framework, in contrast to the response-adaptive design in traditional adaptive trials. In our design, each entering patient will be allocated with a high probability to the current best treatment for this patient, which is estimated using the past data based on some machine learning algorithm (for example, outcome weighted learning in our implementation). We explore the tradeoff between training and test values of the estimated ITR in single-stage problems by proving theoretically that for a higher probability of following the estimated ITR, the training value converges to the optimal value at a faster rate, while the test value converges at a slower rate. This problem is different from traditional decision problems in the sense that the training data are generated sequentially and are dependent. We also develop a tool that combines martingale with empirical process to tackle the problem that cannot be solved by previous techniques for i.i.d. data. 
We show by numerical examples that without much loss of the test value, our proposed algorithm can improve the training value significantly as compared to existing methods. Finally, we use a real data study to illustrate the performance of the proposed method.</p>","PeriodicalId":50161,"journal":{"name":"Journal of Machine Learning Research","volume":"23 250","pages":""},"PeriodicalIF":6.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10419117/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10008225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
D-GCCA: Decomposition-based Generalized Canonical Correlation Analysis for Multi-view High-dimensional Data.
IF 4.3 · Computer Science Tier 3 · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2022-01-01
Hai Shu, Zhe Qu, Hongtu Zhu

Modern biomedical studies often collect multi-view data, that is, multiple types of data measured on the same set of objects. A popular model in high-dimensional multi-view data analysis is to decompose each view's data matrix into a low-rank common-source matrix generated by latent factors common across all data views, a low-rank distinctive-source matrix corresponding to each view, and an additive noise matrix. We propose a novel decomposition method for this model, called decomposition-based generalized canonical correlation analysis (D-GCCA). The D-GCCA rigorously defines the decomposition on the L 2 space of random variables in contrast to the Euclidean dot product space used by most existing methods, thereby being able to provide the estimation consistency for the low-rank matrix recovery. Moreover, to well calibrate common latent factors, we impose a desirable orthogonality constraint on distinctive latent factors. Existing methods, however, inadequately consider such orthogonality and may thus suffer from substantial loss of undetected common-source variation. Our D-GCCA takes one step further than generalized canonical correlation analysis by separating common and distinctive components among canonical variables, while enjoying an appealing interpretation from the perspective of principal component analysis. Furthermore, we propose to use the variable-level proportion of signal variance explained by common or distinctive latent factors for selecting the variables most influenced. Consistent estimators of our D-GCCA method are established with good finite-sample numerical performance, and have closed-form expressions leading to efficient computation especially for large-scale data. The superiority of D-GCCA over state-of-the-art methods is also corroborated in simulations and real-world data examples.
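The common/distinctive decomposition can be illustrated with a rough two-view sketch: estimate each view's low-rank signal scores by truncated SVD, take a shared subspace from the stacked scores, and split each view into its projection onto that subspace (common) and the remainder (distinctive). This is a simplified stand-in for D-GCCA, not the paper's estimator — `split_common_distinctive` is a hypothetical name:

```python
import numpy as np

def split_common_distinctive(X1, X2, rank=2):
    """Two-view sketch: low-rank scores via truncated SVD, a shared
    subspace from the SVD of the stacked scores, then each view is
    split into common (projected) and distinctive (residual) parts."""
    def scores(X, r):
        U, s, _ = np.linalg.svd(X - X.mean(0), full_matrices=False)
        return U[:, :r] * s[:r]
    F1, F2 = scores(X1, rank), scores(X2, rank)
    # Leading directions of the concatenated score matrices.
    U, _, _ = np.linalg.svd(np.hstack([F1, F2]), full_matrices=False)
    P = U[:, :rank] @ U[:, :rank].T  # projector onto the shared subspace
    common1, common2 = P @ X1, P @ X2
    return common1, X1 - common1, common2, X2 - common2
```

By construction the common and distinctive parts of each view sum back to the original data matrix; D-GCCA additionally works on the L2 space of random variables and enforces the orthogonality constraint discussed above, which this sketch does not attempt.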

{"title":"D-GCCA: Decomposition-based Generalized Canonical Correlation Analysis for Multi-view High-dimensional Data.","authors":"Hai Shu, Zhe Qu, Hongtu Zhu","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Modern biomedical studies often collect multi-view data, that is, multiple types of data measured on the same set of objects. A popular model in high-dimensional multi-view data analysis is to decompose each view's data matrix into a low-rank common-source matrix generated by latent factors common across all data views, a low-rank distinctive-source matrix corresponding to each view, and an additive noise matrix. We propose a novel decomposition method for this model, called decomposition-based generalized canonical correlation analysis (D-GCCA). The D-GCCA rigorously defines the decomposition on the <math> <mrow><msup><mi>L</mi> <mn>2</mn></msup> </mrow> </math> space of random variables in contrast to the Euclidean dot product space used by most existing methods, thereby being able to provide the estimation consistency for the low-rank matrix recovery. Moreover, to well calibrate common latent factors, we impose a desirable orthogonality constraint on distinctive latent factors. Existing methods, however, inadequately consider such orthogonality and may thus suffer from substantial loss of undetected common-source variation. Our D-GCCA takes one step further than generalized canonical correlation analysis by separating common and distinctive components among canonical variables, while enjoying an appealing interpretation from the perspective of principal component analysis. Furthermore, we propose to use the variable-level proportion of signal variance explained by common or distinctive latent factors for selecting the variables most influenced. Consistent estimators of our D-GCCA method are established with good finite-sample numerical performance, and have closed-form expressions leading to efficient computation especially for large-scale data. 
The superiority of D-GCCA over state-of-the-art methods is also corroborated in simulations and real-world data examples.</p>","PeriodicalId":50161,"journal":{"name":"Journal of Machine Learning Research","volume":"23 ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9380864/pdf/nihms-1815754.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10468609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Interpretable Classification of Categorical Time Series Using the Spectral Envelope and Optimal Scalings.
IF 6 · Computer Science Tier 3 · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2022-01-01
Zeda Li, Scott A Bruce, Tian Cai

This article introduces a novel approach to the classification of categorical time series under the supervised learning paradigm. To construct meaningful features for categorical time series classification, we consider two relevant quantities: the spectral envelope and its corresponding set of optimal scalings. These quantities characterize oscillatory patterns in a categorical time series as the largest possible power at each frequency, or spectral envelope, obtained by assigning numerical values, or scalings, to categories that optimally emphasize oscillations at each frequency. Our procedure combines these two quantities to produce an interpretable and parsimonious feature-based classifier that can be used to accurately determine group membership for categorical time series. Classification consistency of the proposed method is investigated, and simulation studies are used to demonstrate accuracy in classifying categorical time series with various underlying group structures. Finally, we use the proposed method to explore key differences in oscillatory patterns of sleep stage time series for patients with different sleep disorders and accurately classify patients accordingly. The code for implementing the proposed method is available at https://github.com/zedali16/envsca.
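The spectral envelope idea — at each frequency, find numeric scalings of the categories that maximize spectral power — can be sketched with dummy coding and a raw periodogram. This is a minimal illustration, not the authors' released code (which is at the URL above); `spectral_envelope` is a hypothetical name, and a production version would smooth the periodogram and guard against a singular indicator covariance:

```python
import numpy as np

def spectral_envelope(series, categories):
    """Dummy-code the categorical series (dropping the last category),
    form the raw periodogram matrix at each Fourier frequency, and take
    the largest generalized eigenvalue of f(w) b = lam * V b."""
    Y = np.array([[c == cat for cat in categories[:-1]] for c in series],
                 dtype=float)
    Y -= Y.mean(0)
    n = Y.shape[0]
    V = Y.T @ Y / n                      # covariance of the indicators
    D = np.fft.rfft(Y, axis=0)
    env = []
    for j in range(1, D.shape[0]):       # skip the zero frequency
        f = np.outer(D[j], D[j].conj()).real / n  # periodogram matrix
        env.append(np.linalg.eigvals(np.linalg.solve(V, f)).real.max())
    return np.array(env)
```

On a strictly alternating two-category series, for example, the envelope peaks at the Nyquist frequency, matching the period-2 oscillation.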

{"title":"Interpretable Classification of Categorical Time Series Using the Spectral Envelope and Optimal Scalings.","authors":"Zeda Li,&nbsp;Scott A Bruce,&nbsp;Tian Cai","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>This article introduces a novel approach to the classification of categorical time series under the supervised learning paradigm. To construct meaningful features for categorical time series classification, we consider two relevant quantities: the spectral envelope and its corresponding set of optimal scalings. These quantities characterize oscillatory patterns in a categorical time series as the largest possible power at each frequency, or <i>spectral envelope</i>, obtained by assigning numerical values, or <i>scalings</i>, to categories that optimally emphasize oscillations at each frequency. Our procedure combines these two quantities to produce an interpretable and parsimonious feature-based classifier that can be used to accurately determine group membership for categorical time series. Classification consistency of the proposed method is investigated, and simulation studies are used to demonstrate accuracy in classifying categorical time series with various underlying group structures. Finally, we use the proposed method to explore key differences in oscillatory patterns of sleep stage time series for patients with different sleep disorders and accurately classify patients accordingly. 
The code for implementing the proposed method is available at https://github.com/zedali16/envsca.</p>","PeriodicalId":50161,"journal":{"name":"Journal of Machine Learning Research","volume":"23 299","pages":""},"PeriodicalIF":6.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10210597/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9529646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0