
Latest publications: International Journal of Biostatistics

Targeting the Optimal Design in Randomized Clinical Trials with Binary Outcomes and No Covariate: Theoretical Study
IF 1.2 · CAS Tier 4 (Mathematics) · Pub Date: 2010-02-18 · DOI: 10.2202/1557-4679.1247
A. Chambaz, M. J. van der Laan
This article is devoted to the asymptotic study of adaptive group sequential designs in the case of randomized clinical trials (RCTs) with binary treatment, binary outcome and no covariate. By adaptive design, we mean in this setting an RCT design that allows the investigator to dynamically modify its course through data-driven adjustment of the randomization probability based on data accrued so far, without negatively impacting the statistical integrity of the trial. By adaptive group sequential design, we refer to the fact that group sequential testing methods can be applied equally well on top of adaptive designs. We show that, theoretically, the adaptive design converges almost surely to the targeted unknown randomization scheme. In the estimation framework, we show that our maximum likelihood estimator of the parameter of interest is strongly consistent and satisfies a central limit theorem. We can estimate its asymptotic variance, which is the same as the variance it would feature had we known the targeted randomization scheme in advance and sampled independently from it. Consequently, inference can be carried out as if we had resorted to independent and identically distributed (iid) sampling. In the testing framework, we show that the multidimensional t-statistic that we would use under iid sampling still converges to the same canonical distribution under adaptive sampling. Consequently, the same group sequential testing can be carried out as if we had resorted to iid sampling. Furthermore, a comprehensive simulation study that we undertake in a companion article validates the theory. A three-sentence take-home message is: “Adaptive designs do learn the targeted optimal design, and inference and testing can be carried out under adaptive sampling as they would under iid sampling from the targeted optimal randomization probability. In particular, adaptive designs achieve the same efficiency as the fixed oracle design. This is confirmed by a simulation study, at least for moderate or large sample sizes, across a large collection of targeted randomization probabilities.”
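The data-driven adjustment of the randomization probability can be sketched as a toy simulation: after each patient, the arm success probabilities are re-estimated from accrued outcomes and the assignment probability is pushed toward a target allocation. The Neyman-allocation target, the add-one smoothing, and the [0.1, 0.9] bounds below are all illustrative assumptions, not the article's actual targeted design.

```python
import random

def adaptive_rct(n, p1, p0, seed=0):
    """Simulate a two-arm RCT with binary outcomes, adapting the
    randomization probability toward an estimated Neyman allocation
    (an illustrative target; the article's targeted scheme may differ)."""
    rng = random.Random(seed)
    succ = [0, 0]   # successes per arm (0 = control, 1 = treatment)
    tot = [0, 0]    # patients per arm
    g = 0.5         # current probability of assigning the treatment arm
    for _ in range(n):
        arm = 1 if rng.random() < g else 0
        y = 1 if rng.random() < (p1 if arm else p0) else 0
        succ[arm] += y
        tot[arm] += 1
        # re-estimate arm success probabilities with add-one smoothing
        # so early iterations never divide by zero
        ph = [(succ[k] + 1) / (tot[k] + 2) for k in (0, 1)]
        sd = [max(ph[k] * (1 - ph[k]), 1e-8) ** 0.5 for k in (0, 1)]
        g = sd[1] / (sd[0] + sd[1])      # estimated Neyman allocation
        g = min(max(g, 0.1), 0.9)        # keep assignment probs bounded
    return g, [succ[k] / max(tot[k], 1) for k in (0, 1)]

g, est = adaptive_rct(2000, p1=0.6, p0=0.3)
```

With 2,000 patients the assignment probability settles near the allocation implied by the (estimated) arm variances, mirroring the "design learns the target" message of the abstract.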
Citations: 22
Targeted Maximum Likelihood Based Causal Inference: Part I
IF 1.2 · CAS Tier 4 (Mathematics) · Pub Date: 2010-01-01 · DOI: 10.2202/1557-4679.1211
M. J. van der Laan
Given causal graph assumptions, intervention-specific counterfactual distributions of the data can be defined by the so-called G-computation formula, which is obtained by carrying out these interventions on the likelihood of the data factorized according to the causal graph. The G-computation formula represents the counterfactual distribution the data would have had, had this intervention been enforced on the system generating the data. A causal effect of interest can then be defined as some difference between these counterfactual distributions indexed by different interventions. For example, the interventions can represent static treatment regimens or individualized treatment rules that assign treatment in response to time-dependent covariates, and the causal effects could be defined in terms of features of the mean of the treatment-regimen-specific counterfactual outcome of interest as a function of the corresponding treatment regimens. Such features could be defined nonparametrically in terms of so-called (nonparametric) marginal structural models for static or individualized treatment rules, whose parameters can be thought of as (smooth) summary measures of differences between the treatment-regimen-specific counterfactual distributions. In this article, we develop a particular targeted maximum likelihood estimator of the causal effects of multiple-time-point interventions. This involves using loss-based super-learning to obtain an initial estimate of the unknown factors of the G-computation formula, then applying a target-parameter-specific optimal fluctuation function (least favorable parametric submodel) to each estimated factor, estimating the fluctuation parameter(s) by maximum likelihood, and iterating this updating step of the initial factors until convergence. This iterative targeted maximum likelihood updating step makes the resulting estimator of the causal effect doubly robust, in the sense that it is consistent if either the initial estimator or the estimator of the optimal fluctuation function is consistent. The optimal fluctuation function is correctly specified if the conditional distributions of the nodes in the causal graph one intervenes upon are correctly specified; the latter often comprise the so-called treatment and censoring mechanisms. Selection among different targeted maximum likelihood estimators (e.g., indexed by different initial estimators) can be based on loss-based cross-validation, such as likelihood-based cross-validation or cross-validation based on another appropriate loss function for the distribution of the data; some specific loss functions are mentioned in this article. Subsequently, a variety of interesting observations about this targeted maximum likelihood estimation procedure are made. This article provides the basis for the subsequent companion Part II article, in which concrete demonstrations of the implementation of the targeted maximum likelihood estimator in complex causal-effect estimation problems are provided.
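The fluctuation-and-update step can be illustrated on the simplest point-treatment case. The sketch below is a toy one-step TMLE for E[Y(1)] with a single binary confounder: a deliberately crude initial outcome estimate (the overall mean), an empirically estimated treatment mechanism, and a Newton solver for the one-dimensional fluctuation parameter. These are illustrative simplifications of the article's general multiple-time-point procedure, not its implementation.

```python
import math

def expit(x): return 1.0 / (1.0 + math.exp(-x))
def logit(p): return math.log(p / (1.0 - p))

def tmle_treated_mean(W, A, Y, n_iter=50):
    """Toy one-step TMLE for E[Y(1)] with a binary confounder W.
    The fluctuation step with clever covariate H(A, W) = I(A=1)/g(W)
    removes the bias left by the crude initial estimate."""
    n = len(Y)
    # treatment mechanism g(w) = P(A=1 | W=w), estimated empirically
    g = {w: sum(a for a, wi in zip(A, W) if wi == w)
            / sum(1 for wi in W if wi == w) for w in set(W)}
    Q = [sum(Y) / n] * n                       # crude initial estimate
    H = [(1.0 / g[w]) if a == 1 else 0.0 for a, w in zip(A, W)]
    eps = 0.0
    for _ in range(n_iter):                    # Newton steps solving the
        num = den = 0.0                        # fluctuation score equation
        for q, h, y in zip(Q, H, Y):
            p = expit(logit(q) + eps * h)
            num += h * (y - p)
            den += h * h * p * (1.0 - p)
        if den == 0.0:
            break
        eps += num / den
    # targeted estimate: evaluate the updated fit with A set to 1
    return sum(expit(logit(q) + eps / g[w]) for q, w in zip(Q, W)) / n
```

On stratified data where the true treated mean is computable by hand, the targeted update recovers it even though the initial estimate ignores both A and W.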
Citations: 93
Modeling Cumulative Incidences of Dementia and Dementia-Free Death Using a Novel Three-Parameter Logistic Function
IF 1.2 · CAS Tier 4 (Mathematics) · Pub Date: 2009-11-10 · DOI: 10.2202/1557-4679.1183
Y. Cheng
Parametric modeling of univariate cumulative incidence functions and logistic models have been studied extensively. However, to the best of our knowledge, there is no study using logistic models to characterize cumulative incidence functions. In this paper, we propose a novel parametric model which is an extension of a widely-used four-parameter logistic function for dose-response curves. The modified model can accommodate various shapes of cumulative incidence functions and be easily implemented using standard statistical software. The simulation studies demonstrate that the proposed model is as efficient as or more efficient than its nonparametric counterpart when it is correctly specified, and outperforms the existing Gompertz model when the underlying cumulative incidence function is sigmoidal. The practical utility of the modified three-parameter logistic model is illustrated using the data from the Cache County Study of dementia.
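The abstract does not reproduce the model's formula, but one plausible reading of a three-parameter logistic cumulative incidence curve (plateau, midpoint, scale) can be sketched as follows; the parametrization is an assumption for illustration, not the paper's exact form.

```python
import math

def logistic_cif(t, p, m, s):
    """A plausible three-parameter logistic cumulative incidence curve:
    plateau p in (0, 1], midpoint m, scale s > 0.  The bounded plateau
    makes it a sub-distribution function, as needed for a competing
    risk such as dementia vs. dementia-free death."""
    return p / (1.0 + math.exp(-(t - m) / s))

# sigmoidal shape: rises from near 0, passes p/2 at t = m, levels at p
curve = [logistic_cif(t, 0.6, 10.0, 2.0) for t in range(0, 41, 5)]
```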
Citations: 18
Using Generalized Additive Models to Detect and Estimate Threshold Associations
IF 1.2 · CAS Tier 4 (Mathematics) · Pub Date: 2009-09-16 · DOI: 10.2202/1557-4679.1172
A. Benedetti, M. Abrahamowicz, K. Leffondré, M. Goldberg, R. Tamblyn
In a variety of research settings, investigators may wish to detect and estimate a threshold in the association between continuous variables. A threshold model implies a non-linear relationship, with the slope changing at an unknown location. Generalized additive models (GAMs) (Hastie and Tibshirani, 1990) estimate the shape of the non-linear relationship directly from the data and, thus, may be useful in this endeavour. We propose a method based on GAMs to detect and estimate thresholds in the association between a continuous covariate and a continuous dependent variable. Using simulations, we compare it with the maximum likelihood estimation procedure proposed by Hudson (1966). We search for potential thresholds in a neighbourhood of points whose mean numerical second derivative (a measure of local curvature) of the estimated GAM curve was more than one standard deviation away from 0 across the entire range of the predictor values. A threshold association is declared if an F-test indicates that the threshold model fit significantly better than the linear model. For each method, type I error for testing the existence of a threshold against the null hypothesis of a linear association was estimated. We also investigated the impact of the position of the true threshold on power, and on the precision and bias of the estimated threshold. Finally, we illustrate the methods by considering whether a threshold exists in the association between systolic blood pressure (SBP) and body mass index (BMI) in two data sets.
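The detect-then-test logic can be mimicked in a self-contained way: search over candidate thresholds for the best broken-stick (hinge) fit, then compare it to the straight-line fit with an F statistic. The direct grid search below stands in for the GAM curvature screening, and the degrees-of-freedom convention (2 extra parameters for the threshold model) is an illustrative choice.

```python
def solve(A, b):
    """Solve a small linear system by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[col][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols_rss(X, y):
    """Residual sum of squares of an OLS fit via the normal equations."""
    p = len(X[0])
    XtX = [[sum(r[a] * r[b] for r in X) for b in range(p)] for a in range(p)]
    Xty = [sum(r[a] * yi for r, yi in zip(X, y)) for a in range(p)]
    beta = solve(XtX, Xty)
    return sum((yi - sum(bj * xj for bj, xj in zip(beta, r))) ** 2
               for r, yi in zip(X, y))

def threshold_f_stat(x, y):
    """Grid-search a broken-stick threshold c and compare the hinge
    model y ~ 1 + x + (x - c)_+ to the linear model with an F statistic
    (a simplified stand-in for the GAM-based screening in the paper)."""
    n = len(x)
    rss_lin = ols_rss([[1.0, xi] for xi in x], y)
    rss_thr, c_hat = min(
        (ols_rss([[1.0, xi, max(xi - c, 0.0)] for xi in x], y), c)
        for c in sorted(x)[2:-2]          # interior candidate thresholds
    )
    f = ((rss_lin - rss_thr) / 2) / (rss_thr / (n - 4))
    return f, c_hat
```

On data with a genuine slope change, the F statistic is large and the estimated threshold lands near the true change point; on linear data it stays small.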
Citations: 16
Consonance and the Closure Method in Multiple Testing
IF 1.2 · CAS Tier 4 (Mathematics) · Pub Date: 2009-09-01 · DOI: 10.2202/1557-4679.1300
Joseph P. Romano, A. Shaikh, Michael Wolf
Consider the problem of testing s null hypotheses simultaneously. In order to deal with the multiplicity problem, the classical approach is to restrict attention to multiple testing procedures that control the familywise error rate (FWE). The closure method of Marcus et al. (1976) reduces the problem of constructing such procedures to one of constructing single tests that control the usual probability of a Type 1 error. It was shown by Sonnemann (1982, 2008) that any coherent multiple testing procedure can be constructed using the closure method. Moreover, it was shown by Sonnemann and Finner (1988) that any incoherent multiple testing procedure can be replaced by a coherent multiple testing procedure which is at least as good. In this paper, we first show an analogous result for dissonant and consonant multiple testing procedures. We show further that, in many cases, the improvement of the consonant multiple testing procedure over the dissonant multiple testing procedure may in fact be strict in the sense that it has strictly greater probability of detecting a false null hypothesis while still maintaining control of the FWE. Finally, we show how consonance can be used in the construction of some optimal maximin multiple testing procedures. This last result is especially of interest because there are very few results on optimality in the multiple testing literature.
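The closure method itself is easy to state in code: reject an elementary hypothesis at familywise level alpha only if every intersection hypothesis containing it is rejected by a local test. The sketch below uses Bonferroni local tests, in which case closed testing reproduces Holm's step-down procedure; the paper's results concern general local tests.

```python
from itertools import combinations

def closed_testing(pvals, alpha=0.05):
    """Closure method: reject elementary hypothesis H_i at familywise
    level alpha iff every intersection hypothesis S containing i is
    rejected by its local test.  The local test here is Bonferroni:
    reject S iff min_{j in S} p_j <= alpha / |S|."""
    s = len(pvals)

    def local_reject(S):
        return min(pvals[j] for j in S) <= alpha / len(S)

    subsets = [S for r in range(1, s + 1)
               for S in combinations(range(s), r)]
    return [all(local_reject(S) for S in subsets if i in S)
            for i in range(s)]

decisions = closed_testing([0.001, 0.02, 0.4])
```

The enumeration of all 2^s - 1 intersections is exponential in s, which is exactly why shortcut procedures (and the consonance property studied here) matter in practice.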
Citations: 6
Inference in Epidemic Models without Likelihoods
IF 1.2 · CAS Tier 4 (Mathematics) · Pub Date: 2009-07-20 · DOI: 10.2202/1557-4679.1171
T. McKinley, A. Cook, R. Deardon
Likelihood-based inference for epidemic models can be challenging, in part due to difficulties in evaluating the likelihood. The problem is particularly acute in models of large-scale outbreaks, and unobserved or partially observed data further complicates this process. Here we investigate the performance of Markov Chain Monte Carlo and Sequential Monte Carlo algorithms for parameter inference, where the routines are based on approximate likelihoods generated from model simulations. We compare our results to a gold-standard data-augmented MCMC for both complete and incomplete data. We illustrate our techniques using simulated epidemics as well as data from a recent outbreak of Ebola Haemorrhagic Fever in the Democratic Republic of Congo and discuss situations in which we think simulation-based inference may be preferable to likelihood-based inference.
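A minimal likelihood-free inference loop of the kind the article studies can be sketched with approximate Bayesian computation (ABC) rejection sampling on a stochastic SIR model: draw parameters from the prior, simulate an epidemic, and keep draws whose summary statistic (here, the final size) is close to the observed one. The uniform prior, tolerance, and final-size summary are illustrative choices, and the article's algorithms are MCMC/SMC rather than plain rejection sampling.

```python
import random

def sir_final_size(n, beta, gamma, rng):
    """Stochastic SIR in a population of n: at each event, infection
    (rate beta*S*I/n) or recovery (rate gamma*I) occurs in proportion
    to its rate.  Returns the final number recovered (epidemic size)."""
    s, i, r = n - 1, 1, 0
    while i > 0:
        rate_inf = beta * s * i / n
        rate_rec = gamma * i
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            s, i = s - 1, i + 1
        else:
            i, r = i - 1, r + 1
    return r

def abc_rejection(obs_final, n, n_draws=2000, tol=5, seed=1):
    """ABC rejection sampler for the transmission rate beta: keep prior
    draws whose simulated final size is within `tol` of the observed
    one.  A toy stand-in for the article's MCMC/SMC routines."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_draws):
        beta = rng.uniform(0.5, 3.0)        # illustrative uniform prior
        if abs(sir_final_size(n, beta, 1.0, rng) - obs_final) <= tol:
            kept.append(beta)
    return kept
```

Accepted draws form an approximate posterior sample: with an observed final size of 80 out of 100 and gamma fixed at 1, the accepted betas concentrate around a reproduction number near 2.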
Citations: 188
A Non-Parametric Approach to Scale Reduction for Uni-Dimensional Screening Scales
IF 1.2 · CAS Tier 4 (Mathematics) · Pub Date: 2009-01-28 · DOI: 10.2202/1557-4679.1094
Xinhua Liu, Zhezhen Jin
To select items from a uni-dimensional scale to create a reduced scale for disease screening, Liu and Jin (2007) developed a non-parametric method based on binary risk classification. When the measure for the risk of a disease is ordinal or quantitative, and possibly subject to random censoring, this method is inefficient because it requires dichotomizing the risk measure, which may cause information loss and sample-size reduction. In this paper, we modify Harrell's C-index (1984) such that the concordance probability, used as a measure of the discrimination accuracy of a scale with integer-valued scores, can be estimated consistently when data are subject to random censoring. By evaluating changes in discrimination accuracy with the addition or deletion of items, we can select risk-related items without specifying parametric models. The procedure first removes the least useful items from the full scale, then applies forward stepwise selection to the remaining items to obtain a reduced scale whose discrimination accuracy matches or exceeds that of the full scale. A simulation study shows the procedure to have good finite-sample performance. We illustrate the method using a data set of patients at risk of developing Alzheimer's disease, who were administered a 40-item test of olfactory function before their semi-annual follow-up assessment.
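The evaluate-by-concordance idea can be sketched without the censoring machinery: score a candidate sub-scale by the concordance probability between its summed score and the risk measure, and greedily add the item that improves it most. The tie-handling convention and the greedy forward pass below are simplifications of the paper's stepwise procedure, and censoring is ignored entirely.

```python
def concordance(scores, risk):
    """Concordance probability (Harrell's C without censoring): among
    pairs with distinct risk, the fraction where the higher-risk
    subject has the higher score, ties in score counting 1/2."""
    conc = ties = usable = 0
    n = len(scores)
    for i in range(n):
        for j in range(i + 1, n):
            if risk[i] == risk[j]:
                continue
            usable += 1
            hi, lo = (i, j) if risk[i] > risk[j] else (j, i)
            if scores[hi] > scores[lo]:
                conc += 1
            elif scores[hi] == scores[lo]:
                ties += 1
    return (conc + 0.5 * ties) / usable if usable else 0.5

def forward_select(items, risk, k):
    """Greedy forward selection: repeatedly add the item whose inclusion
    maximizes the concordance of the summed sub-scale score."""
    n = len(risk)
    chosen, score = [], [0] * n
    for _ in range(k):
        _, idx = max(
            (concordance([s + it[i] for i, s in enumerate(score)], risk), j)
            for j, it in enumerate(items) if j not in chosen
        )
        chosen.append(idx)
        score = [s + items[idx][i] for i, s in enumerate(score)]
    return chosen
```

With one informative item, one noise item, and one reverse-coded item, the first forward step picks the informative one.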
Cited by: 7
Measuring Agreement about Ranked Decision Choices for a Single Subject
IF 1.2, Mathematics (Tier 4). Pub Date: 2009-01-01. DOI: 10.2202/1557-4679.1113
R. Riffenburgh, P. Johnstone
Introduction. When faced with a medical classification, clinicians often rank-order the likelihood of potential diagnoses, treatment choices, or prognoses as a way to focus on likely occurrences without dropping rarer ones from consideration. Knowing how well clinicians agree on such rankings might help extend the realm of clinical judgment farther into the purview of evidence-based medicine. If rankings by different clinicians agree better than chance, the order of assignments and their relative likelihoods may justifiably contribute to medical decisions. If the agreement is no better than chance, the ranking should not influence the medical decision. Background. Available rank-order methods measure agreement over a set of decision choices by two rankers, or by a set of rankers over two choices (rank correlation methods), or overall agreement over a set of choices by a set of rankers (Kendall's W), but they will not measure agreement about a single decision choice across a set of rankers. Rating methods (e.g. kappa) assign multiple subjects to nominal categories rather than ranking possible choices about a single subject, and likewise will not measure agreement about a single decision choice across a set of rankers. Method. In this article, we propose an agreement coefficient A for measuring agreement among a set of clinicians about a single decision choice and compare several potential forms of A. A takes the value 0 when agreement is random and 1 when agreement is perfect. It is shown that A = 1 - observed disagreement/maximum disagreement. A particular form of A is recommended, and tables of 5% and 10% significance values of A are generated for common numbers of ranks and rankers. Examples. In the selection of potential treatment assignments by a Tumor Board for a patient with a neck mass, there is no significant agreement about any treatment. Another example involves ranking decisions about a proposed medical research protocol by an Institutional Review Board (IRB). The decision to pass a protocol with minor revisions shows agreement at the 5% significance level, adequate for a consistent decision.
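The defining identity A = 1 - observed disagreement/maximum disagreement can be illustrated concretely. The sketch below measures disagreement by the sum of absolute pairwise differences among the ranks a single choice receives, with the maximum attained when rankers split evenly between the extreme ranks 1 and k; this particular disagreement measure is our illustrative assumption, not necessarily the specific form the article recommends.

```python
from itertools import combinations

def agreement_A(ranks, k):
    """Agreement coefficient A = 1 - observed/maximum disagreement for the
    ranks (each in 1..k) that a single decision choice receives from a set
    of rankers.  The pairwise-absolute-difference disagreement used here is
    an illustrative choice, not the article's recommended form."""
    m = len(ranks)
    observed = sum(abs(a - b) for a, b in combinations(ranks, 2))
    # maximum disagreement: rankers split as evenly as possible between 1 and k
    extreme = [1] * (m // 2) + [k] * (m - m // 2)
    maximum = sum(abs(a - b) for a, b in combinations(extreme, 2))
    return 1 - observed / maximum

print(agreement_A([2, 2, 2, 2], k=5))  # perfect agreement → 1.0
print(agreement_A([1, 1, 5, 5], k=5))  # maximal split → 0.0
```

Intermediate rank patterns yield values between 0 and 1; comparing the observed A against tabulated 5% or 10% significance values would then decide whether the clinicians' agreement is better than chance.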
R. Riffenburgh and P. Johnstone, "Measuring Agreement about Ranked Decision Choices for a Single Subject," International Journal of Biostatistics, published online 2009-01-01. DOI: 10.2202/1557-4679.1113
Cited by: 2
Empirical Efficiency Maximization: Improved Locally Efficient Covariate Adjustment in Randomized Experiments and Survival Analysis
IF 1.2, Mathematics (Tier 4). Pub Date: 2008-05-04. DOI: 10.2202/1557-4679.1084
D. Rubin, M. J. van der Laan
It has long been recognized that covariate adjustment can increase precision in randomized experiments, even when it is not strictly necessary. Adjustment is often straightforward when a discrete covariate partitions the sample into a handful of strata, but becomes more involved with even a single continuous covariate such as age. As randomized experiments remain a gold standard for scientific inquiry, and the information age facilitates a massive collection of baseline information, the longstanding problem of whether and how to adjust for covariates is likely to engage investigators for the foreseeable future. In the locally efficient estimation approach introduced for general coarsened data structures by James Robins and collaborators, one first fits a relatively small working model, often with maximum likelihood, giving a nuisance parameter fit in an estimating equation for the parameter of interest. The usual advertisement is that the estimator will be asymptotically efficient if the working model is correct, but otherwise will still be consistent and asymptotically Gaussian. However, by applying standard likelihood-based fits to misspecified working models in covariate adjustment problems, one can poorly estimate the parameter of interest. We propose a new method, empirical efficiency maximization, to optimize the working model fit for the resulting parameter estimate. In addition to the randomized experiment setting, we show how our covariate adjustment procedure can be used in survival analysis applications. Numerical asymptotic efficiency calculations demonstrate gains relative to standard locally efficient estimators.
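The precision gain from covariate adjustment in a randomized experiment can be seen in a small Monte Carlo sketch. Below, the unadjusted estimator is the plain difference in arm means, while the adjusted one first residualizes the outcome on a baseline covariate before differencing; this is ordinary ANCOVA-style adjustment under a made-up data-generating model, not the article's empirical efficiency maximization procedure.

```python
import random
import statistics

def trial_estimates(n=200, reps=400, seed=1):
    """Simulate an RCT with outcome Y = T + 2*W + noise, where T is the
    randomized arm and W a baseline covariate.  Returns the Monte Carlo
    standard deviations of the unadjusted and covariate-adjusted treatment
    effect estimators.  Illustrative sketch only."""
    rng = random.Random(seed)
    unadj, adj = [], []
    for _ in range(reps):
        T = [rng.randrange(2) for _ in range(n)]
        W = [rng.gauss(0, 1) for _ in range(n)]
        Y = [t + 2 * w + rng.gauss(0, 1) for t, w in zip(T, W)]
        # residualize Y on W with a simple least-squares slope
        wbar, ybar = statistics.mean(W), statistics.mean(Y)
        slope = (sum((w - wbar) * (y - ybar) for w, y in zip(W, Y))
                 / sum((w - wbar) ** 2 for w in W))
        R = [y - slope * w for w, y in zip(W, Y)]
        for estimates, values in ((unadj, Y), (adj, R)):
            m1 = statistics.mean([v for v, t in zip(values, T) if t == 1])
            m0 = statistics.mean([v for v, t in zip(values, T) if t == 0])
            estimates.append(m1 - m0)
    return statistics.stdev(unadj), statistics.stdev(adj)

sd_unadj, sd_adj = trial_estimates()
print(sd_adj < sd_unadj)  # → True: adjustment removes the variance due to W
```

Here the covariate explains most of the outcome variance, so the adjusted estimator is markedly tighter; the article's contribution is choosing the working-model fit to maximize exactly this kind of efficiency gain even when the working model is misspecified.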
D. Rubin and M. J. van der Laan, "Empirical Efficiency Maximization: Improved Locally Efficient Covariate Adjustment in Randomized Experiments and Survival Analysis," International Journal of Biostatistics, published online 2008-05-04. DOI: 10.2202/1557-4679.1084
Cited by: 101
Modeling the Effect of a Preventive Intervention on the Natural History of Cancer: Application to the Prostate Cancer Prevention Trial
IF 1.2, Mathematics (Tier 4). Pub Date: 2006-12-28. DOI: 10.2202/1557-4679.1036
P. Pinsky, Ruth Etzioni, N. Howlader, P. Goodman, I. Thompson
The Prostate Cancer Prevention Trial (PCPT) recently demonstrated a significant reduction in prostate cancer incidence of about 25% among men taking finasteride compared to men taking placebo. However, the effect of finasteride on the natural history of prostate cancer is not well understood. We adapted a convolution model developed by Pinsky (2001) to characterize the natural history of prostate cancer in the presence and absence of finasteride. The model was applied to data from 10,995 men in PCPT who had disease status determined by interim diagnosis of prostate cancer or end-of-study biopsy. Prostate cancer cases were either screen-detected by Prostate-Specific Antigen (PSA), biopsy-detected at the end of the study, or clinically detected, that is, detected by methods other than PSA screening. The hazard ratio (HR) for the incidence of preclinical disease on finasteride versus placebo was 0.42 (95% CI: 0.20-0.58). The progression from preclinical to clinical disease was relatively unaffected by finasteride, with mean sojourn time being 16 years for placebo cases and 18.5 years for finasteride cases (p-value for difference = 0.2). We conclude that finasteride appears to affect prostate cancer primarily by preventing the emergence of new, preclinical tumors with little impact on established, latent disease.
P. Pinsky, Ruth Etzioni, N. Howlader, P. Goodman, and I. Thompson, "Modeling the Effect of a Preventive Intervention on the Natural History of Cancer: Application to the Prostate Cancer Prevention Trial," International Journal of Biostatistics, published online 2006-12-28. DOI: 10.2202/1557-4679.1036
Cited by: 5