Biostatistics and Epidemiology — Latest Publications

A response adaptive design for ordinal categorical responses weighing the cumulative odds ratios
Q3 Medicine Pub Date: 2019-01-01 DOI: 10.1080/24709360.2019.1660111
A. Biswas, Rahul Bhattacharya, Soumyadeep Das
ABSTRACT By suitably weighing the cumulative odds ratios, a two-treatment response adaptive design for phase III clinical trials is proposed for ordinal categorical responses. Properties of the proposed design are investigated both theoretically and empirically. Applicability of the design is further verified using data from a real clinical trial with trauma patients, where the responses are observed on an ordinal categorical scale.
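As a rough illustration of the idea, the sketch below computes cumulative odds ratios from ordinal counts on two arms and maps a weighted mean of their logs to an allocation probability for the next patient. The weighting scheme, the logistic link, and the sign convention are hypothetical choices for illustration only, not the authors' actual design.

```python
import numpy as np

def cumulative_odds_ratios(counts_a, counts_b):
    """Cumulative odds ratios comparing two treatments on an ordinal scale.

    counts_a, counts_b: observed counts per ordered category (worst -> best).
    Returns one odds ratio per cut point j = 1, ..., K-1.
    """
    pa = np.asarray(counts_a, float) / np.sum(counts_a)
    pb = np.asarray(counts_b, float) / np.sum(counts_b)
    Fa, Fb = np.cumsum(pa)[:-1], np.cumsum(pb)[:-1]  # drop the last cut, where F = 1
    return (Fa / (1 - Fa)) / (Fb / (1 - Fb))

def allocation_probability(counts_a, counts_b, weights=None):
    """Probability of assigning the next patient to treatment A, obtained by
    passing a weighted mean of log cumulative odds ratios through a logistic
    link. The sign convention (lower cumulative odds under A favor A) is
    purely illustrative."""
    log_ors = np.log(cumulative_odds_ratios(counts_a, counts_b))
    w = np.ones_like(log_ors) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    return 1.0 / (1.0 + np.exp(np.sum(w * log_ors)))
```

With identical counts on both arms every cumulative odds ratio is 1, so the rule falls back to balanced (0.5/0.5) allocation, as any sensible response-adaptive rule should.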
Citations: 2
Regression Trees for Longitudinal Data with Baseline Covariates.
Q3 Medicine Pub Date: 2019-01-01 Epub Date: 2018-12-31 DOI: 10.1080/24709360.2018.1557797
Madan Gopal Kundu, Jaroslaw Harezlak

Longitudinal changes in a population of interest are often heterogeneous and may be influenced by a combination of baseline factors. In such cases, traditional linear mixed effects models (Laird and Ware, 1982), which assume a common parametric form for the mean structure, may not be applicable. We show that the regression tree methodology for longitudinal data can identify and characterize longitudinally homogeneous subgroups. Most of the currently available regression tree construction methods are either limited to a repeated measures scenario or conflate the heterogeneity among subgroups with random inter-subject variability. We propose a longitudinal classification and regression tree (LongCART) algorithm under the conditional inference framework (Hothorn, Hornik and Zeileis, 2006) that overcomes these limitations using a two-step approach. The LongCART algorithm first selects the partitioning variable via a parameter instability test and then finds the optimal split for the selected partitioning variable. Thus, at each node, the decision to split further is type-I error controlled, guarding against variable selection bias, over-fitting, and spurious splitting. We have obtained asymptotic results for the proposed instability test and examined its finite sample behavior through simulation studies. The comparative performance of the LongCART algorithm was evaluated empirically via simulation studies. Finally, we applied LongCART to study longitudinal changes in choline levels among HIV-positive patients.
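The two-step flavor of the algorithm — an instability screen with type-I error control, then splitting — can be caricatured as follows. The per-subject OLS slopes and the two-sample z-test are crude stand-ins for the paper's parameter instability test under the conditional inference framework; the Bonferroni adjustment is one simple way to control the splitting decision's type-I error across candidate variables.

```python
import numpy as np
from math import erfc, sqrt

def subject_slopes(times, values, ids):
    """OLS slope of value on time for each subject — a toy stand-in for the
    subject-level longitudinal profile."""
    slopes = {}
    for sid in np.unique(ids):
        m = ids == sid
        slopes[sid] = np.polyfit(times[m], values[m], 1)[0]
    return slopes

def instability_pvalue(slopes, groups):
    """Two-sample z-test on mean slopes across a binary baseline partition —
    a crude stand-in for LongCART's parameter instability test."""
    a = np.array([s for sid, s in slopes.items() if groups[sid] == 0])
    b = np.array([s for sid, s in slopes.items() if groups[sid] == 1])
    se = sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    z = (a.mean() - b.mean()) / se
    return erfc(abs(z) / sqrt(2))  # two-sided normal p-value

def should_split(pvalues, alpha=0.05):
    """Split only when the smallest Bonferroni-adjusted p-value across the
    candidate partitioning variables clears alpha."""
    return min(pvalues) * len(pvalues) < alpha
```

Only when `should_split` fires would the second step (an exhaustive search for the best cut point of the selected variable) run; that search is omitted here.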

Citations: 10
Essential concepts of causal inference: a remarkable history and an intriguing future
Q3 Medicine Pub Date: 2019-01-01 DOI: 10.1080/24709360.2019.1670513
D. Rubin
ABSTRACT Causal inference refers to the process of inferring what would happen in the future if we change what we are doing, or inferring what would have happened in the past, if we had done something different in the distant past. Humans adjust our behaviors by anticipating what will happen if we act in different ways, using past experiences to inform these choices. ‘Essential’ here means in the mathematical sense of excluding the unnecessary and including only the necessary, e.g. stating that the Pythagorean theorem works for an isosceles right triangle is bad mathematics because it includes the unnecessary adjective isosceles; of course this is not as bad as omitting the adjective ‘right.’ I find much of what is written about causal inference to be mathematically inapposite in one of these senses because the descriptions either include irrelevant clutter or omit conditions required for the correctness of the assertions. The history of formal causal inference is remarkable because its correct formulation is so recent, a twentieth century phenomenon, and its future is intriguing because it is currently undeveloped when applied to investigate interventions applied to conscious humans, and moreover will utilize tools impossible without modern computing.
Citations: 27
Variable selection and nonlinear effect discovery in partially linear mixture cure rate models
Q3 Medicine Pub Date: 2019-01-01 DOI: 10.1080/24709360.2019.1663665
A. Masud, Zhangsheng Yu, W. Tu
Survival data with long-term survivors are common in clinical investigations. Such data are often analyzed with mixture cure rate models. Existing model selection procedures do not readily discriminate nonlinear effects from linear ones. Here, we propose a procedure for accommodating nonlinear effects and for determining the cure rate model composition. The procedure is based on the Least Absolute Shrinkage and Selection Operator (LASSO). Specifically, by partitioning each variable into linear and nonlinear components, we use the LASSO to select among these components. Operationally, we model the nonlinear components by cubic B-splines. The procedure adds to existing variable selection methods the ability to discover hidden nonlinear effects in a cure rate model setting. To implement it, we obtain the maximum likelihood estimates using an Expectation-Maximization (EM) algorithm. We conduct an extensive simulation study to assess the operating characteristics of the selection procedure. We illustrate the use of the method by analyzing data from a real clinical study.
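A minimal sketch of the ingredients, assuming a plain coordinate-descent LASSO and a truncated power basis in place of the paper's cubic B-splines, and ignoring the cure rate likelihood and EM step entirely: each variable contributes a linear column plus spline columns, and shrinking a whole spline block to zero leaves a purely linear effect.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Coordinate-descent LASSO for (1/(2n))||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_ms = (X ** 2).sum(axis=0) / n          # mean square of each column
    r = y.astype(float).copy()                 # residual at b = 0
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]                # put coordinate j back into the residual
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ms[j]
            r -= X[:, j] * b[j]
    return b

def truncated_cubic_basis(x, knots):
    """Nonlinear part of a cubic spline via the truncated power basis —
    a simple stand-in for the paper's cubic B-splines."""
    return np.column_stack([np.clip(x - k, 0.0, None) ** 3 for k in knots])

def partitioned_design(x, knots):
    """One variable partitioned into a linear column plus nonlinear spline
    columns; the LASSO can then zero out either block."""
    return np.column_stack([x, truncated_cubic_basis(x, knots)])
```

On an orthogonal design the coordinate-descent solution reduces to the familiar soft-thresholding formula, which makes the solver easy to sanity-check.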
Citations: 3
Modeling exposures with a spike at zero: simulation study and practical application to survival data
Q3 Medicine Pub Date: 2019-01-01 DOI: 10.1080/24709360.2019.1580463
E. Lorenz, C. Jenkner, W. Sauerbrei, H. Becher
Risk and prognostic factors in epidemiological and clinical research are often semicontinuous, such that a proportion of individuals have exposure zero while the exposed follow a continuous distribution. We call this a spike at zero (SAZ). Typical examples are consumption of alcohol and tobacco, or hormone receptor levels. To additionally model non-linear functional relationships for SAZ variables, an extension of the fractional polynomial (FP) approach was proposed. To indicate whether or not a value is zero, a binary variable is added to the model. In a two-stage procedure, called FP-spike, it is assessed whether the binary variable and/or the continuous FP function for the positive part is required for a suitable fit. In this paper, we compared the performance of two approaches – standard FP and FP-spike – in the Cox model in a motivating example on breast cancer prognosis and a simulation study. The comparisons suggest generally using FP-spike rather than standard FP when the SAZ effect is considerably large, because the method performed better in real data applications and simulations in terms of deviance and functional form. Abbreviations: CI: confidence interval; FP: fractional polynomial; FP1: first degree fractional polynomial; FP2: second degree fractional polynomial; FSP: function selection procedure; HT: hormone therapy; OR: odds ratio; SAZ: spike at zero
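A small sketch of how the FP-spike covariates might be assembled, assuming the standard FP1 power set; the particular power and the convention of letting zeros contribute 0 to the continuous column are illustrative choices, not the paper's exact specification.

```python
import numpy as np

FP_POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)  # conventional FP1 power set

def fp1(x, p):
    """First-degree fractional polynomial transform; p == 0 denotes log(x)."""
    return np.log(x) if p == 0 else x ** p

def fp_spike_design(x, p):
    """FP-spike covariates for a semicontinuous exposure x >= 0: a binary
    zero indicator plus the FP transform of the positive part (zeros
    contribute 0 to the continuous column)."""
    x = np.asarray(x, float)
    z = (x == 0).astype(float)          # spike-at-zero indicator
    cont = np.zeros_like(x)
    pos = x > 0
    cont[pos] = fp1(x[pos], p)
    return np.column_stack([z, cont])
```

The two-stage FP-spike procedure would then test whether the indicator column, the continuous column, or both are needed in the Cox model.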
Citations: 6
Exact inference for the Youden index to discriminate individuals using two-parameter exponentially distributed pooled samples
Q3 Medicine Pub Date: 2019-01-01 DOI: 10.1080/24709360.2019.1587264
Sumith Gunasekera, Lakmali Weerasena, Aruna Saram, O. Ajumobi
It has become increasingly common in epidemiological studies to pool specimens across subjects as a useful cost-cutting technique for achieving accurate quantification of biomarkers and certain environmental chemicals. The data collected from these pooled samples can then be utilized to estimate the Youden index, which measures a biomarker's effectiveness and aids in the selection of an optimal threshold value, as a summary measure of the Receiver Operating Characteristic curve. The aim of this paper is to make use of a generalized approach to estimation and testing of the Youden index. This goal is accomplished by comparing classical and generalized procedures for the Youden index with the aid of pooled samples from shifted-exponentially distributed biomarkers for low-risk and high-risk patients. These are juxtaposed using confidence intervals, p-values, power of the test, size of the test, and coverage probability in a wide-ranging simulation study featuring various scenarios. In order to demonstrate the advantages of the proposed generalized procedures over their classical counterparts, an illustrative example is discussed using the Duchenne Muscular Dystrophy data available at http://biostat.mc.vanderbilt.edu/wiki/Main/DataSets or http://lib.stat.cmu.edu/datasets/.
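The empirical (nonparametric) Youden index that this summary measure generalizes can be sketched directly; the following is the textbook definition J = max over thresholds of (sensitivity + specificity - 1), not the paper's exact generalized inference procedure for pooled two-parameter exponential samples.

```python
import numpy as np

def youden_index(controls, cases):
    """Empirical Youden index J = max_c {sensitivity(c) + specificity(c) - 1}
    over candidate thresholds, assuming cases tend toward larger values."""
    controls = np.asarray(controls, float)
    cases = np.asarray(cases, float)
    best_j, best_c = -1.0, None
    for c in np.unique(np.concatenate([controls, cases])):
        sens = np.mean(cases > c)       # true positive rate at threshold c
        spec = np.mean(controls <= c)   # true negative rate at threshold c
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_c = j, c
    return best_j, best_c
```

J ranges from 0 (the biomarker is useless: the two distributions coincide) to 1 (perfect separation), and the maximizing threshold is the optimal cut point.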
Citations: 1
A frequentist mixture modeling of stop signal reaction times
Q3 Medicine Pub Date: 2019-01-01 DOI: 10.1080/24709360.2019.1660110
M. Soltanifar, A. Dupuis, R. Schachar, M. Escobar
The stop signal reaction time (SSRT), a measure of the latency of the stop signal process, was theoretically formulated using a horse race model of the go and stop signal processes by the American scientist Gordon Logan (1994). The SSRT assumes an equal impact of the preceding trial type (go/stop) on its measurement. In case this assumption is violated, we consider estimation of the SSRT based on earlier analyses of cluster-type go reaction times (GORT) and linear mixed model (LMM) results. Two clusters of trials were considered: trials preceded by a go trial and trials preceded by a stop trial. Given the disparities between cluster-type SSRTs, we need new indexes that incorporate the otherwise unused cluster-type information. We introduce the mixture SSRT and the weighted SSRT as two new, distinct indexes of SSRT that address the violated assumption. The mixture SSRT and the weighted SSRT are theoretically asymptotically equivalent under special conditions. An example with real stop signal task (SST) data is presented to show the equivalence of these two new SSRT indexes and their larger magnitude compared to Logan's single 1994 SSRT. Abbreviations: ADHD: attention deficit hyperactivity disorder; ExG: Ex-Gaussian distribution; GORT: reaction time in a go trial; GORTA: reaction time in a type A go trial; GORTB: reaction time in a type B go trial; LMM: linear mixed model; SWAN: strengths and weaknesses of ADHD symptoms and normal behavior rating scale; SSD: stop signal delay; SR: signal-respond; SRRT: reaction time in a failed stop trial; SSRT: stop signal reaction time in a stop trial; SST: stop signal task.
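A sketch of the quantities involved, assuming the standard integration method of the horse-race model for a cluster-wise SSRT and a simple proportion-weighted pooling of the two clusters; the paper's actual mixture and weighted indexes are defined more carefully than this illustration.

```python
import numpy as np

def ssrt_integration(go_rt, ssd, p_respond):
    """SSRT via the integration method of the horse-race model: the
    p_respond quantile of the go RT distribution minus the mean stop
    signal delay (SSD)."""
    return np.quantile(np.asarray(go_rt, float), p_respond) - np.mean(ssd)

def pooled_ssrt(cluster_ssrts, cluster_weights):
    """Proportion-weighted combination of cluster-wise SSRT estimates
    (trials preceded by a go trial vs. by a stop trial) — a sketch of how
    a weighted index could pool the two clusters."""
    w = np.asarray(cluster_weights, float)
    w = w / w.sum()
    return float(np.sum(w * np.asarray(cluster_ssrts, float)))
```

In this framing, one would compute `ssrt_integration` separately within each trial cluster and then pool with the observed cluster proportions as weights.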
Citations: 5
Effect of modeling a multilevel structure on the Indian population to identify the factors influencing HIV infection
Q3 Medicine Pub Date: 2019-01-01 DOI: 10.1080/24709360.2019.1671096
Nidhiya Menon, Binukumar Bhaskarapillai, A. Richardson
ABSTRACT Many studies have addressed the factors associated with HIV in the Indian population. Some of these studies have used sampling weights in estimating the risk of factors associated with HIV, but few have adjusted for the multilevel structure of the survey data. The National Family Health Survey 3 collected data across India between 2005 and 2006. Data on 38,715 females and 66,212 males with complete information were analyzed. To account for correlations within clusters, a three-level model was employed. Bivariate and multivariable mixed-effects logistic regression analyses were performed to identify factors associated with HIV. Intracluster correlation coefficients were used to assess the relatedness of observations within clusters. When the multilevel model was used for analysis, no knowledge of contraceptive methods, age at first marriage, wealth index, and noncoverage of PSUs by Anganwadis were significant risk factors for HIV. This study has identified the risk profile for HIV infection using an appropriate modeling strategy and has highlighted the consequences of ignoring the structure of the data. It offers a methodological guide towards an applied approach to the identification of future risk and the need to customize interventions to address HIV infection in the Indian population.
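On the latent-response scale of a multilevel logistic model, intraclass correlations follow from the estimated variance components, with the level-1 residual variance fixed at pi^2/3 (the variance of the standard logistic distribution). The helper below is a minimal sketch for the three-level random-intercept case; the variance-component inputs are assumed to come from a fitted model.

```python
from math import pi

def latent_scale_icc(var_level3, var_level2):
    """Intraclass correlations for a three-level random-intercept logistic
    model on the latent-response scale, where the level-1 residual variance
    is fixed at pi^2 / 3."""
    resid = pi ** 2 / 3.0
    total = var_level3 + var_level2 + resid
    icc3 = var_level3 / total                   # same level-3 unit, different level-2 units
    icc2 = (var_level3 + var_level2) / total    # same level-2 unit
    return icc3, icc2
```

An ICC near zero at both levels would suggest little clustering, i.e. that ignoring the multilevel structure costs less; large ICCs are exactly the situation where the single-level analysis misleads.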
Citations: 0
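For a random-intercept logistic model like the three-level HIV analysis above, the intracluster correlation coefficient is commonly computed on the latent scale, with the level-1 residual variance fixed at π²/3 (the variance of the standard logistic distribution). A minimal sketch follows; the variance values are illustrative, not estimates from the NFHS-3 analysis:

```python
import math

def latent_scale_icc(*level_variances):
    """Latent-scale variance shares for a random-intercept logistic model.

    The level-1 (individual) residual variance is fixed at pi^2 / 3,
    the variance of the standard logistic distribution. Returns the
    proportion of total latent variance attributable to each higher level.
    """
    residual = math.pi ** 2 / 3
    total = sum(level_variances) + residual
    return [v / total for v in level_variances]

# Illustrative three-level structure (individuals within PSUs within states):
# hypothetical random-intercept variances at the state and PSU levels.
state_var, psu_var = 0.40, 0.85
state_share, psu_share = latent_scale_icc(state_var, psu_var)
print(f"state-level variance share: {state_share:.3f}")
print(f"PSU-level variance share:   {psu_share:.3f}")
```

The correlation between two individuals in the same PSU (and hence the same state) would be the sum of both shares; the state-level share alone applies to individuals who share only a state.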
The 9-criteria evaluation framework for perceptions survey: the case of VA’s Learners’ Perceptions Survey
Q3 Medicine Pub Date : 2018-12-16 DOI: 10.1080/24709360.2018.1553362
T. Kashner, Christopher Clarke, D. Aron, John M. Byrne, G. Cannon, D. Deemer, S. Gilman, C. Kaminetzky, L. Loo, Sophia Li, Annie B. Wicker, S. Keitz
ABSTRACT For its clinical, epidemiologic, educational, and health services research, evaluation, administrative, regulatory, and accreditation purposes, the perceptions survey is a data collection tool that asks observers to describe perceptions of their experiences with a defined phenomenon of interest. In practice, these surveys are often subject to criticism for not having been thoroughly evaluated before their first application using a consistent and comprehensive set of criteria for validity and reliability. This paper introduces a 9-criteria framework to assess perceptions surveys that integrates criteria from multiple evaluation sources. The 9-criteria framework was applied to data from the Department of Veterans Affairs’ Learners’ Perceptions Survey (LPS) that had been administered to national and local samples, and to findings obtained through a literature review involving LPS survey data. We show that the LPS is a robust tool that may serve as a model for design and validation of other perceptions surveys. Findings underscore the importance of using all nine criteria to validate perceptions survey data.
{"title":"The 9-criteria evaluation framework for perceptions survey: the case of VA’s Learners’ Perceptions Survey","authors":"T. Kashner, Christopher Clarke, D. Aron, John M. Byrne, G. Cannon, D. Deemer, S. Gilman, C. Kaminetzky, L. Loo, Sophia Li, Annie B. Wicker, S. Keitz","doi":"10.1080/24709360.2018.1553362","DOIUrl":"https://doi.org/10.1080/24709360.2018.1553362","url":null,"abstract":"ABSTRACT For its clinical, epidemiologic, educational, and health services research, evaluation, administrative, regulatory, and accreditation purposes, the perceptions survey is a data collection tool that asks observers to describe perceptions of their experiences with a defined phenomenon of interest. In practice, these surveys are often subject to criticism for not having been thoroughly evaluated before its first application using a consistent and comprehensive set of criteria for validity and reliability. This paper introduces a 9-criteria framework to assess perceptions surveys that integrates criteria from multiple evaluation sources. The 9-criteria framework was applied to data from the Department of Veterans Affairs’ Learners’ Perceptions Survey (LPS) that had been administered to national and local samples, and from findings obtained through a literature review involving LPS survey data. We show that the LPS is a robust tool that may serve as a model for design and validation of other perceptions surveys. 
Findings underscore the importance of using all nine criteria to validate perceptions survey data.","PeriodicalId":37240,"journal":{"name":"Biostatistics and Epidemiology","volume":"4 1","pages":"140 - 171"},"PeriodicalIF":0.0,"publicationDate":"2018-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24709360.2018.1553362","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48159892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
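Reliability criteria of the kind the 9-criteria framework covers are often summarized with an internal-consistency statistic such as Cronbach's alpha, α = k/(k−1) · (1 − Σσᵢ² / σ_total²). A minimal sketch with made-up item responses (not LPS data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns of equal length.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score).
    Population (1/n) variances are used throughout; either variance
    convention works as long as it is applied consistently.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Three hypothetical 5-point items answered by five respondents.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(f"alpha = {cronbach_alpha(items):.3f}")  # → alpha = 0.886
```

Alpha is only one of the reliability checks a full evaluation would apply; test–retest and inter-rater statistics address criteria alpha cannot.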
An introduction to the why and how of risk adjustment
Q3 Medicine Pub Date : 2018-09-17 DOI: 10.1080/24709360.2018.1519990
W. B. Vogel, Guoqing Chen
Department of Veterans Affairs (VA) health services researchers often adjust for the differing risk profiles of selected patient populations for a variety of purposes. This paper explains the major reasons to conduct risk adjustment and provides a high level overview of what risk adjustment actually does and how the results of risk adjustment can be used in different ways for different purposes. The paper also discusses choosing a diagnostic classification system and describes some of the systems commonly used in risk adjustment along with comorbidity/severity indices and individual disease taxonomies. The factors influencing the choice of diagnostic classification systems and other commonly used risk adjustors are also presented along with a discussion of data requirements. Statistical approaches to risk adjustment are also briefly discussed. The paper concludes with some recommendations concerning risk adjustment that should be considered when developing research proposals.
{"title":"An introduction to the why and how of risk adjustment","authors":"W. B. Vogel, Guoqing Chen","doi":"10.1080/24709360.2018.1519990","DOIUrl":"https://doi.org/10.1080/24709360.2018.1519990","url":null,"abstract":"Department of Veterans Affairs (VA) health services researchers often adjust for the differing risk profiles of selected patient populations for a variety of purposes. This paper explains the major reasons to conduct risk adjustment and provides a high level overview of what risk adjustment actually does and how the results of risk adjustment can be used in different ways for different purposes. The paper also discusses choosing a diagnostic classification system and describes some of the systems commonly used in risk adjustment along with comorbidity/severity indices and individual disease taxonomies. The factors influencing the choice of diagnostic classification systems and other commonly used risk adjustors are also presented along with a discussion of data requirements. Statistical approaches to risk adjustment are also briefly discussed. The paper concludes with some recommendations concerning risk adjustment that should be considering when developing research proposals.","PeriodicalId":37240,"journal":{"name":"Biostatistics and Epidemiology","volume":"4 1","pages":"84 - 97"},"PeriodicalIF":0.0,"publicationDate":"2018-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24709360.2018.1519990","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48627650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
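Comorbidity indices of the kind discussed above typically assign integer weights to diagnosis groups and sum them per patient, Charlson-style. A minimal sketch; the weights and condition names here are illustrative, and a real risk adjuster would first map ICD codes to conditions and use validated weights:

```python
# Illustrative weights for a Charlson-style comorbidity score. These are
# not the published Charlson weights or any VA risk-adjustment system.
ILLUSTRATIVE_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "diabetes": 1,
    "renal_disease": 2,
    "metastatic_cancer": 6,
}

def comorbidity_score(conditions, weights=ILLUSTRATIVE_WEIGHTS):
    """Sum the weights of a patient's recorded conditions.

    Duplicates count once (hence the set), and unlisted conditions
    contribute 0, mirroring the usual convention that diagnoses outside
    the index carry no weight.
    """
    return sum(weights.get(c, 0) for c in set(conditions))

patient = ["diabetes", "renal_disease", "diabetes"]  # duplicate counts once
print(comorbidity_score(patient))  # → 3
```

The resulting score would then enter a regression model as one covariate among the demographic and severity adjusters the paper surveys.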