The problem of testing Hardy-Weinberg equilibrium (HWE) when the data are stratified into several strata is considered. In previous methods, the null hypothesis is that HWE holds and the alternative hypothesis is that it does not; such methods cannot establish HWE positively. We therefore formulate the assessment of HWE as a problem of testing equivalence. Taking an odds ratio as the measure of disequilibrium, we assume that the ratio is common across strata. We propose two tests, one based on the trinomial distribution and one on the quadrinomial distribution, and show that they are asymptotically equivalent. The methods are illustrated with real data.
{"title":"共通オッズ比を用いる Hardy-Weinberg平衡の検証","authors":"哲司 大山, 公雄 吉村, 堯 柳川","doi":"10.5691/JJB.27.97","DOIUrl":"https://doi.org/10.5691/JJB.27.97","url":null,"abstract":"The problem of testing the Hardy-Weinberg equilibrium (HWE) when the data are stratified in several strata is considered. In previous methods, null hypothesis is that HWE holds and alternative hypothesis is that HWE does not hold. But these methods cannot test the HWE positively. Therefore, we formulate the test of the HWE as the problem of testing equivalence. Considering an odds ratio as the measure of disequilibrium, it is assumed that the ratio is common across strata. We propose two tests based on the trinomial distribution and quadrinomial distribution. It is shown that those tests are asymptotically equivalent. Those methods are applied to practical data for illustration.","PeriodicalId":365545,"journal":{"name":"Japanese journal of biometrics","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115305599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Because of the selection process in academic publishing, any meta-analysis of the published literature is affected to some degree by so-called publication bias and tends to overestimate the effect of interest. Statistically, publication bias in meta-analysis is a selection bias resulting from non-random sampling from the population of all studies, published and unpublished. Several authors have proposed modelling publication bias with a selection-model approach, which jointly models a weight function representing the publication probability of each study and a regression model for the outcome of interest. Copas (1999) showed that in this approach some of the model parameters are not estimable and that a sensitivity analysis should be conducted. In implementing Copas's sensitivity analysis of publication bias, a practical difficulty arises in choosing an appropriate range for the sensitivity parameters. In this article we propose a Bayesian hierarchical model that extends Copas's selection model and incorporates expert opinion as a prior distribution on the sensitivity parameters. We illustrate the approach with a meta-analysis of passive smoking and lung cancer.
{"title":"Sensitivity Analysis of Publication Bias in Meta-analysis : A Bayesian Approach","authors":"Kimihiko Sakamoto, Y. Matsuyama, Y. Ohashi","doi":"10.5691/JJB.27.109","DOIUrl":"https://doi.org/10.5691/JJB.27.109","url":null,"abstract":"Due to the selection process in academic publication, all meta-analysis of published literature is more or less affected by the so-called publication bias and tends to overestimate the effect of interest. Statistically, publication bias in meta-analysis is a selection bias which results from a non-random sampling from the population of unpublished studies. Several authors proposed methods of modelling publication bias using a selection model approach, which considers a joint modelling of the weight function representing the publication probability of each study and a regression of the outcome of interest. Copas (1999) showed that in this approach some of the model parameters are not estimable and a sensitivity analysis should be conducted. In implementing the Copas’s sensitivity analysis of publication bias, a practical difficulty arises in determining the range of sensitivity parameters appropriately. We propose in this article a Bayesian hierarchical model which extends Copas’s selectivity model and incorporates the experts’ opinions as a prior distribution of sensitivity parameters. We illustrate this approach with an example of the passive smoking and lung cancer meta-analysis.","PeriodicalId":365545,"journal":{"name":"Japanese journal of biometrics","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133990653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A generalized hazards model that incorporates a cubic B-spline function into the baseline hazard function (GHMBS) was proposed for estimating covariate effects in survival data analysis. The GHMBS integrates three types of hazard models, the proportional hazards model (PHM), the accelerated failure time model (AFTM), and the accelerated hazards model (AHM), which allows the likelihood principle to be applied to estimation and hypothesis testing irrespective of the submodel (PHM, AFTM, or AHM). A procedure for adaptively choosing suitable knots from a set of candidate knots was proposed in order to realize an appropriate baseline hazard function in the GHMBS. The performance of the proposal was evaluated in terms of the bias and mean squared error of the estimated covariate effects in a Monte Carlo simulation experiment. A method for identifying the submodel appropriate for the data at hand, based on the GHMBS, was also proposed. The performance of this model selection method was evaluated in terms of the probability of selecting the true model in a Monte Carlo simulation experiment based on the PHM and the AFTM; the proposed method achieved fairly high probabilities of identifying the true model. An application of the proposed method to actual data from a clinical trial yielded a reasonable conclusion.
{"title":"Utility of Generalized Hazards Model Incorporating Cubic B-spline Function into the Baseline Hazard Function","authors":"Hisao Takeuchi, I. Yoshimura, C. Hamada","doi":"10.5691/JJB.27.121","DOIUrl":"https://doi.org/10.5691/JJB.27.121","url":null,"abstract":"A generalized hazards model incorporating a cubic B-spline function into the baseline hazard function (GHMBS) was proposed as a model for estimating covariate effects in survival data analysis. The GHMBS integrated the three types of hazard models: the proportional hazards model (PHM), accelerated failure time model (AFTM), and accelerated hazards model (AHM), which enabled the likelihood principle for estimation and hypothesis testing to be applied irrespective of submodels (i.e., PHM, AFTM, and AHM). A procedure for adaptively choosing suitable knots from a set of candidate knots was proposed in order to actualize an appropriate baseline hazard function in GHMBS. The characteristic of the proposal was evaluated with bias and mean squared error of the estimation of covariate effects through a Monte-Carlo simulation experiment. A method for identifying a submodel appropriate for the data to be analyzed was also proposed based on GHMBS. The performance of the proposed model selection method was evaluated with the probability of selecting the true model through a Monte-Carlo simulation experiment based on PHM and AFTM. As a result, the proposed method achieved fairly high probabilities of identifying the true model. An application of the proposed method to actual data in a clinical trial provided a reasonable conclusion.","PeriodicalId":365545,"journal":{"name":"Japanese journal of biometrics","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127782643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article proposes a method of power and sample size calculation for confirmatory clinical trials whose objective is to show superiority on all of multiple primary variables, assuming that the variables are normally distributed. Since one-sided t-statistics are used to evaluate statistical significance, the power is calculated based on a Wishart distribution. Monte Carlo integration is used to compute the expectation of the conditional power given the Wishart variables, with random numbers generated using the Bartlett decomposition. Numerical examples show that the required sample size decreases as the correlation coefficient increases, although the dependence is weak when the correlation coefficient is negative or when the effect sizes on which the power is calculated differ greatly between variables. A SAS program (version 9.1) implementing the proposed method is provided in the Appendix.
{"title":"Power and Sample Size Calculations in Clinical Trials with Multiple Primary Variables","authors":"T. Sozu, Takeshi Kanou, C. Hamada, I. Yoshimura","doi":"10.5691/JJB.27.83","DOIUrl":"https://doi.org/10.5691/JJB.27.83","url":null,"abstract":"This article proposes a method of power and sample size calculation for confirmatory clinical trials, with the objective of showing superiority for all multiple primary variables, assuming normality of the variables. Since one sided t-statistics are used to evaluate statistical significance, the power is calculated based on a Wishart distribution. A Monte Carlo integration is used to calculate the expectation of conditional power, conditioned on Wishart variables, where random numbers are generated using the Bartlett's decomposition. Numerical examples revealed that the required sample size decreases with increases in the correlation coefficient, although the dependency is not large when the correlation coefficient is negative or when the effect sizes, on which power is calculated, are far different between variables. A SAS program (version 9.1) for the proposed method is provided in the Appendix.","PeriodicalId":365545,"journal":{"name":"Japanese journal of biometrics","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114305179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
1. Introduction
In May 2003, the Committee for Proprietary Medicinal Products (CPMP) of the European Medicines Agency (EMEA) published "POINTS TO CONSIDER ON ADJUSTMENT FOR BASELINE COVARIATES" (hereafter, "this guidance"). Several Points to Consider documents have already been published; each takes up, discusses, and provides guidance on points that deserve attention regarding matters which are introduced in the ICH-E9 guideline but not discussed there in sufficient depth. This guidance is one of those Points to Consider documents and focuses on adjustment for baseline covariates.
Baseline covariates come in many forms: demographic variables such as age and body weight, disease characteristics such as duration of illness and severity, prognostic factors widely accepted in each disease area, factors such as center and physician, and the baseline value of the primary variable. In this article the single term "covariate" is used for all of these.
In what follows, Section 2 introduces the content of this guidance, Section 3 presents points of discussion and opinions concerning its contents, and Section 4 concludes the article.
{"title":"Comments on “Points to Consider on Adjustment for Baseline Covariates”","authors":"Atsushi Hagino","doi":"10.5691/JJB.27.S8","DOIUrl":"https://doi.org/10.5691/JJB.27.S8","url":null,"abstract":"1. は じ め に 2003年 5月に,欧州医薬品庁 EMEAの医薬品委員会 CPMPから “POINTS TO CONSIDER ON ADJUSTMENT FOR BASELINE COVARIATES”(以下,本ガイダンスという)が公表さ れた.これまで既に数種類の Points to Considerが公表されているが,それらは ICH-E9ガイド ラインの中で紹介はされているものの十分に議論されていない事項について,留意すべき点を取 り上げ議論し,手引きとしたものである.本ガイダンスは,その Points to Considerの 1つであ り,ベースライン共変量の調整に焦点をあてたものである. ベースライン共変量には様々な種類がある.年齢や体重のような人口統計学的変数,罹病期間 や重症度のような疾患の状況,各疾患領域で広く受け入れられる予後因子,施設や医師といった 因子,そして主要変数のベースライン値などがこれにあたり,本稿では統一的に共変量という用 語を使用する. 以下,第 2節では本ガイダンスで述べられている内容を紹介し,第 3節では本ガイダンスの記 載内容に関する論点の提示並びに意見を示す.最後の第 4節で本稿のまとめを行う.","PeriodicalId":365545,"journal":{"name":"Japanese journal of biometrics","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131807144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Points to Consider on Multiplicity Issues in Clinical Trials", published by the CPMP of the EMEA in September 2003, is reviewed and issues to be discussed are identified. First, the examples identified in the PtC as cases in which "adjusting the type I error level" is unnecessary are grouped into three patterns, and the situation and characteristics of each pattern are explained. After reviewing each pattern, important points for conducting confirmatory trials in accordance with the PtC are summarized. Issues to be discussed concerning the content and concrete measures are also addressed.
{"title":"Review of \"Points to Consider on Multiplicity Issues in Clinical Trials\" and Issues to be Discussed","authors":"S. Fujikoshi","doi":"10.5691/JJB.27.S64","DOIUrl":"https://doi.org/10.5691/JJB.27.S64","url":null,"abstract":"“Points to Consider on Multiplicity Issues in Clinical Trials” published by CPMP of EMEA in September 2003 is reviewed and issues to be discussed are identified. First, the examples identified in the PtC as unnecessary cases of “adjusting the type I error level” are grouped into 3 patterns, and the situation and the characteristics of each pattern are explained. After reviewing each pattern, important points for conducting the confirmatory trial according to PtC are summarized. Issues to be discussed about the content and the concrete measures are also addressed.","PeriodicalId":365545,"journal":{"name":"Japanese journal of biometrics","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125897713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose an allocation method for balancing prognostic variables among treatment groups in clinical trials when some prognostic variables are continuous and others are categorical. In principle, the proposed method uses as the criterion for overall balance the sum over groups, Sr, of the Kullback-Leibler information (KLI) from the group-pooled distribution of the prognostic variables to the group-specific distribution, assuming normal and multinomial distributions for the continuous and categorical variables, respectively. In the implemented procedure, the method allocates each sequentially enrolled subject, with probability Pa, to the group that minimizes Sr, under the constraint that the maximum difference in the number of subjects among groups stays within a prespecified allowable range DN. Monte Carlo simulation studies were conducted to compare the performance of the proposed method with that of the Pocock-Simon method, the most widely used method. Homogeneity tests of means and variances among groups, used to evaluate the achieved balance, showed larger P values for the proposed method than for the Pocock-Simon method. The estimates of the treatment effect adjusted for the prognostic variables also tended to be more stable with the proposed method than with the Pocock-Simon method.
{"title":"An Allocation Method for Balancing Prognostic Variables Including Continuous Ones among Treatment Groups Using the Kullback-Leibler Information","authors":"Akira Endo, C. Hamada, I. Yoshimura","doi":"10.5691/JJB.27.1","DOIUrl":"https://doi.org/10.5691/JJB.27.1","url":null,"abstract":"We propose an allocation method for balancing prognostic variables among treatment groups in clinical trials under the condition that some prognostic variables are continuous and others are categorical. In principle, the proposed method utilizes the sum Sr, with respect to groups, of the Kullback-Leibler information (KLI) from the group-pooled distribution of prognostic variables to the group-specific distribution as the criterion for overall balancing, assuming normal and multinomial distributions, respectively. In the realized procedure, the proposed method allocates sequentially enrolled new subjects to a group with probability Pa so as to achieve the minimum of Sr under the condition that the maximum difference of the number of subjects among groups is in the prespecified allowable range DN . Monte-Carlo simulation studies were conducted in order to compare the performance of the proposed method with the Pocock-Simon method which was the most popular method. The homogeneity test of mean and variance among groups for evaluating the achieved balance showed greater P values in the proposed method than those in the Pocock-Simon method. The parameter estimates of treatment effect adjusted for prognostic variables were also likely to be more stable in the proposed method than in the Pocock-Simon method.","PeriodicalId":365545,"journal":{"name":"Japanese journal of biometrics","volume":"164 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116057799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In measuring quality of life (QOL), outcome-dependent missing values are inevitable because of the longitudinal nature of such studies. In clinical trials in advanced-stage disease, in particular, it is desirable to distinguish between the reasons for missingness, death and drop-out, because QOL scores of patients who have died are not really missing data: they are nonexistent and simply undefined. We focus on estimating the local average treatment effect among survivors. Standard randomized treatment comparisons cannot be performed because the QOL scores are defined only in the non-randomly selected subgroup of survivors. We propose a new method for estimating the survivor average causal effect (SACE) in the presence of both death and drop-out. The proposed estimator is a weighted average of the standard estimators for survivors, where the weight is the probability that the patient would have survived had he or she received the other treatment. For drop-out cases, multiple imputation is applied. The two analysis methods (the proposed method and an analysis based only on observed survivors) were compared in simulation studies: the proposed estimator had smaller bias and smaller MSE than the standard estimator. The proposed method is applied to data from a randomized phase III clinical trial in patients with advanced non-small-cell lung cancer.
{"title":"Analysis of Quality of Life Data with Death and Drop-out in Advanced Non-Small-Cell Lung Cancer Patients","authors":"Kazutaka Doi, Y. Matsuyama, Y. Ohashi","doi":"10.5691/JJB.27.17","DOIUrl":"https://doi.org/10.5691/JJB.27.17","url":null,"abstract":"In measuring quality of life (QOL), outcome-dependent missing values are inevitable because of longitudinal nature of the study. In particular, in clinical trials of advanced-stage disease, it is desirable to distinguish differences between reasons for missing, death and drop-out, because QOL scores for death cases are not really missing data, but are nonexistent and are simply undefined. We focus on estimating the local average treatment effect among survivors. Standard randomized treatment comparisons cannot be performed because the QOL scores are only defined in the non-randomly selected subgroup of survivors. We propose a new estimation method of the survivor average causal effect (SACE) in the presence of both death and dropout. The proposed estimator is a weighted average of the standard estimators for survivors where the weight is the probability that the patient would have survived had he/she received the other treatment. For drop-out cases, the multiple imputation method is applied. Two analysis methods (proposed method and analysis based on only observed survivors) were compared by simulation studies. The proposed estimator had smaller biases with smaller MSEs compared with those of the standard estimator. The proposed method was applied to data from a randomized phase III clinical trial for advanced non-small-cell lung cancer patients.","PeriodicalId":365545,"journal":{"name":"Japanese journal of biometrics","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131409400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For the occurrence of a rare event A, such as a severe adverse drug reaction, the "Rule of Three" exists to remind practitioners that "absence of evidence is not evidence of absence." The Rule of Three says that even if event A was not observed among n patients, it would be quite possible to observe three events among another n patients. The present paper examines this useful rule in detail and extends it to a testing problem for the occurrence probability of A. First, the Rule of Three is extended to the case where the number of events observed among the first n patients is greater than zero: we give rules stating that, when k (> 0) events were observed among n patients, n_k events might be observed among another n patients. Next, a testing procedure is introduced to examine whether the occurrence probabilities of A in two populations are the same, given that k events were observed among n patients in one population. It is shown that the relevant probability distribution is a negative binomial, and critical regions for small k are given. As a possible application of the procedure, we mention signal detection for spontaneous reporting systems of adverse drug reactions.
{"title":"稀な事象の生起確率に関する統計的推測 —Rule of Threeとその周辺—","authors":"学 岩崎, 清隆 吉田","doi":"10.5691/JJB.26.53","DOIUrl":"https://doi.org/10.5691/JJB.26.53","url":null,"abstract":"For the occurrence of a rare event A such as a severe adverse drug reaction, there exists the “Rule of Three” to remind practitioners that “absence of evidence is not evidence of absence.” The Rule of Three actually says that even if the event A was not observed among n patients it would be quite possible to observe three events among other n patients. The present paper examines this useful rule in detail and also extends it to a testing problem for occurrence probability of A.First, the Rule of Three is extended to the case that the number of the event observed among the first n patients is more than zero. We give rules that when k (> 0) events were observed among n patients, nk events would be possibly observed among other n patients. Next, a testing procedure is introduced to examine whether the occurrence probabilities of A for two populations are the same under the condition that k events were observed among n patients for one population. It will be shown that the relevant probability distribution is a negative binomial, and then critical regions for small k's are given. For a possible application of the procedure, we mention the signal detection for spontaneous reporting system of adverse drug reaction.","PeriodicalId":365545,"journal":{"name":"Japanese journal of biometrics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132495543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}