Discussion on "On studying extreme values and systematic risks with nonlinear time series models and tail dependence measures"
Pub Date: 2021-01-02 | DOI: 10.1080/24754269.2021.1895528
Wen Xu, Huixia Judy Wang
Extreme value theory provides essential mathematical foundations for modelling tail risks and has wide applications. The emergence of big and heterogeneous data calls for the development of new extreme value theory and methods. For studying high-dimensional extremes and extreme clusters in time series, an important problem is how to measure and test for tail dependence between random variables. Section 3.1 of Dr. Zhang's paper discusses some newly proposed tail dependence measures. In the era of big data, a timely and challenging question is how to study data from heterogeneous populations, e.g., from different sources. Section 3.2 reviews some new developments of extreme value theory for maxima of maxima. The theory and methods in Sections 3.1 and 3.2 set the foundations for modelling extremes of multivariate and heterogeneous data, and we believe they have wide applicability. We discuss two possible directions: (1) measuring and testing partial tail dependence; (2) applying the extreme value theory for maxima of maxima to high-dimensional inference.
{"title":"Discussion on “on studying extreme values and systematic risks with nonlinear time series models and tail dependence measures”","authors":"Wen Xu, Huixia Judy Wang","doi":"10.1080/24754269.2021.1895528","DOIUrl":"https://doi.org/10.1080/24754269.2021.1895528","url":null,"abstract":"Extreme value theory provides essential mathematical foundations for modelling tail risks and has wide applications. The emerging of big and heterogeneous data calls for the development of new extreme value theory and methods. For studying high-dimensional extremes and extreme clusters in time series, an important problem is how to measure and test for tail dependence between random variables. Section 3.1 of Dr. Zhang’s paper discusses some newly proposed tail dependence measures. In the era of big data, a timely and challenging question is how to study data from heterogeneous populations, e.g. from different sources. Section 3.2 reviews some new developments of extreme value theory for maxima of maxima. The theory and methods in Sections 3.1 and 2.3 set the foundations for modelling extremes of multivariate and heterogeneous data, and we believe they have wide applicability. We will discuss two possible directions: (1) measuring and testing of partial tail dependence; (2) application of the extreme value theory for maxima of maxima in highdimensional inference.","PeriodicalId":22070,"journal":{"name":"Statistical Theory and Related Fields","volume":"5 1","pages":"26 - 30"},"PeriodicalIF":0.5,"publicationDate":"2021-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24754269.2021.1895528","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43522771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Covariance estimation via fiducial inference
Pub Date: 2021-01-01 | Epub Date: 2021-02-15 | DOI: 10.1080/24754269.2021.1877950
W Jenny Shi, Jan Hannig, Randy C S Lai, Thomas C M Lee
As a classical problem, covariance estimation has drawn much attention from the statistical community for decades. Much work has been done under the frequentist and the Bayesian frameworks. Aiming to quantify the uncertainty of the estimators without having to choose a prior, we have developed a fiducial approach to the estimation of the covariance matrix. Built upon the Fiducial Bernstein–von Mises Theorem (Sonderegger and Hannig 2014), we show that the fiducial distribution of the covariance matrix is consistent under our framework. Consequently, the samples generated from this fiducial distribution are good estimators of the true covariance matrix, which enables us to define a meaningful confidence region for the covariance matrix. Lastly, we also show that the fiducial approach can be a powerful tool for identifying clique structures in covariance matrices.
{"title":"Covariance estimation via fiducial inference.","authors":"W Jenny Shi, Jan Hannig, Randy C S Lai, Thomas C M Lee","doi":"10.1080/24754269.2021.1877950","DOIUrl":"https://doi.org/10.1080/24754269.2021.1877950","url":null,"abstract":"<p><p>As a classical problem, covariance estimation has drawn much attention from the statistical community for decades. Much work has been done under the frequentist and the Bayesian frameworks. Aiming to quantify the uncertainty of the estimators without having to choose a prior, we have developed a fiducial approach to the estimation of covariance matrix. Built upon the Fiducial Berstein-von Mises Theorem (Sonderegger and Hannig 2014), we show that the fiducial distribution of the covariate matrix is consistent under our framework. Consequently, the samples generated from this fiducial distribution are good estimators to the true covariance matrix, which enable us to define a meaningful confidence region for the covariance matrix. Lastly, we also show that the fiducial approach can be a powerful tool for identifying clique structures in covariance matrices.</p>","PeriodicalId":22070,"journal":{"name":"Statistical Theory and Related Fields","volume":"5 4","pages":"316-331"},"PeriodicalIF":0.5,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24754269.2021.1877950","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33442561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On studying extreme values and systematic risks with nonlinear time series models and tail dependence measures
Pub Date: 2020-12-23 | DOI: 10.1080/24754269.2020.1856590
Zhengjun Zhang
ABSTRACT This review paper discusses advances in statistical inference for modeling extreme observations from multiple sources and heterogeneous populations. The paper starts by briefly reviewing classical univariate/multivariate extreme value theory, tail equivalence, and tail (in)dependence. New extreme value theory for heterogeneous populations is then introduced. Time series models for maxima and extreme observations are the focus of the review. These models naturally form a new system with similar structures, and they can be used as alternatives to the widely used ARMA and GARCH models. These time series models have applications in many fields. The paper discusses two important applications: systematic risks and extreme co-movements/large-scale contagions.
{"title":"On studying extreme values and systematic risks with nonlinear time series models and tail dependence measures","authors":"Zhengjun Zhang","doi":"10.1080/24754269.2020.1856590","DOIUrl":"https://doi.org/10.1080/24754269.2020.1856590","url":null,"abstract":"ABSTRACT This review paper discusses advances of statistical inference in modeling extreme observations from multiple sources and heterogeneous populations. The paper starts briefly reviewing classical univariate/multivariate extreme value theory, tail equivalence, and tail (in)dependence. New extreme value theory for heterogeneous populations is then introduced. Time series models for maxima and extreme observations are the focus of the review. These models naturally form a new system with similar structures. They can be used as alternatives to the widely used ARMA models and GARCH models. Applications of these time series models can be in many fields. The paper discusses two important applications: systematic risks and extreme co-movements/large scale contagions.","PeriodicalId":22070,"journal":{"name":"Statistical Theory and Related Fields","volume":"5 1","pages":"1 - 25"},"PeriodicalIF":0.5,"publicationDate":"2020-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24754269.2020.1856590","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47147542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
New extreme value theory for maxima of maxima
Pub Date: 2020-12-20 | DOI: 10.1080/24754269.2020.1846115
Wenzhi Cao, Zhengjun Zhang
Although advanced statistical models have been proposed to fit complex data better, the advances of science and technology have generated more complex data, e.g., Big Data, in which existing probability theory and statistical models find their limitations. This work establishes probability foundations for studying extreme values of data generated from a mixture process, with the mixture pattern depending on the sample length and the data-generating sources. In particular, we show that the limit distribution, termed the accelerated max-stable distribution, of the maxima of maxima of sequences of random variables with the above mixture pattern is a product of three types of extreme value distributions. As a result, our theoretical results are more general than classical extreme value theory and are applicable to research problems related to Big Data. Examples are provided to give intuition for the new distribution family. We also establish mixing conditions for a sequence of random variables to have the limit distributions. Results for the associated independent sequence and for maxima over arbitrary intervals are also developed. We use simulations to demonstrate the advantages of the newly established extreme value theory for maxima of maxima.
{"title":"New extreme value theory for maxima of maxima","authors":"Wenzhi Cao, Zhengjun Zhang","doi":"10.1080/24754269.2020.1846115","DOIUrl":"https://doi.org/10.1080/24754269.2020.1846115","url":null,"abstract":"Although advanced statistical models have been proposed to fit complex data better, the advances of science and technology have generated more complex data, e.g., Big Data, in which existing probability theory and statistical models find their limitations. This work establishes probability foundations for studying extreme values of data generated from a mixture process with the mixture pattern depending on the sample length and data generating sources. In particular, we show that the limit distribution, termed as the accelerated max-stable distribution, of the maxima of maxima of sequences of random variables with the above mixture pattern is a product of three types of extreme value distributions. As a result, our theoretical results are more general than the classical extreme value theory and can be applicable to research problems related to Big Data. Examples are provided to give intuitions of the new distribution family. We also establish mixing conditions for a sequence of random variables to have the limit distributions. The results for the associated independent sequence and the maxima over arbitrary intervals are also developed. We use simulations to demonstrate the advantages of our newly established maxima of maxima extreme value theory.","PeriodicalId":22070,"journal":{"name":"Statistical Theory and Related Fields","volume":"5 1","pages":"232 - 252"},"PeriodicalIF":0.5,"publicationDate":"2020-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24754269.2020.1846115","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47671905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nonlinear prediction via Hermite transformation
Pub Date: 2020-12-17 | DOI: 10.1080/24754269.2020.1856589
T. McElroy, Srinjoy Das
ABSTRACT General prediction formulas involving Hermite polynomials are developed for time series expressed as a transformation of a Gaussian process. The prediction gains over linear predictors are examined numerically, demonstrating the improvement offered by nonlinear prediction.
{"title":"Nonlinear prediction via Hermite transformation","authors":"T. McElroy, Srinjoy Das","doi":"10.1080/24754269.2020.1856589","DOIUrl":"https://doi.org/10.1080/24754269.2020.1856589","url":null,"abstract":"ABSTRACT General prediction formulas involving Hermite polynomials are developed for time series expressed as a transformation of a Gaussian process. The prediction gains over linear predictors are examined numerically, demonstrating the improvement of nonlinear prediction.","PeriodicalId":22070,"journal":{"name":"Statistical Theory and Related Fields","volume":"5 1","pages":"49 - 54"},"PeriodicalIF":0.5,"publicationDate":"2020-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24754269.2020.1856589","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44941464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nonignorable item nonresponse in panel data
Pub Date: 2020-12-17 | DOI: 10.1080/24754269.2020.1856591
Sijing Li, J. Shao
To estimate unknown population parameters based on panel data with nonignorable item nonresponse, we propose an innovative data grouping approach according to the number of observed components in the multivariate outcome, in the setting where the joint distribution of the outcome and the associated covariate is nonparametric and the nonresponse probability, conditional on the outcome and the covariate, has a parametric form. To deal with the identifiability issue, we utilise a nonresponse instrument, an auxiliary variable related to the outcome but not related to the nonresponse probability conditional on the outcome and the covariate. We apply a modified generalised method of moments to obtain estimators of the parameters in the nonresponse probability, and generalised regression estimation to utilise covariate information for efficient estimation of the population parameters. Consistency and asymptotic normality of the proposed estimators of the population parameters are established. Simulation and real data results are presented.
{"title":"Nonignorable item nonresponse in panel data","authors":"Sijing Li, J. Shao","doi":"10.1080/24754269.2020.1856591","DOIUrl":"https://doi.org/10.1080/24754269.2020.1856591","url":null,"abstract":"To estimate unknown population parameters based on panel data having nonignorable item nonresponse, we propose an innovative data grouping approach according to the number of observed components in the multivariate outcome when the joint distribution of and associated covariate is nonparametric and the nonresponse probability conditional on and has a parametric form. To deal with the identifiability issue, we utilise a nonresponse instrument , an auxiliary variable related to but not related to the nonresponse probability conditional on and . We apply a modified generalised method of moments to obtain estimators of the parameters in the nonresponse probability, and a generalised regression estimation to utilise covariate information for efficient estimation of population parameters. Consistency and asymptotic normality of the proposed estimators of the population parameters are established. Simulation and real data results are presented.","PeriodicalId":22070,"journal":{"name":"Statistical Theory and Related Fields","volume":"6 1","pages":"58 - 71"},"PeriodicalIF":0.5,"publicationDate":"2020-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24754269.2020.1856591","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42460662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research on three-step accelerated gradient algorithm in deep learning
Pub Date: 2020-11-23 | DOI: 10.1080/24754269.2020.1846414
Yongqiang Lian, Yincai Tang, Shirong Zhou
The gradient descent (GD) algorithm is a widely used optimisation method for training machine learning and deep learning models. In this paper, based on GD, Polyak's momentum (PM), and the Nesterov accelerated gradient (NAG), we derive the convergence of these algorithms from an initial value to the optimal value of an objective function in simple quadratic form. Based on the convergence property of the quadratic function, the two sister sequences of NAG's iteration, and parallel tangent methods in neural networks, we propose the three-step accelerated gradient (TAG) algorithm, which uses three sequences rather than two sister sequences. To illustrate the performance of this algorithm, we compare it with the three other algorithms on a quadratic function, high-dimensional quadratic functions, and a nonquadratic function. We then combine the TAG algorithm with the backpropagation algorithm and the stochastic gradient descent algorithm in deep learning. To facilitate use of the proposed algorithms, we rewrite the R package 'neuralnet' and extend it to 'supneuralnet'; all deep learning algorithms in this paper are included in the 'supneuralnet' package. Finally, we show that our algorithms are superior to the other algorithms in four case studies.
{"title":"Research on three-step accelerated gradient algorithm in deep learning","authors":"Yongqiang Lian, Yincai Tang, Shirong Zhou","doi":"10.1080/24754269.2020.1846414","DOIUrl":"https://doi.org/10.1080/24754269.2020.1846414","url":null,"abstract":"Gradient descent (GD) algorithm is the widely used optimisation method in training machine learning and deep learning models. In this paper, based on GD, Polyak's momentum (PM), and Nesterov accelerated gradient (NAG), we give the convergence of the algorithms from an initial value to the optimal value of an objective function in simple quadratic form. Based on the convergence property of the quadratic function, two sister sequences of NAG's iteration and parallel tangent methods in neural networks, the three-step accelerated gradient (TAG) algorithm is proposed, which has three sequences other than two sister sequences. To illustrate the performance of this algorithm, we compare the proposed algorithm with the three other algorithms in quadratic function, high-dimensional quadratic functions, and nonquadratic function. Then we consider to combine the TAG algorithm to the backpropagation algorithm and the stochastic gradient descent algorithm in deep learning. For conveniently facilitate the proposed algorithms, we rewite the R package ‘neuralnet’ and extend it to ‘supneuralnet’. All kinds of deep learning algorithms in this paper are included in ‘supneuralnet’ package. Finally, we show our algorithms are superior to other algorithms in four case studies.","PeriodicalId":22070,"journal":{"name":"Statistical Theory and Related Fields","volume":"6 1","pages":"40 - 57"},"PeriodicalIF":0.5,"publicationDate":"2020-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24754269.2020.1846414","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44709557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Target toxicity design for phase I dose-finding
Pub Date: 2020-08-13 | DOI: 10.1080/24754269.2020.1800331
Wenchuan Guo, B. Zhong
We propose a new two-/three-stage dose-finding design called Target Toxicity (TT) for phase I clinical trials, in which the decision rules in the dose-finding process are linked with the conclusions from a hypothesis test. The power to detect excessive toxicity is also given, which explains why a minimal number of patients is needed at the selected dose level. Our method provides a statistical explanation of the traditional '3+3' design within a frequentist framework. The proposed method is very flexible and incorporates other interval-based decision rules through different parameter settings. We provide decision tables to guide investigators on when to decrease, increase, or repeat a dose for the next cohort of subjects. Simulation experiments were conducted to compare the performance of the proposed method with other dose-finding designs. A free open-source R package, tsdf, is available on CRAN; it is dedicated to deriving two-/three-stage design decision tables and performing dose-finding simulations.
{"title":"Target toxicity design for phase I dose-finding","authors":"Wenchuan Guo, B. Zhong","doi":"10.1080/24754269.2020.1800331","DOIUrl":"https://doi.org/10.1080/24754269.2020.1800331","url":null,"abstract":"We propose a new two-/three-stage dose-finding design called Target Toxicity (TT) for phase I clinical trials, where we link the decision rules in the dose-finding process with the conclusions from a hypothesis test. The power to detect excessive toxicity is also given. This solves the problem of why the minimal number of patients is needed for the selected dose level. Our method provides a statistical explanation of traditional ‘3+3’ design using frequentist framework. The proposed method is very flexible and it incorporates other interval-based decision rules through different parameter settings. We provide the decision tables to guide investigators when to decrease, increase or repeat a dose for next cohort of subjects. Simulation experiments were conducted to compare the performance of the proposed method with other dose-finding designs. A free open source R package tsdf is available on CRAN. It is dedicated to deriving two-/three-stage design decision tables and perform dose-finding simulations.","PeriodicalId":22070,"journal":{"name":"Statistical Theory and Related Fields","volume":"5 1","pages":"149 - 161"},"PeriodicalIF":0.5,"publicationDate":"2020-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24754269.2020.1800331","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46466841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Covariate balancing based on kernel density estimates for controlled experiments
Pub Date: 2020-08-12 | DOI: 10.1080/24754269.2021.1878742
Yiou Li, Lulu Kang, Xiao Huang
ABSTRACT Controlled experiments are widely used in many applications to investigate the causal relationship between input factors and experimental outcomes. A completely randomised design is usually used to randomly assign treatment levels to experimental units. When covariates of the experimental units are available, the experimental design should achieve covariate balance among the treatment groups, so that the statistical inference of the treatment effects is not confounded with any possible effects of the covariates. However, covariate imbalance often exists, because the experiment is carried out based on a single realisation of the complete randomisation. Imbalance is more likely to occur, and to worsen, when the number of experimental units is small or moderate. In this paper, we introduce a new covariate balancing criterion that measures the differences between kernel density estimates of the covariates of the treatment groups. To achieve covariate balance before the treatments are randomly assigned, we partition the experimental units by minimising the criterion, then randomly assign the treatment levels to the partitioned groups. Through numerical examples, we show that the proposed partition approach can improve the accuracy of the difference-in-mean estimator and outperforms the complete randomisation and rerandomisation approaches.
{"title":"Covariate balancing based on kernel density estimates for controlled experiments","authors":"Yiou Li, Lulu Kang, Xiao Huang","doi":"10.1080/24754269.2021.1878742","DOIUrl":"https://doi.org/10.1080/24754269.2021.1878742","url":null,"abstract":"ABSTRACT Controlled experiments are widely used in many applications to investigate the causal relationship between input factors and experimental outcomes. A completely randomised design is usually used to randomly assign treatment levels to experimental units. When covariates of the experimental units are available, the experimental design should achieve covariate balancing among the treatment groups, such that the statistical inference of the treatment effects is not confounded with any possible effects of covariates. However, covariate imbalance often exists, because the experiment is carried out based on a single realisation of the complete randomisation. It is more likely to occur and worsen when the size of the experimental units is small or moderate. In this paper, we introduce a new covariate balancing criterion, which measures the differences between kernel density estimates of the covariates of treatment groups. To achieve covariate balance before the treatments are randomly assigned, we partition the experimental units by minimising the criterion, then randomly assign the treatment levels to the partitioned groups. Through numerical examples, we show that the proposed partition approach can improve the accuracy of the difference-in-mean estimator and outperforms the complete randomisation and rerandomisation approaches.","PeriodicalId":22070,"journal":{"name":"Statistical Theory and Related Fields","volume":"5 1","pages":"102 - 113"},"PeriodicalIF":0.5,"publicationDate":"2020-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24754269.2021.1878742","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41800210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A three-parameter logistic regression model
Pub Date: 2020-07-24 | DOI: 10.1080/24754269.2020.1796098
Xiaoli Yu, Shaoting Li, Jiahua Chen
Dose–response experiments and data analyses are often carried out according to an optimal design under a model assumption. A two-parameter logistic model is often used because of its nice mathematical properties and plausible stochastic response mechanisms. There is an extensive literature on its optimal designs and data analysis strategies. However, a model is at best a good approximation in a real-world application, and researchers must be aware of the risk of model mis-specification. In this paper, we investigate the effectiveness of the sequential ED-design, the D-optimal design, and the up-and-down design under the three-parameter logistic regression model, and we develop a numerical method for the parameter estimation. Simulations show that the combination of the proposed model and the data analysis strategy performs well. When the logistic model is correct, this more complex model has hardly any efficiency loss. The three-parameter logistic model works better than the two-parameter logistic model in the presence of model mis-specification.
{"title":"A three-parameter logistic regression model","authors":"Xiaoli Yu, Shaoting Li, Jiahua Chen","doi":"10.1080/24754269.2020.1796098","DOIUrl":"https://doi.org/10.1080/24754269.2020.1796098","url":null,"abstract":"Dose–response experiments and data analyses are often carried out according to an optimal design under a model assumption. A two-parameter logistic model is often used because of its nice mathematical properties and plausible stochastic response mechanisms. There is an extensive literature on its optimal designs and data analysis strategies. However, a model is at best a good approximation in a real-world application, and researchers must be aware of the risk of model mis-specification. In this paper, we investigate the effectiveness of the sequential ED-design, the D-optimal design, and the up-and-down design under the three-parameter logistic regression model, and we develop a numerical method for the parameter estimation. Simulations show that the combination of the proposed model and the data analysis strategy performs well. When the logistic model is correct, this more complex model has hardly any efficiency loss. The three-parameter logistic model works better than the two-parameter logistic model in the presence of model mis-specification.","PeriodicalId":22070,"journal":{"name":"Statistical Theory and Related Fields","volume":"5 1","pages":"265 - 274"},"PeriodicalIF":0.5,"publicationDate":"2020-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24754269.2020.1796098","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46183255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}