Propensity scores (PS) have been studied for many years, mostly as a tool for matching confounders between control and treatment groups. This work addresses the estimation of the causal impact of treatment versus control in observational studies, based on the simulation of thousands of scenarios with a measured causal outcome. A known treatment effect was added to the simulated outcome, then recovered using PS and regression estimates, and the results were compared with the true treatment values set in the simulation. It is shown that the propensity score only rarely solves the causality problem successfully, and that regressions often outperform the PS estimates. The results support, from a statistical point of view, the old philosophical critique of the counterfactual theory of causation.
{"title":"Limitations of the propensity scores approach: A simulation study","authors":"Igor Mandel","doi":"10.3233/mas-241505","DOIUrl":"https://doi.org/10.3233/mas-241505","url":null,"abstract":"Propensity scores (PS) have been studied for many years, mostly in the aspect of confounder matching in the control and treatment groups. This work is devoted to the problem of estimation of the causal impact of the treatment versus control data in observational studies, and it is based on the simulation of thousands of scenarios and the measurement of the causal outcome. The generated treatment effect was added in simulation to the outcome, then it was retrieved using the PS and regression estimations, and the results were compared with the original known in the simulation treatment values. It is shown that only rarely the propensity score can successfully solve the causality problem, and the regressions often outperform the PS estimations. The results support the old philosophical critique of the counterfactual theory of causation from a statistical point of view.","PeriodicalId":35000,"journal":{"name":"Model Assisted Statistics and Applications","volume":"44 22","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141355017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In medical research, it is common to estimate parameters for each group and then evaluate them separately, without comparing the groups. However, researchers frequently want to determine whether the distributions implied by the estimated parameters differ significantly between the two groups. For the Weibull distribution, the two-sample (two-sided) Kolmogorov-Smirnov test has been used to examine whether the two distributions differ significantly. Building on this, we developed a method to compare two groups using a three-parameter Fréchet distribution. The number of days from drug administration to remission frequently follows a Fréchet distribution, and a three-parameter version with a location parameter is appropriate because patients typically go into remission only after several days of drug administration. We propose a minimum variance linear estimator with a hyperparameter (MVLE-H) for the three-parameter Fréchet distribution, based on the MVLE-H method for the three-parameter Weibull distribution. We verified the effectiveness of the MVLE-H method and the two-sample (two-sided) Kolmogorov-Smirnov test on the three-parameter Fréchet distribution using Monte Carlo simulations and numerical examples.
{"title":"Estimation of three-parameter Fréchet distribution for the number of days from drug administration to remission in small sample sizes","authors":"T. Ogura, C. Shiraishi","doi":"10.3233/mas-231466","DOIUrl":"https://doi.org/10.3233/mas-231466","url":null,"abstract":"In medical research, it is common to estimate parameters for each group and then evaluate the estimated parameters for each group without comparing the groups. However, researchers frequently want to determine whether the two distributions using the estimated parameters differ significantly between the two groups. For the Weibull distribution, the two-sample Kolmogorov-Smirnov test (two-sided) was used to examine whether the two distributions were significantly different between the two groups. Based on this, we developed a method to compare the two groups using a three-parameter Fréchet distribution. The number of days from drug administration to remission frequently followed a Fréchet distribution. It is appropriate to use a three-parameter Fréchet distribution with a location parameter because patients typically go into remission after several days of drug administration. We propose a minimum variance linear estimator with a hyperparameter (MVLE-H) method for estimating a three-parameter Fréchet distribution based on the MVLE-H method for estimating a three-parameter Weibull distribution. We verified the effectiveness of the MVLE-H method and the two-sample Kolmogorov-Smirnov test (two-sided) on the three-parameter Fréchet distribution using Monte Carlo simulations and numerical examples.","PeriodicalId":35000,"journal":{"name":"Model Assisted Statistics and Applications","volume":"14 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141356084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Health technology assessments of interventions impacting survival often require extrapolating current data to gain a better understanding of the interventions' long-term benefits. Extrapolation requires both a comprehensive examination of the trial data up to the maximum follow-up period and the fitting of parametric models. It is standard practice to visually compare the parametric curves to the Kaplan-Meier survival estimate (or to compare hazard estimates) and to assess the parametric models using likelihood-based information criteria. In place of these two steps, this work demonstrates how to minimize the squared distance between parametric estimators and the Kaplan-Meier estimate. This amounts to model selection by Mean Squared Error, with the modification that the unknown true survival function is replaced by the Kaplan-Meier estimate. Adhering to this procedure helps assure the internal validity of the extrapolated model and its faithful representation of the data. We use both simulated and real-world data, including a scenario where no model that properly fits the data could be found, to illustrate how this process can aid model selection.
{"title":"Parametric analysis and model selection for economic evaluation of survival data","authors":"Szilárd Nemes","doi":"10.3233/mas-241506","DOIUrl":"https://doi.org/10.3233/mas-241506","url":null,"abstract":"Health technology assessments of interventions impacting survival often require extrapolating current data to gain a better understanding of the interventions’ long-term benefits. Both a comprehensive examination of the trial data up to the maximum follow-up period and the fitting of parametric models are required for extrapolation. It is standard practice to visually compare the parametric curves to the Kaplan-Meier survival estimate (or comparison of hazard estimates) and to assess the parametric models using likelihood-based information criteria. In place of these two steps, this work demonstrates how to minimize the squared distance of parametric estimators to the Kaplan-Meier estimate. This is in line with the selection of the model using Mean Squared Error, with the modification that the unknown true survival is replaced by the Kaplan-Meier estimate. We would assure the internal validity of the extrapolated model and its appropriate representation of the data by adhering to this procedure. We use both simulation and real-world data with a scenario where no model that properly fits the data could be found to illustrate how this process can aid in model selection.","PeriodicalId":35000,"journal":{"name":"Model Assisted Statistics and Applications","volume":"85 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141357978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Volatility is a central concern in time series modeling: it provides valuable insights into the fluctuation and stability of the variables of interest over time, and volatility patterns in historical data can inform predictions of future behaviour. Nonlinear time series models such as the autoregressive conditional heteroscedastic (ARCH) model and its generalized version, the generalized ARCH (GARCH) model, are popular for capturing the volatility of a time series. The realizations of a time series may also exhibit significant statistical dependence on distant past values, a phenomenon known as long memory, and such long-memory structure can be present in volatility as well. Fractionally integrated volatility models, such as the fractionally integrated GARCH (FIGARCH) model, can capture long memory in volatility. In this paper, we derive out-of-sample forecast formulae, along with the corresponding forecast error variances, for the AR(1)-FIGARCH(1, d, 1) model by recursive use of conditional expectations and conditional variances. For empirical illustration, we use modal spot prices of onion for the Delhi, Lasalgaon and Bengaluru markets in India, and S&P 500 index (close) data.
{"title":"Development of out-of-sample forecast formulae for the FIGARCH model","authors":"Debopam Rakshit, R. Paul","doi":"10.3233/mas-241510","DOIUrl":"https://doi.org/10.3233/mas-241510","url":null,"abstract":"Volatility is a matter of concern for time series modeling. It provides valuable insights into the fluctuation and stability of concerning variables over time. Volatility patterns in historical data can provide valuable information for predicting future behaviour. Nonlinear time series models such as the autoregressive conditional heteroscedastic (ARCH) and the generalized version of the ARCH model, i.e. generalized ARCH (GARCH) models are popularly used for capturing the volatility of a time series. The realization of any time series may have significant statistical dependencies on its distant counterpart. This phenomenon is known as the long memory process. Long memory structure can also be present in volatility. Fractionally integrated volatility models such as the fractionally integrated GARCH (FIGARCH) model can be used to capture the long memory in volatility. In this paper, we derived the out-of-sample forecast formulae along with the forecast error variances for the AR (1) -FIGARCH (1, d, 1) model by recursive use of conditional expectations and conditional variances. For empirical illustration, the modal spot prices of onion for Delhi, Lasalgaon and Bengaluru markets, India and S&P 500 index (close) data are used.","PeriodicalId":35000,"journal":{"name":"Model Assisted Statistics and Applications","volume":"84 20","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141359800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shared frailty models are widely used despite their limitations; correlated frailty models can overcome some of these disadvantages. In this paper, we introduce correlated compound Poisson frailty models with two different baseline distributions, namely the generalized log-logistic and the generalized Weibull. We develop a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters involved in these models, and present a simulation study comparing the true parameter values with their estimates. We also apply these models to the real-life bivariate survival data set of McGilchrist and Aisbett (1991) on kidney infection and suggest a better-fitting model for the data.
{"title":"Analysis of kidney infection data using correlated compound poisson frailty models","authors":"David D. Hanagal","doi":"10.3233/mas-231452","DOIUrl":"https://doi.org/10.3233/mas-231452","url":null,"abstract":"Shared frailty models are used despite their limitations. To overcome their disadvantages correlated frailty models may be used. In this paper, we introduce the correlated compound Poisson frailty models with two different baseline distributions namely, the generalized log logistic and the generalized Weibull. We introduce the Bayesian estimation procedure using Markov Chain Monte Carlo (MCMC) technique to estimate the parameters involved in these models. We present a simulation study to compare the true values of the parameters with the estimated values. Also we apply these models to a real life bivariate survival data set of McGilchrist and Aisbett (1991) related to the kidney infection data and a better model is suggested for the data.","PeriodicalId":35000,"journal":{"name":"Model Assisted Statistics and Applications","volume":"28 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141357223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In their article, Erbayram and Akdoğan (Ricerche di Matematica, 2023) introduced the Poisson-transmuted record type exponential distribution by combining the Poisson and transmuted record type exponential distributions. This article presents a novel approach to modeling count time series: an integer-valued time series model built on the binomial thinning framework, with the Poisson-transmuted record type exponential distribution as the innovation distribution. The model is remarkably proficient at representing over-dispersed integer-valued time series and, under this flexible and highly dependable configuration, accurately captures the underlying patterns in the data. A comprehensive analysis of the statistical characteristics of the process is given. The conditional maximum likelihood and conditional least squares methods are employed to estimate the process parameters, and the performance of the estimates is evaluated through extensive simulation studies. Finally, the proposed model is validated on real time series data and compared against existing models to demonstrate its practical effectiveness.
{"title":"INAR(1) process with Poisson-transmuted record type exponential innovations","authors":"M. Irshad, Muhammed Ahammed, R. Maya, Christophe Chesneau","doi":"10.3233/mas-231458","DOIUrl":"https://doi.org/10.3233/mas-231458","url":null,"abstract":"In their article, Erbayram and Akdoğan (Ricerche di Matematica, 2023) introduced the Poisson-transmuted record type exponential distribution by combining the Poisson and transmuted record type exponential distributions. This article presents a novel approach to modeling time series data using integer-valued time series with binomial thinning framework and the Poisson-transmuted record type exponential distribution as the innovation distribution. This model demonstrates remarkable proficiency in accurately representing over-dispersed integer-valued time series. Under this configuration, which is a flexible and highly dependable choice, the model accurately captures the underlying patterns present in the time series data. A comprehensive analysis of the statistical characteristics of the process is given. The conditional maximum likelihood and conditional least squares methods are employed to estimate the process parameters. The performance of the estimates is meticulously evaluated through extensive simulation studies. Finally, the proposed model is validated using real-time series data and compared against existing models to demonstrate its practical effectiveness.","PeriodicalId":35000,"journal":{"name":"Model Assisted Statistics and Applications","volume":"28 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141355710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Receiver operating characteristic (ROC) curves play a pivotal role in the analysis of data collected in applications involving machine vision, machine learning and clinical diagnostics. Their importance lies in the fact that decision-making strategies rely on interpreting the curves and the features extracted from them. Such analyses become simple and straightforward if a statistical fit to the empirical ROC curve is available. A methodology is developed and demonstrated for obtaining a parametric fit to ROC curves using multiple statistical tools, including chi-square testing, bootstrapping (parametric and non-parametric) and t-testing. Relying on three data sets and an ensemble of density functions used in modeling sensor and econometric data, statistical modeling of the ROC curves (best fit) is accomplished. While the reported research relied on simulated data sets, the approaches implemented and demonstrated in this work can easily be adapted to data collected in clinical as well as non-clinical settings.
{"title":"Parametric modeling of receiver operating characteristics curves","authors":"P.M. Shankar","doi":"10.3233/mas-231475","DOIUrl":"https://doi.org/10.3233/mas-231475","url":null,"abstract":"Receiver operating characteristics (ROC) curves play a pivotal role in the analyses of data collected in applications involving machine vision, machine learning and clinical diagnostics. The importance of ROC curves lies in the fact that all decision-making strategies rely on the interpretations of the curves and features extracted from them. Such analyses become simple and straightforward if it is possible to have a statistical fit for the empirical ROC curve. A methodology is developed and demonstrated to obtain a parametric fit for the ROC curves using multiple tools in statistics such as chi square testing, bootstrapping (parametric and non-parametric) and t-testing. Relying on three data sets and an ensemble of density functions used in modeling sensor and econometric data, statistical modeling of the ROC curves (best fit) is accomplished. While the reported research relied on simulated data sets, the approaches implemented and demonstrated in this work can easily be adapted to data collected in clinical as well as non-clinical settings.","PeriodicalId":35000,"journal":{"name":"Model Assisted Statistics and Applications","volume":"68 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141360110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a new asymmetric V-shaped distribution for fitting continuous data. We investigate some of its statistical properties, such as the mean, median, variance, survival function, and hazard function. Furthermore, we show how to generate the proposed asymmetric V-shaped distribution from two uniformly distributed random variables. Three examples illustrate the advantages of the asymmetric V-shaped distribution on simulated and real-life data sets.
{"title":"An asymmetric V-shaped distribution","authors":"Tai Vo-Van, Thao Nguyen-Trang, Ha Che-Ngoc","doi":"10.3233/mas-231441","DOIUrl":"https://doi.org/10.3233/mas-231441","url":null,"abstract":"This paper proposes a new asymmetric V-shaped distribution for fitting continuous data. In this study, some statistical properties, such as the mean, the median, the variance, the survival, and the hazard function of the new distribution are investigated. Furthermore, we also presented how to generate the proposed asymmetric V-shaped distribution based on two random variables that have uniform distributions. Three examples are presented to illustrate the advantages of the asymmetric V-shaped distribution for some simulated and real-life data sets.","PeriodicalId":35000,"journal":{"name":"Model Assisted Statistics and Applications","volume":"93 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140242574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We obtain new mathematical properties of the exponentiated odd log-logistic family of distributions, of its special case called the exponentiated odd log-logistic Weibull, and of its log-transformed version. A new location-scale regression model is constructed, and simulations are carried out to verify the behavior of the maximum likelihood estimators and of the modified deviance-based residuals. The methodology is applied to Japanese-Brazilian emigration data.
{"title":"New results and regression model for the exponentiated odd log-logistic family with applications","authors":"Gabriela M. Rodrigues, Roberto Vila, E. M. Ortega, G. Cordeiro, Victor Serra","doi":"10.3233/mas-231450","DOIUrl":"https://doi.org/10.3233/mas-231450","url":null,"abstract":"We obtain new mathematical properties of the exponentiated odd log-logistic family of distributions, and of its special case named the exponentiated odd log-logistic Weibull, and its log transformed. A new location and scale regression model is constructed, and some simulations are carried out to verify the behavior of the maximum likelihood estimators, and of the modified deviance-based residuals. The methodology is applied to the Japanese-Brazilian emigration data.","PeriodicalId":35000,"journal":{"name":"Model Assisted Statistics and Applications","volume":"26 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140242836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Monte Carlo simulation study was conducted to investigate the performance of the full information maximum likelihood (FIML) estimator in multilevel structural equation modeling (SEM) with missing data and different intra-class correlation (ICC) coefficients. The study simulated the influence of two independent variables (missing data patterns and ICC coefficients) in multilevel SEM on five outcome measures (model rejection rates, parameter estimate bias, standard error bias, coverage, and power). Results indicated that FIML parameter estimates were generally robust to data missing on outcomes and/or higher-level predictor variables, both under missing completely at random (MCAR) and missing at random (MAR) mechanisms. However, FIML estimation yielded substantially lower parameter and standard error bias when data were not missing on higher-level variables, and under high rather than low ICC conditions (0.50 vs. 0.20). Future research should further examine the impacts of data distribution, complexity of the between-level model, and missingness on between-level variables on FIML estimation performance.
{"title":"Impact of missing data and ICC on full information maximum-likelihood estimation in multilevel SEMs","authors":"Chunling Niu","doi":"10.3233/mas-231444","DOIUrl":"https://doi.org/10.3233/mas-231444","url":null,"abstract":"A Monte Carlo simulation study was conducted to investigate the performance of full information maximum-likelihood (FIML) estimator in multilevel structural equation modeling (SEM) with missing data and different intra-class correlations (ICCs) coefficients. The study simulated the influence of two independent variables (missing data patterns, and ICC coefficients) in multilevel SEM on five outcome measures (model rejection rates, parameter estimate bias, standard error bias, coverage, and power). Results indicated that FIML parameter estimates were generally robust for data missing on outcomes and/or higher-level predictor variables under the data completely at random (MCAR) and for data missing at random (MAR). However, FIML estimation yielded substantially lower parameter and standard error bias when data was not missing on higher-level variables, and in high rather than in low ICC conditions (0.50 vs 0.20). Future research should extend to further examination of the impacts of data distribution, complexity of the between-level model, and missingness on the between-level variables on FIML estimation performance.","PeriodicalId":35000,"journal":{"name":"Model Assisted Statistics and Applications","volume":"11 s2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140243591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}