DINA-BAG: A Bagging Algorithm for DINA Model Parameter Estimation in Small Samples
D. Arthur, Hua-Hua Chang
Pub Date: 2023-08-07 | DOI: 10.3102/10769986231188442
Cognitive diagnosis models (CDMs) are assessment tools that provide valuable formative feedback about skill mastery at both the individual and population levels. Recent work has explored the performance of CDMs with small sample sizes but has focused solely on estimates of individual profiles. The current research focuses on obtaining accurate estimates of skill mastery at the population level. We introduce a novel algorithm (a bagging algorithm for the deterministic inputs, noisy “and” gate model) that is inspired by ensemble learning methods from the machine learning literature and produces more stable and accurate estimates of the population skill mastery profile distribution for small sample sizes. Using both simulated data and real data from the Examination for the Certificate of Proficiency in English, we demonstrate that the proposed method outperforms other methods on several metrics in a wide variety of scenarios.
{"title":"DINA-BAG: A Bagging Algorithm for DINA Model Parameter Estimation in Small Samples","authors":"D. Arthur, Hua-Hua Chang","doi":"10.3102/10769986231188442","DOIUrl":"https://doi.org/10.3102/10769986231188442","url":null,"abstract":"Cognitive diagnosis models (CDMs) are the assessment tools that provide valuable formative feedback about skill mastery at both the individual and population level. Recent work has explored the performance of CDMs with small sample sizes but has focused solely on the estimates of individual profiles. The current research focuses on obtaining accurate estimates of skill mastery at the population level. We introduce a novel algorithm (bagging algorithm for deterministic inputs noisy “and” gate) that is inspired by ensemble learning methods in the machine learning literature and produces more stable and accurate estimates of the population skill mastery profile distribution for small sample sizes. Using both simulated data and real data from the Examination for the Certificate of Proficiency in English, we demonstrate that the proposed method outperforms other methods on several metrics in a wide variety of scenarios.","PeriodicalId":48001,"journal":{"name":"Journal of Educational and Behavioral Statistics","volume":" ","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47475754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian Change-Point Analysis Approach to Detecting Aberrant Test-Taking Behavior Using Response Times
Hongyue Zhu, Hong Jiao, Wei Gao, Xiangbin Meng
Pub Date: 2023-07-24 | DOI: 10.3102/10769986231151961
Change-point analysis (CPA) is a method for detecting abrupt changes in the parameter(s) underlying a sequence of random variables. It has been applied to detect examinees’ aberrant test-taking behavior by identifying abrupt changes in test performance. Previous studies utilized maximum likelihood estimates of ability parameters, focusing on detecting one change point per examinee. This article proposes a Bayesian CPA procedure that uses response times (RTs) to detect abrupt changes in examinee speed, which may be related to aberrant responding behaviors. The lognormal RT model is used to derive a procedure for detecting aberrant RT patterns. The method treats the number and locations of the change points as model parameters, allowing it to detect multiple change points or multiple aberrant behaviors. Given the change points, the speed within each segment of the test can be estimated, which enables more accurate inferences about aberrant behaviors. Simulation results indicate that the proposed procedure can effectively detect simulated aberrant behaviors and estimate change points accurately. The method’s applicability is demonstrated with data from a high-stakes computerized adaptive test.
{"title":"Bayesian Change-Point Analysis Approach to Detecting Aberrant Test-Taking Behavior Using Response Times","authors":"Hongyue Zhu, Hong Jiao, Wei Gao, Xiangbin Meng","doi":"10.3102/10769986231151961","DOIUrl":"https://doi.org/10.3102/10769986231151961","url":null,"abstract":"Change-point analysis (CPA) is a method for detecting abrupt changes in parameter(s) underlying a sequence of random variables. It has been applied to detect examinees’ aberrant test-taking behavior by identifying abrupt test performance change. Previous studies utilized maximum likelihood estimations of ability parameters, focusing on detecting one change point for each examinee. This article proposes a Bayesian CPA procedure using response times (RTs) to detect abrupt changes in examinee speed, which may be related to aberrant responding behaviors. The lognormal RT model is used to derive a procedure for detecting aberrant RT patterns. The method takes the numbers and locations of the change points as parameters in the model to detect multiple change points or multiple aberrant behaviors. Given the change points, the corresponding speed of each segment in the test can be estimated, which enables more accurate inferences about aberrant behaviors. Simulation study results indicate that the proposed procedure can effectively detect simulated aberrant behaviors and estimate change points accurately. The method is applied to data from a high-stakes computerized adaptive test, where its applicability is demonstrated.","PeriodicalId":48001,"journal":{"name":"Journal of Educational and Behavioral Statistics","volume":"48 1","pages":"490 - 520"},"PeriodicalIF":2.4,"publicationDate":"2023-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46477500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Improved Inferential Procedure to Evaluate Item Discriminations in a Conditional Maximum Likelihood Framework
Clemens Draxler, A. Kurz, Can Gürer, J. Nolte
Pub Date: 2023-07-19 | DOI: 10.3102/10769986231183335
A modified and improved inductive inferential approach to evaluating item discriminations in a conditional maximum likelihood and Rasch modeling framework is suggested. The new approach involves the derivation of four hypothesis tests. It imposes a linear restriction on the set of probability distributions assumed in the classical approach, representing scenarios of different item discriminations in a straightforward and efficient manner. The improvement over classical procedures (tests and information criteria) is discussed and illustrated in Monte Carlo experiments as well as in real data examples from educational research. The results show a gain in power of up to 0.3 for the modified tests.
{"title":"An Improved Inferential Procedure to Evaluate Item Discriminations in a Conditional Maximum Likelihood Framework","authors":"Clemens Draxler, A. Kurz, Can Gürer, J. Nolte","doi":"10.3102/10769986231183335","DOIUrl":"https://doi.org/10.3102/10769986231183335","url":null,"abstract":"A modified and improved inductive inferential approach to evaluate item discriminations in a conditional maximum likelihood and Rasch modeling framework is suggested. The new approach involves the derivation of four hypothesis tests. It implies a linear restriction of the assumed set of probability distributions in the classical approach that represents scenarios of different item discriminations in a straightforward and efficient manner. Its improvement is discussed, compared to classical procedures (tests and information criteria), and illustrated in Monte Carlo experiments as well as real data examples from educational research. The results show an improvement of power of the modified tests of up to 0.3.","PeriodicalId":48001,"journal":{"name":"Journal of Educational and Behavioral Statistics","volume":" ","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42084822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Multidimensional Partially Compensatory Response Time Model on Basis of the Log-Normal Distribution
Jochen Ranger, C. König, B. Domingue, Jörg-Tobias Kuhn, Andreas Frey
Pub Date: 2023-07-11 | DOI: 10.3102/10769986231184153
In existing multidimensional extensions of the log-normal response time (LNRT) model, the log response times are decomposed into a linear combination of several latent traits. These models are fully compensatory, as low levels on some traits can be counterbalanced by high levels on others. We propose an alternative multidimensional extension of the LNRT model that assumes the response times can be decomposed into two response time components. Each component is generated by a one-dimensional LNRT model with a different latent trait. Because the response time components—but not the traits—are related additively, the model is partially compensatory. In a simulation study, we investigate the recovery of the model’s parameters and whether the fully and partially compensatory LNRT models can be distinguished empirically. Findings suggest that parameter recovery is good and that the two models can be distinguished under certain conditions. The model’s practical utility is demonstrated with an empirical application, in which the partially compensatory model fits better than the fully compensatory one.
{"title":"A Multidimensional Partially Compensatory Response Time Model on Basis of the Log-Normal Distribution","authors":"Jochen Ranger, C. König, B. Domingue, Jörg-Tobias Kuhn, Andreas Frey","doi":"10.3102/10769986231184153","DOIUrl":"https://doi.org/10.3102/10769986231184153","url":null,"abstract":"In the existing multidimensional extensions of the log-normal response time (LNRT) model, the log response times are decomposed into a linear combination of several latent traits. These models are fully compensatory as low levels on traits can be counterbalanced by high levels on other traits. We propose an alternative multidimensional extension of the LNRT model by assuming that the response times can be decomposed into two response time components. Each response time component is generated by a one-dimensional LNRT model with a different latent trait. As the response time components—but not the traits—are related additively, the model is partially compensatory. In a simulation study, we investigate the recovery of the model’s parameters. We also investigate whether the fully and the partially compensatory LNRT model can be distinguished empirically. Findings suggest that parameter recovery is good and that the two models can be distinctly identified under certain conditions. The utility of the model in practice is demonstrated with an empirical application. In the empirical application, the partially compensatory model fits better than the fully compensatory model.","PeriodicalId":48001,"journal":{"name":"Journal of Educational and Behavioral Statistics","volume":" ","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45247254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alternatives to Weighted Item Fit Statistics for Establishing Measurement Invariance in Many Groups
Sean Joo, Montserrat Valdivia, Dubravka Svetina Valdivia, Leslie Rutkowski
Pub Date: 2023-07-04 | DOI: 10.3102/10769986231183326
Evaluating scale comparability in international large-scale assessments depends on measurement invariance (MI). The root mean square deviation (RMSD) is a standard method for establishing MI in several programs, such as the Programme for International Student Assessment and the Programme for the International Assessment of Adult Competencies. Previous research showed that the RMSD is unable to detect departures from MI when the latent trait distribution is far from the item difficulty. In this study, we developed three alternatives to the original RMSD: the equal, item-information, and b-norm weighted RMSDs. Specifically, we considered item-centered normalized weight distributions to compute the item characteristic curve difference in the RMSD procedure more efficiently. We compared all methods’ performance in a simulation study; the item-information and b-norm weighted RMSDs showed the most promising results. An empirical example is presented, and implications for researchers are discussed.
{"title":"Alternatives to Weighted Item Fit Statistics for Establishing Measurement Invariance in Many Groups","authors":"Sean Joo, Montserrat Valdivia, Dubravka Svetina Valdivia, Leslie Rutkowski","doi":"10.3102/10769986231183326","DOIUrl":"https://doi.org/10.3102/10769986231183326","url":null,"abstract":"Evaluating scale comparability in international large-scale assessments depends on measurement invariance (MI). The root mean square deviation (RMSD) is a standard method for establishing MI in several programs, such as the Programme for International Student Assessment and the Programme for the International Assessment of Adult Competencies. Previous research showed that the RMSD was unable to detect departures from MI when the latent trait distribution was far from item difficulty. In this study, we developed three alternative approaches to the original RMSD: equal, item information, and b-norm weighted RMSDs. Specifically, we considered the item-centered normalized weight distributions to compute the item characteristic curve difference in the RMSD procedure more efficiently. We further compared all methods’ performance via a simulation study and the item information and b-norm weighted RMSDs showed the most promising results. An empirical example is demonstrated, and implications for researchers are discussed.","PeriodicalId":48001,"journal":{"name":"Journal of Educational and Behavioral Statistics","volume":" ","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41773579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extending an Identified Four-Parameter IRT Model: The Confirmatory Set-4PNO Model
Justin L. Kern
Pub Date: 2023-07-03 | DOI: 10.3102/10769986231181587
Given the frequent presence of slipping and guessing in item responses, models that include their effects are highly important. Unfortunately, the most common such model, the four-parameter item response theory model, has potentially severe deficiencies related to its possible unidentifiability. With this issue in mind, the dyad four-parameter normal ogive (Dyad-4PNO) model was developed. This model allows for slipping and guessing effects by including binary augmented variables—each indicated by two items whose probabilities are determined by slipping and guessing parameters—which are subsequently related to a continuous latent trait through a two-parameter model. Furthermore, the Dyad-4PNO assumes uncertainty as to which items are paired on each augmented variable; in this way, the model is inherently exploratory. The current article introduces the Set-4PNO model, which extends the Dyad-4PNO in two ways. First, the new model allows for more than two items per augmented variable. Second, these item sets are assumed to be fixed; that is, the model is confirmatory. This article discusses the extension and introduces a Gibbs sampling algorithm to estimate the model. A Monte Carlo simulation study shows the efficacy of the algorithm at estimating the model parameters. A real data example shows that the extension may be viable in practice, with the data fitting a more general Set-4PNO model (i.e., more than two items per augmented variable) better than the Dyad-4PNO, 2PNO, 3PNO, and 4PNO models.
{"title":"Extending an Identified Four-Parameter IRT Model: The Confirmatory Set-4PNO Model","authors":"Justin L. Kern","doi":"10.3102/10769986231181587","DOIUrl":"https://doi.org/10.3102/10769986231181587","url":null,"abstract":"Given the frequent presence of slipping and guessing in item responses, models for the inclusion of their effects are highly important. Unfortunately, the most common model for their inclusion, the four-parameter item response theory model, potentially has severe deficiencies related to its possible unidentifiability. With this issue in mind, the dyad four-parameter normal ogive (Dyad-4PNO) model was developed. This model allows for slipping and guessing effects by including binary augmented variables—each indicated by two items whose probabilities are determined by slipping and guessing parameters—which are subsequently related to a continuous latent trait through a two-parameter model. Furthermore, the Dyad-4PNO assumes uncertainty as to which items are paired on each augmented variable. In this way, the model is inherently exploratory. In the current article, the new model, called the Set-4PNO model, is an extension of the Dyad-4PNO in two ways. First, the new model allows for more than two items per augmented variable. Second, these item sets are assumed to be fixed, that is, the model is confirmatory. This article discusses this extension and introduces a Gibbs sampling algorithm to estimate the model. A Monte Carlo simulation study shows the efficacy of the algorithm at estimating the model parameters. A real data example shows that this extension may be viable in practice, with the data fitting a more general Set-4PNO model (i.e., more than two items per augmented variable) better than the Dyad-4PNO, 2PNO, 3PNO, and 4PNO models.","PeriodicalId":48001,"journal":{"name":"Journal of Educational and Behavioral Statistics","volume":" ","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45508174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A General Mixture Model for Cognitive Diagnosis
Joemari Olea, Kevin Carl P. Santos
Pub Date: 2023-06-14 | DOI: 10.3102/10769986231176012
Although the generalized deterministic inputs, noisy “and” gate (G-DINA; de la Torre, 2011) model is a general cognitive diagnosis model (CDM), it does not account for heterogeneity rooted in latent groups existing within the population of examinees. To address this, this study proposes the mixture G-DINA model, a CDM that incorporates the G-DINA model within the finite mixture modeling framework. An expectation–maximization algorithm is developed to estimate the model. To establish the viability of the proposed model, an extensive simulation study examines parameter recovery, model fit, and correct classification rates. Responses to a reading comprehension assessment are analyzed to further demonstrate the model’s capability.
Bayesian Exploratory Factor Analysis via Gibbs Sampling
Adrian Quintero, E. Lesaffre, G. Verbeke
Pub Date: 2023-06-13 | DOI: 10.3102/10769986231176023
Bayesian methods for inferring model dimensionality in factor analysis generally assume a lower triangular structure for the factor loadings matrix; consequently, the ordering of the outcomes influences the results. We therefore propose a method to infer model dimensionality without imposing any prior restriction on the loadings matrix. Our approach considers a relatively large number of factors and includes auxiliary multiplicative parameters, which can render the unnecessary columns of the loadings matrix null. The underlying dimensionality is then inferred from the number of nonnull columns in the loadings matrix, and the model parameters are estimated with a postprocessing scheme. The advantages of the method in selecting the correct dimensionality are illustrated via simulations and real data sets.
{"title":"Bayesian Exploratory Factor Analysis via Gibbs Sampling","authors":"Adrian Quintero, E. Lesaffre, G. Verbeke","doi":"10.3102/10769986231176023","DOIUrl":"https://doi.org/10.3102/10769986231176023","url":null,"abstract":"Bayesian methods to infer model dimensionality in factor analysis generally assume a lower triangular structure for the factor loadings matrix. Consequently, the ordering of the outcomes influences the results. Therefore, we propose a method to infer model dimensionality without imposing any prior restriction on the loadings matrix. Our approach considers a relatively large number of factors and includes auxiliary multiplicative parameters, which may render null the unnecessary columns in the loadings matrix. The underlying dimensionality is then inferred based on the number of nonnull columns in the factor loadings matrix, and the model parameters are estimated with a postprocessing scheme. The advantages of the method in selecting the correct dimensionality are illustrated via simulations and using real data sets.","PeriodicalId":48001,"journal":{"name":"Journal of Educational and Behavioral Statistics","volume":" ","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43721362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian Estimation of Attribute Hierarchy for Cognitive Diagnosis Models
Yinghan Chen, Shiyu Wang
Pub Date: 2023-06-13 | DOI: 10.3102/10769986231174918
Attribute hierarchy, the underlying prerequisite relationship among attributes, plays an important role in applying cognitive diagnosis models (CDMs) to the design of efficient cognitive diagnostic assessments. However, few statistical tools exist for directly estimating the attribute hierarchy from response data. In this study, we propose a Bayesian formulation of the attribute hierarchy within the CDM framework and develop an efficient Metropolis-within-Gibbs algorithm to estimate the underlying hierarchy along with the specified CDM parameters. The proposed estimation method is flexible and can be adapted to a general class of CDMs. A simulation study shows that the proposed method can fully recover, or estimate at least a subgraph of, the underlying structure across various conditions under a specified CDM. A real data application indicates the potential of learning the attribute structure from data using our algorithm and of validating an existing attribute hierarchy specified by content experts.
Deep Learning Imputation for Asymmetric and Incomplete Likert-Type Items
Zachary K. Collier, Minji Kong, Olushola Soyoye, Kamal Chawla, Ann M. Aviles, Y. Payne
Pub Date: 2023-06-13 | DOI: 10.3102/10769986231176014
Asymmetric Likert-type items in research studies can present several challenges in data analysis, particularly concerning missing data. These items are often characterized by skewed scaling, in which there is either no neutral response option or an unequal number of positive and negative response options. Using conventional techniques, such as discriminant analysis or logistic regression imputation, to handle missing data in asymmetric items may result in significant bias. Caution is also warranted with alternative strategies, such as listwise deletion or mean imputation, because these methods rely on assumptions that are often unrealistic for surveys and rating scales. This article explores the potential of a deep learning-based imputation method and makes such imputation accessible to a broader group of researchers without requiring advanced machine learning training. We apply the methodology to the Wilmington Street Participatory Action Research Health Project.
{"title":"Deep Learning Imputation for Asymmetric and Incomplete Likert-Type Items","authors":"Zachary K. Collier, Minji Kong, Olushola Soyoye, Kamal Chawla, Ann M. Aviles, Y. Payne","doi":"10.3102/10769986231176014","DOIUrl":"https://doi.org/10.3102/10769986231176014","url":null,"abstract":"Asymmetric Likert-type items in research studies can present several challenges in data analysis, particularly concerning missing data. These items are often characterized by a skewed scaling, where either there is no neutral response option or an unequal number of possible positive and negative responses. The use of conventional techniques, such as discriminant analysis or logistic regression imputation, for handling missing data in asymmetric items may result in significant bias. It is also recommended to exercise caution when employing alternative strategies, such as listwise deletion or mean imputation, because these methods rely on assumptions that are often unrealistic in surveys and rating scales. This article explores the potential of implementing a deep learning-based imputation method. Additionally, we provide access to deep learning-based imputation to a broader group of researchers without requiring advanced machine learning training. We apply the methodology to the Wilmington Street Participatory Action Research Health Project.","PeriodicalId":48001,"journal":{"name":"Journal of Educational and Behavioral Statistics","volume":" ","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49317944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}