A convexity-constrained parameterization of the random effects generalized partial credit model
David J. Hessen
British Journal of Mathematical & Statistical Psychology, 78(2), 401-419. Published 2024-10-27. DOI: 10.1111/bmsp.12365 (open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/bmsp.12365)

An alternative closed-form expression for the marginal joint probability distribution of item scores under the random effects generalized partial credit model is presented. The closed-form expression involves a cumulant generating function and is therefore subject to convexity constraints. As a consequence, the associated moment inequalities are automatically taken into account in maximum likelihood estimation of the model parameters, so that the estimation solution is always proper. A further favourable consequence is that the likelihood function has a single local extreme point, the global maximum. Attention is also paid to expected a posteriori person parameter estimation, generalizations of the model, and testing the goodness of fit of the model. The proposed procedures are demonstrated in an illustrative example.
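The convexity property the abstract invokes is a standard fact about cumulant generating functions; as a brief sketch of why such a parameterization automatically respects moment inequalities (general CGF identities, not formulas taken from the article itself):

```latex
K(t) = \log \mathbb{E}\!\left[e^{tX}\right], \qquad
K'(0) = \mathbb{E}[X], \qquad
K''(t) = \operatorname{Var}_t(X) \;\ge\; 0,
```

where \(\operatorname{Var}_t\) denotes the variance under the exponentially tilted distribution with density proportional to \(e^{tx}\). Since \(K''(t) \ge 0\) for all \(t\), the CGF is convex, and evaluating at \(t = 0\) already yields the familiar moment inequality \(\mathbb{E}[X^2] \ge (\mathbb{E}[X])^2\); a model parameterized through a valid CGF therefore cannot produce improper moment combinations.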
Handling missing data in variational autoencoder based item response theory
Karel Veldkamp, Raoul Grasman, Dylan Molenaar
British Journal of Mathematical & Statistical Psychology, 78(1), 378-397. Published 2024-10-26. DOI: 10.1111/bmsp.12363 (open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/bmsp.12363)

Recently, variational autoencoders (VAEs) have been proposed as a method for estimating high-dimensional item response theory (IRT) models on large datasets. Although VAEs improve the efficiency of estimation drastically compared with traditional methods, they have no natural way of dealing with missing values. In this paper, we adapt three existing methods from the VAE literature to the IRT setting and propose one new method. We compare the performance of the different VAE-based methods with each other and with marginal maximum likelihood estimation for increasing levels of missing data, in a simulation study covering both three- and ten-dimensional IRT models. Additionally, we demonstrate the use of the VAE-based models on an existing algebra test dataset. The results confirm that VAE-based methods are a time-efficient alternative to marginal maximum likelihood, but that a larger number of importance-weighted samples is needed when the proportion of missing values is large.
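The core difficulty the abstract describes, evaluating an IRT likelihood when some responses are unobserved, can be illustrated outside the VAE machinery. Below is a minimal NumPy sketch of a masked two-parameter-logistic log-likelihood, where missing cells simply contribute zero; the function and parameter names are illustrative assumptions, not the paper's notation or method.

```python
import numpy as np

rng = np.random.default_rng(0)

def irt_loglik_masked(responses, mask, theta, a, b):
    """Masked 2PL log-likelihood: cells with mask == 0 contribute nothing.

    responses : (n_persons, n_items) 0/1 array (values under mask == 0 are ignored)
    mask      : (n_persons, n_items) array, 1 = observed, 0 = missing
    theta     : (n_persons, n_dims) latent abilities
    a         : (n_items, n_dims) discrimination parameters
    b         : (n_items,) difficulty parameters
    """
    logits = theta @ a.T - b                  # (n_persons, n_items)
    p = 1.0 / (1.0 + np.exp(-logits))         # correct-response probabilities
    ll = responses * np.log(p) + (1 - responses) * np.log(1 - p)
    return float(np.sum(mask * ll))           # sum only over observed cells

# Toy data: 5 persons, 4 items, 2 latent dimensions, ~30% missing.
theta = rng.standard_normal((5, 2))
a = rng.uniform(0.5, 1.5, size=(4, 2))
b = rng.standard_normal(4)
responses = rng.integers(0, 2, size=(5, 4))
mask = (rng.random((5, 4)) > 0.3).astype(float)
mask[0, 0] = 0.0                              # force at least one missing cell

ll = irt_loglik_masked(responses, mask, theta, a, b)
```

The same masking idea carries over to a VAE objective: the reconstruction term of the evidence lower bound is summed over observed entries only, so missing responses neither contribute gradient signal nor require imputation.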
<p>We consider the problem of determining the maximum value of the point-polyserial correlation between a random variable with an assigned continuous distribution and an ordinal random variable with <span></span><math>