Giuseppe Alfonzetti, Ruggero Bellio, Yunxiao Chen, Irini Moustaki
Pairwise likelihood is a limited‐information method widely used to estimate latent variable models, including factor analysis of categorical data. It can often avoid evaluating high‐dimensional integrals and, thus, is computationally more efficient than relying on the full likelihood. Despite its computational advantage, the pairwise likelihood approach can still be demanding for large‐scale problems that involve many observed variables. We tackle this challenge by employing an approximation of the pairwise likelihood estimator, which is derived from an optimization procedure relying on stochastic gradients. The stochastic gradients are constructed by subsampling the pairwise log‐likelihood contributions, for which the subsampling scheme controls the per‐iteration computational complexity. The stochastic estimator is shown to be asymptotically equivalent to the pairwise likelihood one. However, finite‐sample performance can be improved by compounding the sampling variability of the data with the uncertainty introduced by the subsampling scheme. We demonstrate the performance of the proposed method using simulation studies and two real data applications.
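To illustrate the general idea of subsampling pairwise log-likelihood contributions, here is a minimal, self-contained Python sketch. It is not the authors' estimator for categorical factor analysis; it uses a deliberately simple toy model (a common Gaussian mean shared by all variables) so that each pair's contribution and gradient are trivial, and all names (`pair_grad`, `batch`, the step-size schedule) are illustrative choices, not from the paper. The point it demonstrates is the computational one from the abstract: each iteration touches only a subsample of pairs, so the per-iteration cost is controlled by the batch size rather than by the full set of p(p-1)/2 pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n observations of p variables sharing a common mean mu_true.
n, p, mu_true = 500, 20, 2.0
X = mu_true + rng.standard_normal((n, p))

# All variable pairs (i, j), i < j; each contributes a pairwise log-likelihood term.
pairs = [(i, j) for i in range(p) for j in range(i + 1, p)]

def pair_grad(mu, i, j):
    """Gradient (in mu) of the Gaussian pairwise log-likelihood for pair (i, j)."""
    return np.sum(X[:, i] - mu) + np.sum(X[:, j] - mu)

# Stochastic approximation: each step subsamples `batch` pairs, rescales their
# summed gradient to be unbiased for the full pairwise gradient, and takes a
# gradient-ascent step with a decaying step size.
mu, batch, steps = 0.0, 10, 2000
for t in range(1, steps + 1):
    idx = rng.choice(len(pairs), size=batch, replace=False)
    g = sum(pair_grad(mu, *pairs[k]) for k in idx) * (len(pairs) / batch)
    mu += (0.05 / (n * p * t**0.75)) * g  # decaying step size

print(round(mu, 2))  # settles near mu_true
```

The rescaling factor `len(pairs) / batch` makes the subsampled gradient an unbiased estimate of the full pairwise-likelihood gradient, which is the property underlying the asymptotic-equivalence claim in the abstract; the subsampling itself injects the extra Monte Carlo variability that the authors account for on top of the sampling variability of the data.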
Title: Pairwise stochastic approximation for confirmatory factor analysis of categorical data
Journal: British Journal of Mathematical and Statistical Psychology, 79(1)
DOI: 10.1111/bmsp.12347 (https://doi.org/10.1111/bmsp.12347)
Published: 2024-04-27 (Journal Article)