{"title":"A Tractable Estimator for General Mixed Multinomial Logit Models","authors":"J. James","doi":"10.26509/WP-201219","DOIUrl":null,"url":null,"abstract":"The mixed logit is a framework for incorporating unobserved heterogeneity in discrete choice models in a general way. These models are difficult to estimate because they result in a complicated incomplete data likelihood. This paper proposes a new approach for estimating mixed logit models. The estimator is easily implemented as iteratively re-weighted least squares: the well known solution for complete data likelihood logits. The main benefit of this approach is that it requires drastically fewer evaluations of the simulated likelihood function, making it significantly faster than conventional methods that rely on numerically approximating the gradient. The method is rooted in a generalized expectation and maximization (GEM) algorithm, so it is asymptotically consistent, efficient, and globally convergent.","PeriodicalId":165362,"journal":{"name":"ERN: Discrete Regression & Qualitative Choice Models (Single) (Topic)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ERN: Discrete Regression & Qualitative Choice Models (Single) (Topic)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.26509/WP-201219","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
The mixed logit is a framework for incorporating unobserved heterogeneity into discrete choice models in a general way. These models are difficult to estimate because they result in a complicated incomplete-data likelihood. This paper proposes a new approach for estimating mixed logit models. The estimator is easily implemented as iteratively reweighted least squares, the well-known solution for logits with a complete-data likelihood. The main benefit of this approach is that it requires drastically fewer evaluations of the simulated likelihood function, making it significantly faster than conventional methods that rely on numerically approximating the gradient. The method is rooted in a generalized expectation-maximization (GEM) algorithm, so it is asymptotically consistent, efficient, and globally convergent.
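For context, here is a minimal sketch of the two objects the abstract contrasts, written in generic notation rather than the paper's own. The mixed logit choice probability integrates the standard logit kernel over the distribution of the random coefficients,

P_{ij}(\theta) = \int \frac{\exp(x_{ij}'\beta)}{\sum_{k} \exp(x_{ik}'\beta)} \, f(\beta \mid \theta) \, d\beta,

so the incomplete-data likelihood has no closed form and is typically approximated by simulation. The complete-data problem, in which the coefficients are treated as known, reduces to a standard logit, and that model can be fit by iteratively reweighted least squares. The Python sketch below illustrates IRLS for a plain binary logit only; it is not the paper's mixed logit estimator, and the function and variable names are hypothetical.

```python
import numpy as np

def irls_logit(X, y, tol=1e-8, max_iter=100):
    """Fit a binary logit by iteratively reweighted least squares
    (equivalent to Newton/Fisher scoring for this model).

    X : (n, k) design matrix, y : (n,) array of 0/1 outcomes.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))      # predicted choice probabilities
        w = p * (1.0 - p)                   # IRLS weights
        z = eta + (y - p) / w               # working (pseudo-)response
        WX = X * w[:, None]
        # Weighted least-squares step: solve (X'WX) beta = X'Wz
        beta_new = np.linalg.solve(X.T @ WX, X.T @ (w * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Hypothetical usage on simulated data (names and sizes are illustrative).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(1000), rng.normal(size=(1000, 2))])
beta_true = np.array([0.5, -1.0, 1.5])
y = (rng.random(1000) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)
print(irls_logit(X, y))   # estimates should be close to beta_true
```

Per the abstract, the paper's estimator embeds a weighted least-squares step of this kind inside a GEM iteration for the mixed logit, which is how it avoids the repeated evaluations of the simulated likelihood required by gradient-approximation methods.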