New misspecification tests for multinomial logit models

Journal of Choice Modelling · IF 2.4 · JCR Q1 (Economics) · CAS Tier 3 (Economics) · Pub Date: 2025-03-01 (Epub 2024-12-12) · DOI: 10.1016/j.jocm.2024.100531
Dennis Fok, Richard Paap
Citations: 0

Abstract

Multinomial Logit [MNL] models are misspecified when the Independence of Irrelevant Alternatives [IIA] assumption does not hold. In this paper we compare existing tests for IIA with two newly proposed tests. Both new tests exploit the fact that, when MNL is the true model, preferences across pairs of alternatives can be described by independent binary logit models. The first test compares Composite Likelihood parameter estimates based on pairs of alternatives with standard Maximum Likelihood estimates using a Hausman (1978) test. The second is a test for overidentification in a GMM framework using more pairs than necessary. A detailed Monte Carlo study shows that the GMM test is in general superior with respect to performance under both the null and the alternative hypothesis. An empirical illustration demonstrates the practical usefulness of the tests.
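The pairwise property that both new tests build on can be checked in a small simulation. The sketch below is illustrative only (the setup, variable names, and the simple Newton fit are my own, not the authors' code): it simulates a three-alternative MNL and verifies that, conditional on the choice falling within a given pair, the choice follows a binary logit in the attribute difference with the same coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 3-alternative MNL: U_ij = beta * x_ij + eps_ij, eps ~ Gumbel.
n, beta = 20000, 1.0
x = rng.normal(size=(n, 3))            # one attribute per alternative
u = beta * x + rng.gumbel(size=(n, 3))
choice = u.argmax(axis=1)              # utility-maximizing alternative

# Under IIA, among observations choosing alternative 0 or 1,
# P(choose 0 | 0 or 1 chosen) = sigmoid(beta * (x0 - x1)):
# a binary logit with the same beta as the full MNL.
mask = choice < 2
z = x[mask, 0] - x[mask, 1]            # attribute difference for the pair
y = (choice[mask] == 0).astype(float)

# Fit the binary logit by Newton-Raphson (no intercept needed here).
b = 0.0
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-b * z))
    grad = z @ (y - p)                 # score of the binary-logit likelihood
    hess = -(z * z) @ (p * (1 - p))    # observed Hessian
    b -= grad / hess

print(f"pairwise binary-logit estimate: {b:.3f} (true beta = {beta})")
```

With a large sample the pairwise estimate lands close to the true coefficient; the paper's Hausman-type test compares exactly such pairwise (composite likelihood) estimates with the full-sample ML estimates, and the GMM test stacks the moment conditions from more pairs than are needed to identify the parameters.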
Source journal: Journal of Choice Modelling — CiteScore: 4.10 · Self-citation rate: 12.50% · Articles published: 31