Pub Date: 2024-09-13 | DOI: 10.1016/j.jocm.2024.100508
Consumers are often assumed to use a two-stage decision process, screening out products in the first step and choosing among the remaining alternatives in the second. When analyzing data from discrete choice studies, a compensatory decision strategy is usually presumed. Gilbride and Allenby (2004) introduced a method for modeling such a decision process in choice-based conjoint analysis, combining the compensatory assumption with the two-stage decision process: respondents first screen out alternatives that do not meet minimum requirements on attributes and then choose among the remaining alternatives using a compensatory rule.
In this paper, we extend their approach by considering not only screening with a minimum threshold but also with a maximum value for every attribute. We compare this extension to the original method by Gilbride and Allenby (2004) and a single-step compensatory model. We do so on the basis of one simulation scenario as well as three empirical conjoint datasets.
The results indicate that two-sided screening is applied especially to prices. The original and extended models perform nearly identically; however, both outperform, in terms of fit and predictive validity, the one-step choice model that ignores screening.
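As a toy illustration of the two-stage process described above, here is a minimal Python sketch with entirely hypothetical alternatives and thresholds: two-sided screening on price first, then a compensatory (maximum-utility) choice among the survivors.

```python
# Hypothetical alternatives: price plus a compensatory utility score.
alternatives = {
    "A": {"price": 5.0, "utility": 2.1},
    "B": {"price": 12.0, "utility": 3.5},  # screened out: price above maximum
    "C": {"price": 1.5, "utility": 0.8},   # screened out: price below minimum
    "D": {"price": 7.0, "utility": 2.9},
}

# Two-sided screening thresholds on price (illustrative values only).
price_min, price_max = 2.0, 10.0

# Stage 1: keep alternatives whose price lies within [price_min, price_max].
consideration_set = {
    name: alt for name, alt in alternatives.items()
    if price_min <= alt["price"] <= price_max
}

# Stage 2: compensatory choice -- pick the highest-utility survivor.
choice = max(consideration_set, key=lambda n: consideration_set[n]["utility"])
print(sorted(consideration_set))  # ['A', 'D']
print(choice)                     # 'D'
```

Note that alternative B has the highest utility overall but never reaches the compensatory stage, which is exactly why ignoring screening can bias a one-step model.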
Title: Too much, too little? A CBC approach accounting for screening from both sides (Journal of Choice Modelling)
Open access PDF: https://www.sciencedirect.com/science/article/pii/S175553452400040X/pdfft?md5=2432f35ff28dedac4b1080f5cb2768a2&pid=1-s2.0-S175553452400040X-main.pdf
Pub Date: 2024-08-23 | DOI: 10.1016/j.jocm.2024.100511
Use of preference information to infer risk tolerance has increased in recent years as a way to inform benefit-risk evaluations in regulatory and medical decision making. However, a framework for the measurement of tolerance for multiple uncertain outcomes has not been formalized when choices do not comply with expected utility theory (EUT). We developed a formal analytic framework for the measurement of preferences through choices under uncertainty with multiple risks. Based on the analytic framework, we find that violations of EUT can lead to interaction effects between uncertain outcomes, not just nonlinearities in the disutility of risks. Our framework also implies that measures of risk tolerance derived from utility, such as maximum-acceptable risk, must consider all relevant risks jointly if their effect on choices is expected to violate EUT. Somewhat reassuringly, however, we find that cross-outcome effects are expected to be negligible when the probabilities of other outcomes approach certainty. Finally, we identify a simple test that can help evaluate whether preferences for one uncertain outcome are affected by other uncertain outcomes.
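A minimal numerical sketch of how a maximum-acceptable risk (MAR) can depend on other risks when risks enter utility jointly. All benefit values, harm disutilities, and probabilities below are hypothetical, and the linear-in-probability form is an illustrative EUT specification, not the paper's framework.

```python
# Hypothetical linear EUT specification: net utility of a treatment equals
# its benefit utility minus the probability-weighted disutility of each harm.
benefit = 1.2
harms = {"stroke": 8.0, "infection": 3.0}  # illustrative disutilities

def net_utility(probs: dict) -> float:
    return benefit - sum(probs[r] * harms[r] for r in harms)

# Maximum-acceptable risk of stroke, holding infection risk fixed at 2%:
# solve benefit - p * harm_stroke - 0.02 * harm_infection = 0 for p.
p_infection = 0.02
mar_stroke = (benefit - p_infection * harms["infection"]) / harms["stroke"]
print(round(mar_stroke, 4))  # 0.1425
```

Even in this linear case the MAR for one risk shifts with the level of the other risk, which is the kind of cross-outcome dependence the abstract argues becomes stronger under EUT violations.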
Title: The impact of violations of expected utility theory on choices in the face of multiple risks (Journal of Choice Modelling)
Open access PDF: https://www.sciencedirect.com/science/article/pii/S1755534524000435/pdfft?md5=78daf7025ef5640366a9f49a1b357ad5&pid=1-s2.0-S1755534524000435-main.pdf
Pub Date: 2024-08-10 | DOI: 10.1016/j.jocm.2024.100510
Most choice models, e.g. the Multinomial Logit (MNL), rely on random utility theory, which assumes that a compensatory utility-maximization decision rule explains an individual's choice behaviour. Research has shown, however, that behaviour is sometimes better explained by non-compensatory decision rules. While some research has used Latent Class Choice Models (LCCMs) to account for multiple decision rules, many rules – such as the disjunctive rule – have yet to be explored. This paper formulates, estimates, and evaluates an LCCM that combines the MNL with a Generalised Random Disjunctive Model (GRDM), a new choice model we develop. Addressing deficiencies of existing disjunctive choice models, the GRDM allows for relative importance between attributes and is insensitive to irrelevant attributes. Unlike most non-compensatory models, it is tractable and incorporates random error terms to capture unobserved heterogeneity across choice situations. The GRDM can be expressed as a Universal Logit (UL) model, which helps derive welfare metrics such as Marginal Rates of Substitution and elasticities, and makes it possible to estimate the model with traditional software packages. The LCCM combining the GRDM and the MNL is estimated in two large-scale case studies: cyclists' route choice and public transport route choice. Results are compared with other relevant LCCM specifications and the individual choice models; the MNL + GRDM LCCM provides the best fit to the data. We also interpret the fitted parameters and calculate the Marginal Rates of Substitution, which align with behavioural expectations.
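The deterministic disjunctive rule itself is simple to state: an alternative is acceptable if at least one attribute clears its threshold. A hedged sketch with hypothetical attributes and cutoffs (this illustrates only the basic rule, not the GRDM, which adds relative attribute importance and random error terms):

```python
# Disjunctive screening: an alternative is acceptable if ANY attribute
# clears its satisfaction threshold; compensation plays no role here.
thresholds = {"comfort": 4, "speed": 3}  # hypothetical cutoffs

def disjunctive_ok(alt: dict) -> bool:
    return any(alt[a] >= t for a, t in thresholds.items())

alt1 = {"comfort": 5, "speed": 1}  # acceptable: comfort clears its cutoff
alt2 = {"comfort": 2, "speed": 2}  # rejected: no attribute clears a cutoff
print(disjunctive_ok(alt1), disjunctive_ok(alt2))  # True False
```

Contrast with a conjunctive rule, which would replace `any` with `all`: there an alternative must clear every threshold to survive.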
Title: A novel choice model combining utility maximization and the disjunctive decision rules, application to two case studies (Journal of Choice Modelling)
Open access PDF: https://www.sciencedirect.com/science/article/pii/S1755534524000423/pdfft?md5=400c13f9f97d02380fcb53d69fcb1b23&pid=1-s2.0-S1755534524000423-main.pdf
Pub Date: 2024-08-06 | DOI: 10.1016/j.jocm.2024.100509
Hospital choice models often employ random utility theory and include waiting time as a choice determinant. When applied to evaluate health system improvement interventions, these models disregard that hospital choice is, in turn, a determinant of waiting time. We present a novel, general model capturing the endogenous relationship between waiting time and hospital choice, including the choice to opt out, and characterize the unique equilibrium solution of the resulting convex problem. We apply the general model in a case study of the urban Chinese health system, specifying that patient choice follows a multinomial logit (MNL) model and that waiting times are determined by M/M/1 queues. The results reveal that analyses relying solely on MNL models overestimate the effectiveness of present policy interventions, and that this effectiveness is limited. We explore alternative, more effective improvement interventions.
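The feedback loop between choice and waiting can be illustrated with a small fixed-point iteration: MNL shares determine arrival rates, arrival rates determine M/M/1 waits (W = 1/(mu - lambda)), and waits feed back into utility. All rates and coefficients below are hypothetical, and the damping and stability cap are implementation conveniences rather than part of the paper's model.

```python
import numpy as np

# Hypothetical two-hospital system: utility = attractiveness - beta * wait.
base = np.array([1.0, 0.5])   # intrinsic hospital attractiveness
mu = np.array([8.0, 6.0])     # service rates (patients/hour)
total_arrivals = 10.0         # total patient arrival rate
beta = 0.8                    # disutility per hour of expected waiting

def mnl(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Damped fixed-point iteration toward an equilibrium waiting-time vector.
w = np.zeros(2)
for _ in range(500):
    p = mnl(base - beta * w)
    lam = np.minimum(total_arrivals * p, 0.95 * mu)  # keep queues stable
    w_new = 1.0 / (mu - lam)                         # M/M/1 expected wait
    if np.allclose(w_new, w, atol=1e-12):
        break
    w = 0.5 * w + 0.5 * w_new

print(np.round(p, 3), np.round(w, 3))
```

The equilibrium shares are less extreme than the waiting-time-free MNL shares, which mirrors the paper's point that ignoring the feedback overstates how much an intervention can shift demand.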
Title: The interdependence between hospital choice and waiting time — with a case study in urban China (Journal of Choice Modelling)
Pub Date: 2024-07-01 | DOI: 10.1016/j.jocm.2024.100506
Ke Wang, Xin Ye
Generating random draws from multivariate extreme value (MEV) distributions plays an important role in the microsimulation of travel behaviors, as it avoids the heavy computational burden of simulation based on calculated probability values, particularly for large populations or choice behaviors drawn from large choice sets. However, there are few practical and effective methods for drawing from MEV distributions. This paper proposes a simple and computationally efficient approach for drawing from MEV distributions in the nested logit (NL), cross-nested logit (CNL), and paired combinatorial logit (PCL) models. The proposed approach to drawing from the MEV distribution of a CNL model provides a new perspective on the underlying choice mechanism of the CNL model. To our knowledge, this is the first study to draw from an MEV distribution in the PCL model. Random draws from the proposed approach approximately follow the standard Gumbel distribution, which is the marginal distribution of NL/CNL/PCL models, and approximate the correlations among alternatives well. Simulation results for NL/CNL/PCL models show that the proposed approach recovers model parameters with high accuracy, with an overall mean absolute percentage bias below 3%. The proposed approach is computationally more efficient than similar ones because it only needs to draw from Gumbel distributions. It can be used to simulate NL/CNL/PCL models with a large choice set, or a multiple discrete-continuous generalized extreme value model, in various application settings such as joint destination-mode choices and time-use allocations.
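The standard Gumbel marginal mentioned above can be drawn by the inverse-CDF transform: if U ~ Uniform(0,1), then G = -ln(-ln(U)) is standard Gumbel. A short sketch verifying the moments (this is the textbook sampling step only, not the paper's full NL/CNL/PCL procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Inverse-CDF sampling of the standard Gumbel distribution, the marginal
# assumed for the error terms of logit-family models.
u = rng.uniform(size=100_000)
g = -np.log(-np.log(u))

# Standard Gumbel moments: mean = Euler-Mascheroni constant (~0.5772),
# variance = pi^2 / 6 (~1.6449).
print(g.mean(), g.var())
```

Capturing the cross-alternative correlation structure of NL/CNL/PCL models requires more than these marginals, which is precisely the gap the paper's method addresses.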
Title: A practical method to draw from multivariate extreme value distributions (Journal of Choice Modelling)
Pub Date: 2024-06-29 | DOI: 10.1016/j.jocm.2024.100507
Thijs Dekker, Paul Koster, Niek Mouter
This paper presents a micro-econometric framework for analysing choice data from participatory value evaluation (PVE) surveys. In a PVE survey, respondents receive, as in stated choice surveys, information on the social impacts of public sector projects before choosing the policy portfolio that best matches their preferences. Respondents' choices are limited by governmental and private budget constraints. The PVE data format is characterised by a mixture of discrete and continuous choice data. Building on the recent literature on Kuhn–Tucker models, particularly the MDCEV model, a range of methodological and econometric contributions are provided that facilitate model estimation and policy evaluation. We derive a set of closed-form choice probabilities explaining the choice of the optimal portfolio of public projects, private consumption levels, and whether or not to spend the public budget in full. The proposed policy evaluation framework is centred around the notion of social welfare maximisation. The parameter estimates are used to derive the optimal public sector budget and the corresponding portfolio maximising social welfare, but also to rank the set of feasible portfolios given a restricted budget, including sensitivity analyses. The proposed framework is illustrated with an empirical example on urban mobility investments in Amsterdam, The Netherlands.
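Ranking feasible portfolios under a budget constraint can be sketched by brute-force enumeration. The project names, costs, and welfare scores below are hypothetical, and this toy ignores the estimated-preference machinery of the actual framework:

```python
from itertools import chain, combinations

# Hypothetical projects: name -> (cost, estimated welfare contribution).
projects = {"bike_lanes": (30, 4.0), "metro": (70, 9.0), "bus": (40, 5.5)}
budget = 100

def powerset(items):
    return chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)
    )

# Keep only portfolios whose total cost fits the public budget.
feasible = [
    s for s in powerset(projects)
    if sum(projects[p][0] for p in s) <= budget
]

# Rank feasible portfolios by total welfare, best first.
ranked = sorted(feasible, key=lambda s: -sum(projects[p][1] for p in s))
print(ranked[0])  # ('bike_lanes', 'metro')
```

Enumeration scales exponentially in the number of projects; the appeal of closed-form choice probabilities in the paper is precisely that they sidestep this kind of combinatorial search during estimation.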
Title: A micro-econometric framework for Participatory Value Evaluation (Journal of Choice Modelling)
Open access PDF: https://www.sciencedirect.com/science/article/pii/S1755534524000393/pdfft?md5=319a0d1e75426eebf11ab4fa53f74df4&pid=1-s2.0-S1755534524000393-main.pdf
Pub Date: 2024-06-26 | DOI: 10.1016/j.jocm.2024.100505
Paolo Delle Site, Janak Parmar
The econometrics of the Linear Probability Model (LPM), cast as a binary choice random utility model with probabilities constrained to the [0,1] interval, is unexplored. This paper fills that gap. Assumptions are identified under which constrained maximum likelihood estimators exist and are unique, consistent, and asymptotically normal. A consistent estimator of the covariance matrix is provided. Statistics that can be used to evaluate the predictive validity of binary choice models are reviewed. With income-independent choices, the LPM has the merit of a closed-form welfare change measure for the sub-population of consumers shifting from one alternative to the other. Two datasets illustrate the theoretical insights: one from the Swiss Mobility and Transport Microcensus on choices between teleworking and commuting, and one from the German Socio-Economic Panel on add-on health insurance subscription. The signs and statistical significance (at the 5% level) of the coefficients are concordant across the LPM, Logit, and Probit. Model prioritization based on predictive validity is data-specific and depends on the statistics used.
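The constraint that distinguishes the LPM from logit can be seen in a two-line comparison: LPM probabilities are linear in the index and must be clipped to [0,1], while logit probabilities are sigmoidal by construction. The coefficients below are illustrative only:

```python
import numpy as np

# Hypothetical binary-choice index: a + b * x.
x = np.linspace(-3, 3, 7)
a, b = 0.5, 0.3  # illustrative coefficients

p_lpm = np.clip(a + b * x, 0.0, 1.0)          # constrained LPM
p_logit = 1.0 / (1.0 + np.exp(-(a + b * x)))  # logit counterpart

print(np.round(p_lpm, 2))    # flat at 0 and 1 once the index leaves [0,1]
print(np.round(p_logit, 2))  # smooth S-shape, never exactly 0 or 1
```

The flat segments of the clipped LPM are what make its constrained maximum likelihood theory non-standard, since the likelihood is not differentiable at the boundary.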
Title: On the Linear Probability Model as binary choice random utility model (Journal of Choice Modelling)
Pub Date: 2024-06-20 | DOI: 10.1016/j.jocm.2024.100495
Thomas O. Hancock, Stephane Hess, Charisma F. Choudhury, Panagiotis Tsoleridis
Decision field theory (DFT) is a model originally developed in cognitive psychology to explain behavioural phenomena such as context effects and decision-making under time pressure. Given this focus, the model has primarily been used to explain choices observed in controlled laboratory settings, with little attention paid to generalisability. Recent work has improved the mathematical foundations of DFT, making it a tractable model that is easier to apply to a wider variety of choice contexts. In particular, the inclusion of attribute importance parameters has led to successful applications to multi-alternative multi-attribute choice settings, notably with stated preference data in transport. However, thus far, applications to real-life behaviour (i.e., revealed preference, RP, data) have been limited. The aim of this paper is to extend DFT to larger, real-world applications, where data may be noisier and prone to larger error-term variances. A theoretical extension of the model is presented, relaxing the assumption of independent normal error terms to capture heteroskedasticity. We apply the new model specification to two large-scale revealed preference datasets, also incorporating a range of sociodemographic variables. The new 'heteroskedastic' DFT model substantially outperforms the original version of DFT, as well as choice models based on econometric theory, in both estimation and validation subsets.
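For orientation, the linear preference-accumulation dynamics of DFT from the cognitive-psychology literature can be written as follows. This is the common textbook formulation, not the paper's exact heteroskedastic specification; the error term shown is where the independence assumption discussed above enters.

```latex
% Linear DFT preference-accumulation dynamics (common formulation):
P(t + h) = S\,P(t) + V(t + h), \qquad V(t) = C\,M\,W(t) + \varepsilon(t)
% P(t):        preference-state vector over alternatives at time t
% S:           feedback matrix (memory and lateral inhibition)
% C:           contrast matrix comparing each alternative to the others
% M:           matrix of the alternatives' attribute values
% W(t):        stochastic attention weights over attributes
% \varepsilon: error term, assumed independent normal in the original
%              model -- the assumption this paper relaxes
```

The heteroskedastic extension amounts to letting the variance of the error term differ across observations rather than forcing a single common scale.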
Title: Decision field theory: An extension for real-world settings (Journal of Choice Modelling)
Open access PDF: https://www.sciencedirect.com/science/article/pii/S1755534524000277/pdfft?md5=2d9ee9009a15ccd43255dbe9f642dafa&pid=1-s2.0-S1755534524000277-main.pdf
Pub Date: 2024-06-14 | DOI: 10.1016/j.jocm.2024.100494
Marcel F. Jonker
Previous work has identified attribute level overlap and level color coding as effective and attractive strategies to reduce task complexity and improve behavioral efficiency in discrete choice experiments (DCEs). However, the simultaneous and combined impact of level overlap and level color coding on attribute non-attendance and choice consistency has not yet been investigated. To address this limitation and to strengthen the available evidence base, this paper re-analyzed an existing randomized controlled DCE from the Netherlands (N = 2,731) and analyzed a new randomized controlled DCE conducted in the United Kingdom (N = 3,084) using heteroskedastic attribute non-attendance mixed logit models. Both randomized controlled experiments were based on a relatively complex instrument with five attributes of five levels each, and the results from both experiments were remarkably similar. In the base-case study arms without level overlap and color coding, only about half of the attributes were attended to. Level color coding as a stand-alone strategy improves attribute attendance but reduces respondents' choice consistency. In contrast, level overlap as a stand-alone strategy improves attribute attendance while simultaneously increasing respondents' choice consistency. The combination of level overlap and color coding is even more effective: it results in approximately full attribute attendance and a 30% increase in respondents' choice consistency. Experimental designs with level overlap are therefore recommended as a default design strategy, and level color coding is recommended to further increase respondents' behavioral efficiency in complex DCEs.
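Level overlap can be enforced mechanically at the design stage by fixing a subset of attributes to the same level across the alternatives in a task. A hedged sketch (random rather than efficient design generation, purely to illustrate the overlap constraint):

```python
import random

random.seed(1)

# Instrument mirroring the abstract's complexity: 5 attributes x 5 levels.
attributes = {f"attr{i}": list(range(5)) for i in range(1, 6)}
n_overlap = 2  # number of attributes forced to overlap per task (assumed)

def make_task():
    # Pick which attributes overlap in this task, then build two
    # alternatives that share levels on exactly those attributes.
    overlap = random.sample(sorted(attributes), n_overlap)
    alt_a = {a: random.choice(lv) for a, lv in attributes.items()}
    alt_b = {
        a: (alt_a[a] if a in overlap else random.choice(attributes[a]))
        for a in attributes
    }
    return alt_a, alt_b, overlap

alt_a, alt_b, overlap = make_task()
print(all(alt_a[x] == alt_b[x] for x in overlap))  # True
```

In practice overlap constraints are imposed inside an efficient-design algorithm rather than by random assignment, but the respondent-facing effect is the same: fewer attributes differ, so fewer must be traded off.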
Title: Level overlap and level color coding revisited: Improved attribute attendance and higher choice consistency in discrete choice experiments (Journal of Choice Modelling)