{"title":"Assessing the Validity of Prevalence Estimates in Double List Experiments","authors":"Gustavo Diaz","doi":"10.1017/xps.2023.24","DOIUrl":null,"url":null,"abstract":"Abstract Social scientists use list experiments in surveys to estimate the prevalence of sensitive attitudes and behaviors in a population of interest. However, the cumulative evidence suggests that the list experiment estimator is underpowered to capture the extent of sensitivity bias in common applications. The literature suggests double list experiments (DLEs) as an alternative to improve along the bias-variance frontier. This variant of the research design brings the additional burden of justifying the list experiment identification assumptions in both lists, which raises concerns over the validity of DLE estimates. To overcome this difficulty, this paper outlines two statistical tests to detect strategic misreporting that follows from violations to the identification assumptions. I illustrate their implementation with data from a study on support toward anti-immigration organizations in California and explore their properties via simulation.","PeriodicalId":37558,"journal":{"name":"Journal of Experimental Political Science","volume":null,"pages":null},"PeriodicalIF":3.2000,"publicationDate":"2023-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Experimental Political Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1017/xps.2023.24","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"POLITICAL SCIENCE","Score":null,"Total":0}
Citations: 0
Abstract
Social scientists use list experiments in surveys to estimate the prevalence of sensitive attitudes and behaviors in a population of interest. However, the cumulative evidence suggests that the list experiment estimator is underpowered to capture the extent of sensitivity bias in common applications. The literature proposes double list experiments (DLEs) as an alternative that improves along the bias-variance frontier. This variant of the research design brings the additional burden of justifying the list experiment identification assumptions in both lists, which raises concerns over the validity of DLE estimates. To overcome this difficulty, this paper outlines two statistical tests to detect strategic misreporting that follows from violations of the identification assumptions. I illustrate their implementation with data from a study on support for anti-immigration organizations in California and explore their properties via simulation.
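To fix ideas, here is a minimal sketch (not from the paper; all names and the simulated data are illustrative) of the difference-in-means prevalence estimator the abstract refers to, averaged across the two lists of a DLE. In a standard DLE, each respondent answers both lists and the sensitive item is appended to exactly one of them, so everyone serves as treatment on one list and control on the other.

```python
import numpy as np

def dle_prevalence(y_a, t_a, y_b, t_b):
    """Average of the two single-list difference-in-means estimates.

    y_a, y_b: reported item counts under list A and list B.
    t_a, t_b: 1 if the sensitive item was appended to that list, else 0.
    (Illustrative sketch only; the paper's misreporting tests are not shown.)
    """
    est_a = y_a[t_a == 1].mean() - y_a[t_a == 0].mean()  # list A estimate
    est_b = y_b[t_b == 1].mean() - y_b[t_b == 0].mean()  # list B estimate
    return (est_a + est_b) / 2  # DLE pools both single-list estimates

# Simulated example under truthful reporting, true prevalence 0.15
rng = np.random.default_rng(0)
n = 1000
t_a = rng.integers(0, 2, n)        # treated on list A ...
t_b = 1 - t_a                      # ... means control on list B, and vice versa
z = rng.random(n) < 0.15           # holds the sensitive trait
y_a = rng.binomial(3, 0.5, n) + t_a * z  # 3 control items + sensitive item if treated
y_b = rng.binomial(3, 0.5, n) + t_b * z
print(dle_prevalence(y_a, t_a, y_b, t_b))  # approximately 0.15
```

Because each respondent contributes a treated response on one list and a control response on the other, averaging the two estimates uses the full sample in both arms, which is the variance improvement the abstract alludes to.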
Journal Description:
The Journal of Experimental Political Science (JEPS) features cutting-edge research that utilizes experimental methods or experimental reasoning based on naturally occurring data. We define experimental methods broadly: research featuring random (or quasi-random) assignment of subjects to different treatments in an effort to isolate causal relationships in the sphere of politics. JEPS embraces all of the different types of experiments carried out as part of political science research, including survey experiments, laboratory experiments, field experiments, lab-in-the-field experiments, and natural and neurological experiments. We invite authors to submit concise articles (around 4,000 words or fewer) that immediately address the subject of the research. We do not require lengthy explanations or justifications of the experimental method, nor do we expect extensive literature reviews of the pros and cons of the methodological approaches involved in the experiment, unless the goal of the article is to explore these methodological issues. We expect readers to be familiar with experimental methods and therefore not to need pages of literature review to be convinced that experiments are a legitimate methodological approach. We will consider longer articles in rare but appropriate cases, for example when a new experimental method or approach is introduced and discussed, or when novel theoretical results are evaluated through experimentation. Finally, we strongly encourage authors to submit manuscripts that showcase informative null findings or inconsistent results from well-designed, well-executed, and well-analyzed experiments.