Motivated misreporting occurs when respondents give incorrect responses to survey questions to shorten the interview; studies have detected this behavior across many modes, topics, and countries. This paper tests whether motivated misreporting affects responses in a large survey of household purchases, the US Consumer Expenditure Interview Survey. The data from this survey inform the calculation of the official measure of inflation, among other uses. Using a parallel web survey and multiple imputation, this article estimates the size of the misreporting effect without experimentally manipulating questions in the survey itself. Results suggest that household purchases are underreported by approximately five percentage points in three sections of the first wave of the survey. The approach used here, involving a web survey built to mimic the expenditure survey, could be applied in other large surveys where budget or logistical constraints prevent experimentation.
{"title":"Underreporting of Purchases in the US Consumer Expenditure Survey","authors":"S. Eckman","doi":"10.1093/jssam/smab024","DOIUrl":"https://doi.org/10.1093/jssam/smab024","url":null,"abstract":"\u0000 Motivated misreporting occurs when respondents give incorrect responses to survey questions to shorten the interview; studies have detected this behavior across many modes, topics, and countries. This paper tests whether motivated misreporting affects responses in a large survey of household purchases, the US Consumer Expenditure Interview Survey. The data from this survey inform the calculation of the official measure of inflation, among other uses. Using a parallel web survey and multiple imputation, this article estimates the size of the misreporting effect without experimentally manipulating questions in the survey itself. Results suggest that household purchases are underreported by approximately five percentage points in three sections of the first wave of the survey. The approach used here, involving a web survey built to mimic the expenditure survey, could be applied in other large surveys where budget or logistical constraints prevent experimentation.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":" ","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46501698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The 2013 FECOND (Fertility, Contraception, and Sexual Dysfunction) probability telephone survey was designed to monitor sexual health behaviors among fifteen- to forty-nine-year-olds in France. We conducted a randomized experiment to compare a classic telephone survey (group T, n = 3,846 respondents) with two Internet-telephone mixed-mode protocols: a sequential Internet-telephone protocol (group S, n = 762, among which there were 462 Internet questionnaires) and a concurrent protocol (group C, n = 1,165, among which there were 208 Internet questionnaires). We compare the telephone (T), sequential (S), and concurrent (C) samples on cooperation rates, break-off and item nonresponse rates, sociodemographic characteristics, health behaviors, and seven questions on sexual health behaviors and personal opinions. Reports on the most sensitive behaviors were expected to be more truthful and more prevalent on the Internet—and thus in the mixed-mode samples—than in the telephone sample. The cooperation rate (i.e., the response rate among the eligible respondents selected during the initial telephone call) was higher in the classic telephone survey than in the sequential and concurrent mixed-mode protocols (88 percent for T versus 77 percent for S and 55 percent for C), and break-off and item nonresponse rates were also higher in the mixed-mode protocols. Despite these lower response rates, the mixed-mode samples showed better representativeness: their marginal distributions of sociodemographic characteristics were closer to those of the 2013 census, and they had higher R-indicators. A causal estimation of the measurement effect of Internet administration found higher prevalence for three of the seven sexual health behaviors and opinions in the sequential protocol than in the classic telephone group; a similar pattern was found in the concurrent protocol. In addition, the variance of the weights is lower in the mixed-mode protocols, especially in the sequential design. Sequential telephone-Internet mixed-mode protocols nested in a probability telephone survey may be a good way to improve survey research on sensitive behaviors.
{"title":"Sequential and Concurrent Internet-Telephone Mixed-Mode Designs in Sexual Health Behavior Research","authors":"S. Legleye, Géraldine Charrance","doi":"10.1093/jssam/smab026","DOIUrl":"https://doi.org/10.1093/jssam/smab026","url":null,"abstract":"\u0000 The 2013 FECOND (Fertility, Contraception, and Sexual Dysfunction) probability telephone survey aims to monitor sexual health behaviors among fifteen to forty-nine year olds in France. We conducted a random experiment to compare a classic telephone survey (group T, n = 3,846 respondents) with two Internet-telephone mixed-mode protocols: a sequential Internet-telephone protocol (group S, n = 762, among which there were 462 Internet questionnaires), and a concurrent protocol (group C, n = 1,165, among which there were 208 Internet questionnaires). We compare telephone (T), sequential (S), and concurrent (C) samples on cooperation rates, break-off, and item nonresponse rates, sociodemographic characteristics, health behaviors, and seven sexual health behaviors and personal opinions questions. Reports on the most sensitive behaviors were expected to be more truthful and more prevalent on the Internet—and thus in the mixed-mode samples—than in the telephone sample. The cooperation rate (i.e., the response rate among the possible respondents selected during the initial telephone call) was higher in the classic telephone survey than in the sequential and concurrent mixed-mode protocols (88 percent for T versus 77 percent for S and 55 percent for C), where break-off and item nonresponse rates were also higher. Despite these lower response rates, mixed-mode samples showed better representativeness: their marginal distribution of sociodemographic characteristics was closer to that of the 2013 census, and they had higher R-indicators. A causal estimation of the measurement effect resulting from Internet administration found higher prevalence of three out of the seven sexual health behaviors and personal opinions in the sequential protocol compared to the classic telephone group; a similar pattern was found in the concurrent protocol. In addition, the variance of the weights of the mixed-mode protocols is lower, especially for the sequential design. Sequential telephone-Internet mixed-mode protocols nested in a probability telephone survey may be a good way to improve survey research on sensitive behaviors.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":" ","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47716223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the context of the current “replication crisis” across the sciences, failures to reproduce a finding are often viewed as discrediting it. This paper shows how such a conclusion can be incorrect. In 1981, Schuman and Presser showed that including the word “freedom” in a survey question significantly increased approval of allowing a speech against religion in the USA. New experiments in probability sample surveys (n = 23,370) in the USA and 10 other countries showed that the wording effect replicated in the USA and appeared in four other countries (Canada, Germany, Taiwan, and the Netherlands) but not in the remaining countries. The effect appeared only in countries in which the value of freedom is especially salient and endorsed. Thus, public support for a proposition was enhanced by portraying it as embodying a salient principle of a nation’s culture. Instead of questioning initial findings, inconsistent results across countries signal limits on generalizability and identify an important moderator.
{"title":"Lack of Replication or Generalization? Cultural Values Explain a Question Wording Effect","authors":"Henning Silber, E. Tvinnereim, T. Stark, A. Blom, J. Krosnick, M. Bošnjak, S. Clement, Anne Cornilleau, Anne-Sophie Cousteaux, M. John, G. Jónsdóttir, K. Lawson, Peter Lynn, Johan Martinsson, Ditte Shamshiri-Petersen, Su-Hao Tu","doi":"10.1093/jssam/smab007","DOIUrl":"https://doi.org/10.1093/jssam/smab007","url":null,"abstract":"\u0000 In the context of the current “replication crisis” across the sciences, failures to reproduce a finding are often viewed as discrediting it. This paper shows how such a conclusion can be incorrect. In 1981, Schuman and Presser showed that including the word “freedom” in a survey question significantly increased approval of allowing a speech against religion in the USA. New experiments in probability sample surveys (n = 23,370) in the USA and 10 other countries showed that the wording effect replicated in the USA and appeared in four other countries (Canada, Germany, Taiwan, and the Netherlands) but not in the remaining countries. The effect appeared only in countries in which the value of freedom is especially salient and endorsed. Thus, public support for a proposition was enhanced by portraying it as embodying a salient principle of a nation’s culture. Instead of questioning initial findings, inconsistent results across countries signal limits on generalizability and identify an important moderator.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":" ","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45204446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, web-push strategies have been developed in cross-sectional mixed-mode surveys to improve response rates and reduce the costs of data collection. However, pushing respondents into the more cost-efficient web mode has rarely been examined in the context of panel surveys. This study evaluates how a web-push intervention affects the willingness of panel members to switch survey modes from mail to web. We tested three web-push strategies in a German probability-based mixed-mode panel by randomly assigning 1,895 panelists in the mail mode to one of three conditions: (1) the web option was offered to panelists concurrently with the paper questionnaire, along with a promised €10 incentive for completing the survey on the web; (2) the web option was presented sequentially two weeks before the paper questionnaire was sent, also with a promised €10 incentive; or (3) the same sequential web-first approach as in condition 2, but with a prepaid €10 incentive instead of a promised one. The study found that a sequential presentation of the web option significantly increases the web response in a single survey but may not motivate more panelists to switch to the web mode permanently. Contrary to our expectation, offering prepaid incentives improves neither the web response nor the proportion of mode switchers. Overall, all three web-push strategies show the potential to effectively reduce survey costs without causing differences in panel attrition after five consecutive waves. Condition 2, the sequential web-first design combined with a promised incentive, was most effective in pushing respondents to switch to the web mode and in reducing costs.
{"title":"An Experimental Comparison of Three Strategies for Converting Mail Respondents in a Probability-Based Mixed-Mode Panel to Internet Respondents","authors":"David Bretschi, Ines Schaurer, D. Dillman","doi":"10.1093/jssam/smab002","DOIUrl":"https://doi.org/10.1093/jssam/smab002","url":null,"abstract":"\u0000 In recent years, web-push strategies have been developed in cross-sectional mixed-mode surveys to improve response rates and reduce the costs of data collection. However, pushing respondents into the more cost-efficient web mode has rarely been examined in the context of panel surveys. This study evaluates how a web-push intervention affects the willingness of panel members to switch survey modes from mail to web. We tested three web-push strategies in a German probability-based mixed-mode panel by randomly assigning 1,895 panelists of the mail mode to one of three conditions: (1) the web option was offered to panelists concurrently with the paper questionnaire including a promised €10 incentive for completing the survey on the web, (2) the web option was presented sequentially two weeks before sending the paper questionnaire and respondents were also promised an incentive of €10, or (3) same sequential web-first approach as for condition 2, but with a prepaid €10 incentive instead of a promised incentive. The study found that a sequential presentation of the web option significantly increases the web response in a single survey but may not motivate more panelists to switch to the web mode permanently. Contrary to our expectation, offering prepaid incentives neither improves the web response nor the proportion of mode switchers. Overall, all three web-push strategies show the potential to effectively reduce survey costs without causing differences in panel attrition after five consecutive waves. Condition 2, the sequential web-first design combined with a promised incentive was most effective in pushing respondents to switch to the web mode and in reducing costs.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":" ","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44504804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The emerging sectors of agriculture, such as organics, urban, and local food, tend to be dominated by farms that are smaller, more transient, more diverse, and more dispersed than the traditional farms in the rural areas of the United States. As a consequence, a list frame of all farms within one of these sectors is difficult to construct and, even with the best of efforts, is incomplete. The United States Department of Agriculture’s (USDA’s) National Agricultural Statistics Service (NASS) maintains a list frame of all known and potential U.S. farms and uses this list frame as the sampling frame for most of its surveys. Traditionally, NASS has used its area frame to assess undercoverage. However, getting a good measure of the incompleteness of the NASS list frame using an area frame is cost prohibitive for farms in these emerging sectors that tend to be located within and near urban areas. In 2016, NASS conducted the Local Food Marketing Practices (LFMP) survey. Independent samples were drawn from (1) the NASS list frame and (2) a web-scraped list of local food farms. Using these two samples and capture–recapture methods, the total number and sales of local food operations at the United States, regional, and state levels were estimated. To our knowledge, the LFMP survey is the first survey in which a web-scraped list frame has been used to assess undercoverage in a capture–recapture setting to produce official statistics. In this article, the methods are presented, and the challenges encountered are reviewed. Best practices and open research questions for conducting surveys using web-scraped list frames and capture–recapture methods are discussed.
{"title":"Capture–Recapture Estimation of Characteristics of U.S. Local Food Farms Using a Web-Scraped List Frame","authors":"Michael Hyman, L. Sartore, L. Young","doi":"10.1093/jssam/smab008","DOIUrl":"https://doi.org/10.1093/jssam/smab008","url":null,"abstract":"\u0000 The emerging sectors of agriculture, such as organics, urban, and local food, tend to be dominated by farms that are smaller, more transient, more diverse, and more dispersed than the traditional farms in the rural areas of the United States. As a consequence, a list frame of all farms within one of these sectors is difficult to construct and, even with the best of efforts, is incomplete. The United States Department of Agriculture’s (USDA’s) National Agricultural Statistics Service (NASS) maintains a list frame of all known and potential U.S. farms and uses this list frame as the sampling frame for most of its surveys. Traditionally, NASS has used its area frame to assess undercoverage. However, getting a good measure of the incompleteness of the NASS list frame using an area frame is cost prohibitive for farms in these emerging sectors that tend to be located within and near urban areas. In 2016, NASS conducted the Local Food Marketing Practices (LFMP) survey. Independent samples were drawn from (1) the NASS list frame and (2) a web-scraped list of local food farms. Using these two samples and capture–recapture methods, the total number and sales of local food operations at the United States, regional, and state levels were estimated. To our knowledge, the LFMP survey is the first survey in which a web-scraped list frame has been used to assess undercoverage in a capture–recapture setting to produce official statistics. In this article, the methods are presented, and the challenges encountered are reviewed. Best practices and open research questions for conducting surveys using web-scraped list frames and capture–recapture methods are discussed.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":" ","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46183458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In sample surveys that collect information on skewed variables, it is often desirable to assess the influence of sample units on the sampling error of survey-weighted estimators of finite population parameters. The conditional bias is an attractive measure of influence that accounts for the sampling design and the estimation method. It is defined as the design expectation of the sampling error conditional on a given unit being selected in the sample. The estimation of the conditional bias is relatively straightforward for simple sampling designs and estimators. However, for complex designs or complex estimators, it may be tedious to derive an explicit expression for the conditional bias. In those complex surveys, variance estimation is often achieved through replication methods such as the bootstrap. Bootstrap methods of variance estimation are typically implemented by producing a set of bootstrap weights that is provided to users along with the survey data. In this article, we show how to use these bootstrap weights to obtain an estimator of the conditional bias. Our bootstrap estimator is evaluated in a simulation study and illustrated using data from the Canadian Survey of Household Spending.
{"title":"Bootstrap Estimation of the Conditional Bias for Measuring Influence in Complex Surveys","authors":"J. Beaumont, Cynthia Bocci, Michel St-Louis","doi":"10.1093/jssam/smab029","DOIUrl":"https://doi.org/10.1093/jssam/smab029","url":null,"abstract":"\u0000 In sample surveys that collect information on skewed variables, it is often desirable to assess the influence of sample units on the sampling error of survey-weighted estimators of finite population parameters. The conditional bias is an attractive measure of influence that accounts for the sampling design and the estimation method. It is defined as the design expectation of the sampling error conditional on a given unit being selected in the sample. The estimation of the conditional bias is relatively straightforward for simple sampling designs and estimators. However, for complex designs or complex estimators, it may be tedious to derive an explicit expression for the conditional bias. In those complex surveys, variance estimation is often achieved through replication methods such as the bootstrap. Bootstrap methods of variance estimation are typically implemented by producing a set of bootstrap weights that is provided to users along with the survey data. In this article, we show how to use these bootstrap weights to obtain an estimator of the conditional bias. Our bootstrap estimator is evaluated in a simulation study and illustrated using data from the Canadian Survey of Household Spending.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":" ","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46294511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For regression models where data are obtained from sample surveys, the statistical analysis is often based on approaches that are either non-robust or inefficient. The handling of survey data requires more appropriate techniques, as the classical methods usually result in biased and inefficient estimates of the underlying model parameters. This article is concerned with the development of a new approach for obtaining robust and efficient estimates of regression model parameters when dealing with survey sampling data. Asymptotic properties of such estimators are established under mild regularity conditions. To demonstrate the performance of the proposed method, Monte Carlo simulation experiments are carried out; they show that the estimators obtained from the proposed methodology are robust and more efficient than many of those obtained from existing approaches, particularly when the survey data produce residuals with heavy-tailed or skewed distributions and/or when a few gross outliers are present. Finally, the proposed approach is illustrated with a real data example.
{"title":"Rank-Based Inference for Survey Sampling Data","authors":"A. Adekpedjou, H. Bindele","doi":"10.1093/jssam/smab019","DOIUrl":"https://doi.org/10.1093/jssam/smab019","url":null,"abstract":"\u0000 For regression models where data are obtained from sampling surveies, the statistical analysis is often based on approaches that are either non-robust or inefficient. The handling of survey data requires more appropriate techniques, as the classical methods usually result in biased and inefficient estimates of the underlying model parameters. This article is concerned with the development of a new approach of obtaining robust and efficient estimates of regression model parameters when dealing with survey sampling data. Asymptotic properties of such estimators are established under mild regularity conditions. To demonstrate the performance of the proposed method, Monte Carlo simulation experiments are carried out and show that the estimators obtained from the proposed methodology are robust and more efficient than many of those obtained from existing approaches, mainly if the survey data tend to result in residuals with heavy-tailed or skewed distributions and/or when there are few gross outliers. Finally, the proposed approach is illustrated with a real data example.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":" ","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44965476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Text answers to open-ended questions are typically manually coded into one of several codes. Usually, a random subset of text answers is double-coded to assess intercoder reliability, but most of the data remain single-coded. Any disagreement between the two coders points to an error by one of the coders. When the budget allows double-coding additional text answers, we propose employing statistical learning models to predict which single-coded answers have a high risk of a coding error. Specifically, we train a model on the double-coded random subset and predict the probability that the single-coded codes are correct. Then, the text answers with the highest risk are double-coded for verification. In experiments with three data sets, we found that, on average, this method identifies two to three times as many coding errors in the additional text answers as random selection does. We conclude that this method is preferable whenever the budget permits additional double-coding. When there are many intercoder disagreements, the benefit can be substantial.
{"title":"A Model-Assisted Approach for Finding Coding Errors in Manual Coding of Open-Ended Questions","authors":"Zhoushanyue He, Matthias Schonlau","doi":"10.1093/jssam/smab022","DOIUrl":"https://doi.org/10.1093/jssam/smab022","url":null,"abstract":"\u0000 Text answers to open-ended questions are typically manually coded into one of several codes. Usually, a random subset of text answers is double-coded to assess intercoder reliability, but most of the data remain single-coded. Any disagreement between the two coders points to an error by one of the coders. When the budget allows double coding additional text answers, we propose employing statistical learning models to predict which single-coded answers have a high risk of a coding error. Specifically, we train a model on the double-coded random subset and predict the probability that the single-coded codes are correct. Then, text answers with the highest risk are double-coded to verify. In experiments with three data sets, we found that this method identifies two to three times as many coding errors in the additional text answers as compared to random guessing, on average. We conclude that this method is preferred if the budget permits additional double-coding. When there are a lot of intercoder disagreements, the benefit can be substantial.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":" ","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45517153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the increasing use of dual-mode data collection, researchers of public opinion have shown considerable interest in understanding response differences across interview modes. Are mode effects an outcome of representation or measurement differences across modes? We conducted a dual-mode survey (web and telephone) using Florida’s voter file as the sampling frame, randomly assigning registered voters to one mode or the other. Having a priori information about the respondents allows us to gauge whether and how sample composition differences may be driven by mode effects, and whether mode affects estimated models of political behavior. Survey mode effects remain significant for issue voting even when the sampling design is similar across modes.
{"title":"DETERMINED BY MODE? REPRESENTATION AND MEASUREMENT EFFECTS IN A DUAL-MODE STATEWIDE SURVEY","authors":"Enrijeta Shino, Michael D. Martinez, Michael Binder","doi":"10.1093/jssam/smab012","DOIUrl":"https://doi.org/10.1093/jssam/smab012","url":null,"abstract":"\u0000 With the increasing usage of dual-mode data collection, researchers of public opinion have shown considerable interest in understanding response differences across different interview modes. Are mode effects an outcome of representation or measurement differences across modes? We conducted a dual-mode survey (web and telephone) using Florida’s voter file as the sampling frame, randomly assigning registered voters into one mode versus the other. Having a priori information about the respondents allows us to gauge whether and how sample composition differences may be driven by mode effects, and whether mode affects estimated models of political behavior. Survey mode effects are still significant for issue voting even when sampling design is similar for both modes.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":" ","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45317436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Response burden has been a concern in survey research for some time. One area of concern is the negative impact that response burden can have on response rates. To mitigate such impacts, survey research organizations try to minimize the burden respondents are exposed to and maximize the likelihood of response. Many organizations also try to be mindful of the role burden may play in respondents’ likelihood to participate in future surveys by implementing rest periods or survey holidays. Recently, new evidence from a study of cross-sectional household surveys provided an interesting lens through which to examine burden. The evidence demonstrated that those sampled in two independent surveys are more likely to respond to the second survey if the first survey was more difficult to complete, and that this effect was not significantly influenced by the rest period between the two surveys. These findings are compelling, and since the mechanisms influencing response in household and establishment surveys differ in important ways, a similar examination in an establishment survey context is warranted. To accomplish this, we use data from the National Agricultural Statistics Service. Overall, our research finds that prior survey features such as questionnaire complexity (or burden), prior response disposition, and rest period are significantly associated with response to subsequent surveys. We also find that sample units first receiving a more complex questionnaire have significantly higher probabilities of response to a subsequent survey than do those receiving a simpler questionnaire first. The findings in this paper have implications for nonresponse adjustments and for identifying subgroups for adaptive-design data collection.
{"title":"QUESTIONNAIRE COMPLEXITY, REST PERIOD, AND RESPONSE LIKELIHOOD IN ESTABLISHMENT SURVEYS","authors":"J. Rodhouse, T. Wilson, Heather E Ridolfo","doi":"10.1093/JSSAM/SMAB017","DOIUrl":"https://doi.org/10.1093/JSSAM/SMAB017","url":null,"abstract":"\u0000 Response burden has been a concern in survey research for some time. One area of concern is the negative impact that response burden can have on response rates. In an effort to mitigate negative impacts on response rates, survey research organizations try to minimize the burden respondents are exposed to and maximize the likelihood of response. Many organizations also try to be mindful of the role burden may play in respondents’ likelihood to participate in future surveys by implementing rest periods or survey holidays. Recently, new evidence from a study of cross-sectional household surveys provided an interesting lens to examine burden. The evidence demonstrated that those sampled in two independent surveys are more likely to respond to the second survey if the first survey was more difficult to complete, and that this effect was not significantly influenced by the rest period in between the two surveys. These findings are compelling, and since the mechanisms influencing response in household and establishment surveys differ in important ways, a similar examination in an establishment survey context is warranted. To accomplish this, data are used from the National Agricultural Statistics Service. Overall, our research finds that prior survey features such as questionnaire complexity (or burden), prior response disposition and rest period are significantly associated with response to subsequent surveys. We also find that sample units first receiving a more complex questionnaire have significantly higher probabilities of response to a subsequent survey than do those receiving a simpler questionnaire first. The findings in this paper have implications for nonresponse adjustments and identification of subgroups for adaptive design data collection.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":" ","pages":""},"PeriodicalIF":2.1,"publicationDate":"2021-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41839493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}