PERCEIVED BURDEN, FOCUS OF ATTENTION, AND THE URGE TO JUSTIFY: THE IMPACT OF THE NUMBER OF SCREENS AND PROBE ORDER ON THE RESPONSE BEHAVIOR OF PROBING QUESTIONS
Katharina Meitinger, A. Toroslu, Klara Raiber, Michael Braun
Journal: Journal of Survey Statistics and Methodology (Q2, Social Sciences, Mathematical Methods)
DOI: 10.1093/JSSAM/SMAA043
Published: 2021-06-12
Citations: 1
Abstract
Web probing is a valuable tool for assessing the validity and comparability of survey items. It uses different probe types, such as category-selection probes and specific probes, to inquire about different aspects of an item. Previous web probing studies typically asked one probe type per item, but in some research situations it may be preferable to test a potentially problematic item with multiple probes. Response behavior, however, might be affected by two factors and their interaction: question order and the visual presentation of the probes on one screen versus multiple screens. In this study, we report evidence from a web experiment conducted with 532 respondents from Germany in September 2013. Experimental groups varied by number of screens (one versus two) and probe order (category-selection probe first versus specific probe first). We assessed the impact of these manipulations on several indicators of response quality, probe answer content, and respondents' motivation using logistic regressions and two-way ANOVAs. We find that multiple mechanisms shape response behavior in this context: perceived response burden, the focus of attention, the need for justification, and verbal context effects. Response behavior in the condition with two screens and the category-selection probe first outperforms all other experimental conditions. We recommend this implementation in all but one scenario: if the goal is to test an item whose key term has a potentially too broad lexical scope, we recommend starting with the specific probe but presenting it on the same screen as the category-selection probe.
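The experiment described above is a 2x2 between-subjects design (number of screens x probe order) analyzed with two-way ANOVAs. As a minimal sketch of how such an analysis decomposes variance, the snippet below computes the sums of squares for a balanced 2x2 design on simulated data. The cell names, sample sizes, and effect sizes are illustrative assumptions, not the study's actual data (the study had 532 respondents in total).

```python
import random

random.seed(42)

# Hypothetical 2x2 between-subjects design mirroring the study's factors:
# screens (1 vs 2) x probe order (category-selection first vs specific first).
# Scores and cell means below are simulated for illustration only.
n = 20  # respondents per cell (illustrative)
cell_mu = {("1 screen", "category first"): 5.0,
           ("1 screen", "specific first"): 4.6,
           ("2 screens", "category first"): 5.8,
           ("2 screens", "specific first"): 4.9}
data = {cell: [mu + random.gauss(0, 1) for _ in range(n)]
        for cell, mu in cell_mu.items()}

def mean(xs):
    return sum(xs) / len(xs)

levels_a = ["1 screen", "2 screens"]          # factor A: number of screens
levels_b = ["category first", "specific first"]  # factor B: probe order

all_scores = [x for xs in data.values() for x in xs]
grand = mean(all_scores)
mean_a = {a: mean([x for b in levels_b for x in data[(a, b)]]) for a in levels_a}
mean_b = {b: mean([x for a in levels_a for x in data[(a, b)]]) for b in levels_b}
cell_mean = {cell: mean(xs) for cell, xs in data.items()}

# Balanced two-way ANOVA sum-of-squares decomposition:
# SS_total = SS_A + SS_B + SS_AxB + SS_within
ss_a = n * len(levels_b) * sum((mean_a[a] - grand) ** 2 for a in levels_a)
ss_b = n * len(levels_a) * sum((mean_b[b] - grand) ** 2 for b in levels_b)
ss_ab = n * sum((cell_mean[(a, b)] - mean_a[a] - mean_b[b] + grand) ** 2
                for a in levels_a for b in levels_b)
ss_within = sum((x - cell_mean[cell]) ** 2
                for cell, xs in data.items() for x in xs)
ss_total = sum((x - grand) ** 2 for x in all_scores)
```

The main effects (screens, probe order) and their interaction each get their own sum of squares, which is what lets a two-way ANOVA separate, for example, a screen-number effect from an order effect. In practice one would use a statistics package rather than hand-rolled sums.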
About the journal:
The Journal of Survey Statistics and Methodology, sponsored by AAPOR and the American Statistical Association, began publishing in 2013. Its objective is to publish cutting-edge scholarly articles on statistical and methodological issues for sample surveys, censuses, administrative record systems, and other related data, and it aims to be the flagship journal for research on survey statistics and methodology. Topics of interest include survey sample design, statistical inference, nonresponse, measurement error, the effects of modes of data collection, paradata and responsive survey design, combining data from multiple sources, record linkage, disclosure limitation, and other issues in survey statistics and methodology.

The journal publishes both theoretical and applied papers, provided the theory is motivated by an important applied problem and the applied papers report on research that contributes generalizable knowledge to the field. Review papers are also welcome. Papers on a broad range of surveys are encouraged, including (but not limited to) surveys concerning business, economics, marketing research, social science, environment, epidemiology, biostatistics, and official statistics.

The journal has three sections. The Survey Statistics section presents papers on innovative sampling procedures, imputation, weighting, measures of uncertainty, small area inference, new methods of analysis, and other statistical issues related to surveys. The Survey Methodology section presents papers that focus on methodological research, including methodological experiments, methods of data collection, and the use of paradata. The Applications section contains papers involving innovative applications of methods, providing practical contributions and guidance and/or significant new findings.