Perceived Burden, Focus of Attention, and the Urge to Justify: The Impact of the Number of Screens and Probe Order on the Response Behavior of Probing Questions

IF 1.6 · CAS Quartile 4 (Mathematics) · JCR Q2 (Social Sciences, Mathematical Methods) · Journal of Survey Statistics and Methodology · Pub Date: 2021-06-12 · DOI: 10.1093/JSSAM/SMAA043
Katharina Meitinger, A. Toroslu, Klara Raiber, Michael Braun
Cited by: 1

Abstract

Web probing is a valuable tool for assessing the validity and comparability of survey items. It uses different probe types (such as category-selection probes and specific probes) to inquire about different aspects of an item. Previous web probing studies typically asked one probe type per item, but there are research situations in which it might be preferable to test potentially problematic items with multiple probes. Response behavior, however, might be affected by two factors and their interaction: question order and the visual presentation of probes on one screen versus multiple screens. In this study, we report evidence from a web experiment conducted with 532 respondents in Germany in September 2013. Experimental groups varied by number of screens (1 versus 2) and probe order (category-selection probe first versus specific probe first). Using logistic regressions and two-way ANOVAs, we assessed the impact of these manipulations on several indicators of response quality, probe answer content, and respondent motivation. We find that multiple mechanisms shape response behavior in this context: perceived response burden, the focus of attention, the need for justification, and verbal context effects. Response behavior in the condition with two screens and the category-selection probe first outperforms all other experimental conditions. We recommend this implementation in all but one scenario: if the goal is to test an item that includes a key term with a potentially too broad lexical scope, we recommend starting with a specific probe, but on the same screen as the category-selection probe.
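The analysis described above rests on a balanced 2x2 factorial design (number of screens crossed with probe order) evaluated with two-way ANOVAs. As a minimal sketch of that analysis type, the following computes the main-effect and interaction F statistics for such a design by hand. All data here are synthetic: the cell means, cell size, and the notion of a numeric "response quality score" are illustrative assumptions, not the authors' data or code.

```python
import numpy as np

# Hypothetical 2x2 design mirroring the paper's factors:
# screens (1 vs. 2) x probe order (category-selection first vs. specific first).
# Cell means and n are invented for illustration only.
rng = np.random.default_rng(0)
n = 25  # synthetic respondents per cell (balanced design)
means = {(1, "cat-first"): 5.0, (1, "spec-first"): 4.5,
         (2, "cat-first"): 6.0, (2, "spec-first"): 4.8}
cells = {key: rng.normal(mu, 1.0, n) for key, mu in means.items()}

y = np.concatenate(list(cells.values()))
grand = y.mean()

def level_mean(pred):
    """Mean of all observations in cells matching the predicate."""
    return np.concatenate([v for k, v in cells.items() if pred(k)]).mean()

mean_screen = {s: level_mean(lambda k, s=s: k[0] == s) for s in (1, 2)}
mean_order = {o: level_mean(lambda k, o=o: k[1] == o)
              for o in ("cat-first", "spec-first")}

# Sums of squares for a balanced two-way ANOVA:
# each factor level pools 2 cells, i.e., 2n observations.
ss_screen = 2 * n * sum((m - grand) ** 2 for m in mean_screen.values())
ss_order = 2 * n * sum((m - grand) ** 2 for m in mean_order.values())
ss_cells = n * sum((v.mean() - grand) ** 2 for v in cells.values())
ss_inter = ss_cells - ss_screen - ss_order          # interaction SS
ss_error = sum(((v - v.mean()) ** 2).sum() for v in cells.values())

# F statistics: each effect has 1 df; error has 4 cells x (n - 1) df.
df_error = 4 * (n - 1)
ms_error = ss_error / df_error
f_screen = ss_screen / ms_error
f_order = ss_order / ms_error
f_inter = ss_inter / ms_error
print(f"F(screen)={f_screen:.2f}  F(order)={f_order:.2f}  "
      f"F(interaction)={f_inter:.2f}")
```

In practice one would compute p-values from the F distribution (or use a statistics package); the point here is only the sum-of-squares decomposition that a two-way ANOVA of this 2x2 design performs.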
Source journal: Journal of Survey Statistics and Methodology
CiteScore: 4.30 · Self-citation rate: 9.50% · Annual output: 40 articles
Journal overview: The Journal of Survey Statistics and Methodology, sponsored by AAPOR and the American Statistical Association, began publishing in 2013. Its objective is to publish cutting-edge scholarly articles on statistical and methodological issues for sample surveys, censuses, administrative record systems, and other related data. It aims to be the flagship journal for research on survey statistics and methodology. Topics of interest include survey sample design, statistical inference, nonresponse, measurement error, the effects of modes of data collection, paradata and responsive survey design, combining data from multiple sources, record linkage, disclosure limitation, and other issues in survey statistics and methodology.

The journal publishes both theoretical and applied papers, provided the theory is motivated by an important applied problem and the applied papers report on research that contributes generalizable knowledge to the field. Review papers are also welcomed. Papers on a broad range of surveys are encouraged, including (but not limited to) surveys concerning business, economics, marketing research, social science, environment, epidemiology, biostatistics, and official statistics.

The journal has three sections. The Survey Statistics section presents papers on innovative sampling procedures, imputation, weighting, measures of uncertainty, small area inference, new methods of analysis, and other statistical issues related to surveys. The Survey Methodology section presents papers that focus on methodological research, including methodological experiments, methods of data collection, and use of paradata. The Applications section contains papers involving innovative applications of methods, providing practical contributions and guidance, and/or significant new findings.
Latest articles in this journal:
- Real World Data Versus Probability Surveys for Estimating Health Conditions at the State Level
- Small Area Poverty Estimation under Heteroskedasticity
- Investigating Respondent Attention to Experimental Text Lengths
- A Catch-22—the Test–Retest Method of Reliability Estimation
- Poverty Mapping Under Area-Level Random Regression Coefficient Poisson Models