Comparing Readability Measures and Computer-assisted Question Evaluation Tools for Self-administered Survey Questions

IF 1.1 | Sociology (Tier 3) | Q2 ANTHROPOLOGY | Field Methods | Pub Date: 2022-10-14 | DOI: 10.1177/1525822x221124469
Rachel Stenger, Kristen Olson, Jolene D Smyth
{"title":"自我管理调查问题的可读性测量和计算机辅助问题评估工具的比较","authors":"Rachel Stenger, Kristen Olson, Jolene D Smyth","doi":"10.1177/1525822x221124469","DOIUrl":null,"url":null,"abstract":"Questionnaire designers use readability measures to ensure that questions can be understood by the target population. The most common measure is the Flesch-Kincaid Grade level, but other formulas exist. This article compares six different readability measures across 150 questions in a self-administered questionnaire, finding notable variation in calculated readability across measures. Some question formats, including those that are part of a battery, require important decisions that have large effects on the estimated readability of survey items. Other question evaluation tools, such as the Question Understanding Aid (QUAID) and the Survey Quality Predictor (SQP), may identify similar problems in questions, making readability measures less useful. We find little overlap between QUAID, SQP, and the readability measures, and little differentiation in the tools’ prediction of item nonresponse rates. Questionnaire designers are encouraged to use multiple question evaluation tools and develop readability measures specifically for survey questions.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.1000,"publicationDate":"2022-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Comparing Readability Measures and Computer‐assisted Question Evaluation Tools for Self‐administered Survey Questions\",\"authors\":\"Rachel Stenger, Kristen Olson, Jolene D Smyth\",\"doi\":\"10.1177/1525822x221124469\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Questionnaire designers use readability measures to ensure that questions can be understood by the target population. The most common measure is the Flesch-Kincaid Grade level, but other formulas exist. This article compares six different readability measures across 150 questions in a self-administered questionnaire, finding notable variation in calculated readability across measures. Some question formats, including those that are part of a battery, require important decisions that have large effects on the estimated readability of survey items. Other question evaluation tools, such as the Question Understanding Aid (QUAID) and the Survey Quality Predictor (SQP), may identify similar problems in questions, making readability measures less useful. We find little overlap between QUAID, SQP, and the readability measures, and little differentiation in the tools’ prediction of item nonresponse rates. 
Questionnaire designers are encouraged to use multiple question evaluation tools and develop readability measures specifically for survey questions.\",\"PeriodicalId\":48060,\"journal\":{\"name\":\"Field Methods\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2022-10-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Field Methods\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.1177/1525822x221124469\",\"RegionNum\":3,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ANTHROPOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Field Methods","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1177/1525822x221124469","RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ANTHROPOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Questionnaire designers use readability measures to ensure that questions can be understood by the target population. The most common measure is the Flesch-Kincaid Grade Level, but other formulas exist. This article compares six different readability measures across 150 questions in a self-administered questionnaire, finding notable variation in calculated readability across measures. Some question formats, including those that are part of a battery, require important decisions that have large effects on the estimated readability of survey items. Other question evaluation tools, such as the Question Understanding Aid (QUAID) and the Survey Quality Predictor (SQP), may identify similar problems in questions, making readability measures less useful. We find little overlap between QUAID, SQP, and the readability measures, and little differentiation in the tools' prediction of item nonresponse rates. Questionnaire designers are encouraged to use multiple question evaluation tools and develop readability measures specifically for survey questions.
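For orientation (not part of the article itself), the Flesch-Kincaid Grade Level referenced above is a fixed formula over average sentence length and average syllables per word: 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59. The sketch below is a minimal Python illustration of that standard formula; the tokenizer and syllable counter are rough heuristics assumed here for illustration, not the authors' procedure, and such implementation choices are exactly the kind of decision the article finds can shift the calculated readability of a survey item.

```python
import re

def count_syllables(word):
    """Rough heuristic: count vowel groups, subtract a trailing silent 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade Level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

# Hypothetical example question, for illustration only
question = "During the past 12 months, how many times have you seen a doctor?"
print(round(flesch_kincaid_grade(question), 1))  # about 4.8 with this crude syllable heuristic
```

Production readability tools use dictionary-based syllable counts and more careful sentence segmentation, so their grade levels will differ from this sketch; for battery items, decisions such as whether the stem is scored with each item or separately matter as well.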
Source journal
Field Methods
CiteScore: 2.70
Self-citation rate: 5.90%
Articles published: 41
Journal description: Field Methods (formerly Cultural Anthropology Methods) is devoted to articles about the methods used by fieldworkers in the social and behavioral sciences and humanities for the collection, management, and analysis of data about human thought and/or human behavior in the natural world. Articles should focus on innovations and issues in the methods used, rather than on the reporting of research or on theoretical/epistemological questions about research. High-quality articles using qualitative and quantitative methods, from scientific or interpretative traditions, dealing with data collection and analysis in applied and scholarly research are welcome from writers in the social sciences, humanities, and related professions.
Latest articles from this journal
ChatGPTest: Opportunities and Cautionary Tales of Utilizing AI for Questionnaire Pretesting
What predicts willingness to participate in a follow-up panel study among respondents to a national web/mail survey?
Invited Review: Collecting Data through Dyadic Interviews: A Systematic Review
Offering Web Response as a Refusal Conversion Technique in a Mixed-mode Survey
Network of Categories: A Method to Aggregate Egocentric Network Survey Data into a Whole Network Structure