A Comparison of Three Designs for List-style Open-ended Questions in Web Surveys
Pub Date: 2022-09-29 | DOI: 10.1177/1525822X221115831
Tanja Kunz, Katharina Meitinger
Although list-style open-ended questions generally help us gain deeper insights into respondents’ thoughts, opinions, and behaviors, the quality of responses is often compromised. We tested a dynamic and a follow-up design to motivate respondents to give higher-quality responses than with a static design, but without overburdening them. Our results showed that a follow-up design achieved longer responses with more themes and theme areas than a static design. In contrast, the dynamic design produced the shortest answers with the fewest themes and theme areas. No differences in item nonresponse and only minor differences in additional response burden were found among the three list-style designs. Our study shows that design features and timing are crucial to clarify the desired response format and motivate respondents to give high-quality answers to list-style open-ended questions.
{"title":"A Comparison of Three Designs for List-style Open-ended Questions in Web Surveys","authors":"Tanja Kunz, Katharina Meitinger","doi":"10.1177/1525822X221115831","DOIUrl":"https://doi.org/10.1177/1525822X221115831","url":null,"abstract":"Although list-style open-ended questions generally help us gain deeper insights into respondents’ thoughts, opinions, and behaviors, the quality of responses is often compromised. We tested a dynamic and a follow-up design to motivate respondents to give higher quality responses than with a static design, but without overburdening them. Our results showed that a follow-up design achieved longer responses with more themes and theme areas than a static design. In contrast, the dynamic design produced the shortest answers with the fewest themes and theme areas. No differences in item nonresponse and only minor differences in additional response burden were found among the three list-style designs. Our study shows that design features and timing are crucial to clarify the desired response format and motivate respondents to give high-quality answers to list-style open-ended questions.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41497024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conditional Pop-up Reminders Reduce Incidence of Rounding in Web Surveys
Pub Date: 2022-09-29 | DOI: 10.1177/1525822X221115829
Rainer Schnell, Sarah Redlich, A. Göritz
Frequencies of behaviors or amounts of variables of interest are essential topics in many surveys. The use of heuristics may lead to rounded answers, resulting in an increased occurrence of certain end-digits (called heaping or digit preference). For web surveys (or CASI), we propose using a conditional prompt as input validation when digits indicating heaping are entered. We report an experiment in which respondents in an online access panel (n = 2,590) were randomly assigned to one of three groups: (1) no input validation; (2) conditional input validation if rounding was presumed; and (3) input validation every time a numerical value was entered. Conditional input validation reduces heaping for variables with high proportions of heaped values. Unconditional input validation appears to be less effective.
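The conditional design can be pictured as a simple end-digit check that fires a reminder only when a rounded-looking value is entered. The sketch below is a minimal Python illustration, assuming heaping is flagged by answers that are multiples of five above a small threshold; the actual trigger rules and prompt wording used in the experiment are not specified here.

```python
# Minimal sketch of conditional input validation against heaping.
# Assumption: answers ending in 0 or 5 (above a small threshold) are
# treated as possibly rounded; the article's real rules may differ.

def looks_heaped(value: int, min_value: int = 10) -> bool:
    """Flag values that end in 0 or 5 and are large enough to be rounded."""
    return value >= min_value and value % 5 == 0

def validate_answer(value: int) -> str:
    """Return a reminder prompt only when heaping is presumed (group 2)."""
    if looks_heaped(value):
        return ("You entered a round number. If you can, please give "
                "your best exact estimate; otherwise keep your answer.")
    return ""  # apparently exact answers pass without a prompt

if __name__ == "__main__":
    for answer in [7, 20, 23, 150]:
        prompt = validate_answer(answer)
        print(answer, "->", prompt or "accepted without prompt")
```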
{"title":"Conditional Pop-up Reminders Reduce Incidence of Rounding in Web Surveys","authors":"Rainer Schnell, Sarah Redlich, A. Göritz","doi":"10.1177/1525822X221115829","DOIUrl":"https://doi.org/10.1177/1525822X221115829","url":null,"abstract":"Frequency of behaviors or amounts of variables of interest are essential topics in many surveys. The use of heuristics might cause rounded answers, resulting in the increased occurrence of end-digits (called heaping or digit-preference). For web surveys (or CASI), we propose using a conditional prompt as input validation if digits indicating heaping are entered. We report an experiment, where respondents in an online access panel (n = 2,590) were randomly assigned to one of three groups: (1) no input validation; (2) conditional input validation if rounding was presumed; and (3) input validation every time a numerical value was entered. Conditional input validation reduces heaping for variables with high proportions of heaped values. Unconditional input validation seems to be less effective.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48387016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Critical Approach to Interviewing Academic Elites: Access, Trust, and Power
Pub Date: 2022-09-27 | DOI: 10.1177/1525822X221114226
Yali Liu, L. Buckingham
To date, research on elite interviews has primarily focused on political or business settings in European and Anglo-American contexts. In this study, we examine the procedures involved in conducting elite interviews in academic settings, drawing on fieldwork with 53 senior scholars at 10 universities across five regions of northern China. We provide a detailed, critically reflective account of strategies to gain access, develop trust, and manage the power imbalance. Our account reveals the importance of the researcher’s professional identity in gaining participants’ trust and determining adequate forms of reciprocity.
{"title":"A Critical Approach to Interviewing Academic Elites: Access, Trust, and Power","authors":"Yali Liu, L. Buckingham","doi":"10.1177/1525822X221114226","DOIUrl":"https://doi.org/10.1177/1525822X221114226","url":null,"abstract":"To date, research on elite interviews has primarily focused on political or business settings in European and Anglo-American contexts. In this study, we examine the procedures involved in conducting elite interviews in academic settings, drawing on fieldwork with 53 senior scholars at 10 universities across five regions of northern China. We provide a detailed, critically reflective account of strategies to gain access, develop trust, and manage the power imbalance. Our account reveals the importance of the researcher’s professional identity in gaining participants’ trust and determining adequate forms of reciprocity.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43521189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Devil Is in the Details: A Randomized Experiment Assessing the Effect of Providing Examples in a Survey Question across Countries
Pub Date: 2022-09-16 | DOI: 10.1177/1525822X221115506
E. Aizpurua, Gianmaria Bottoni, R. Fitzgerald
Despite the widespread use of examples in survey questions, very few studies have examined their impact on survey responses, and the evidence is based mainly on data collected in the United States using questionnaires in English. This study builds on previous research by examining the effects of providing examples using data from a cross-national probability-based web panel implemented in Estonia (n = 730), Great Britain (n = 685), and Slovenia (n = 529) during Round 8 of the European Social Survey (2017/18). Respondents were randomly assigned either a survey question measuring confidence in social media that used Facebook and Twitter as examples or a condition in which no examples were offered. The results show that confidence in social media was significantly lower in the example condition, although the effect size was small. Confidence in social media varied across countries, and the effect of providing examples was heterogeneous across countries and education levels. The implications of these findings are discussed.
{"title":"The Devil Is in the Details: A Randomized Experiment Assessing the Effect of Providing Examples in a Survey Question across Countries","authors":"E. Aizpurua, Gianmaria Bottoni, R. Fitzgerald","doi":"10.1177/1525822X221115506","DOIUrl":"https://doi.org/10.1177/1525822X221115506","url":null,"abstract":"Despite the widespread use of examples in survey questions, very few studies have examined their impact on survey responses, and the evidence is mainly based on data collected in the United States using questionnaires in English. This study builds on previous research by examining the effects of providing examples using data from a cross-national probability-based web panel implemented in Estonia (n = 730), Great Britain (n = 685), and Slovenia (n = 529) during Round 8 of the European Social Survey (2017/18). Respondents were randomly assigned a survey question measuring confidence in social media using Facebook and Twitter as examples, or another condition in which no examples were offered. The results show that confidence in social media was significantly lower in the example condition, although the effect size was small. Confidence in social media varied across countries, and the effect of providing examples was heterogeneous across countries and education levels. The implications of these findings are discussed.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46631737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Sampling Probability Definitions with Predictive Algorithms
Pub Date: 2022-09-15 | DOI: 10.1177/1525822X221113181
Matthew Jannetti, A. Carroll-Scott, Erikka Gilliam, Irene E. Headen, Maggie Beverly, F. Lê-Scherban
Place-based initiatives often use resident surveys to inform and evaluate interventions. Sampling based on well-defined sampling frames is important but challenging for initiatives that target subpopulations. Databases that enumerate total population counts can produce overinclusive sampling frames, resulting in costly outreach to ineligible participants. Quantifying eligibility before sampling using machine learning algorithms can improve efficiency and reduce costs. We developed a model to improve sampling for the West Philly Promise Neighborhood’s biennial population-representative survey of households with children within a geographic footprint. This study proposes a method to estimate the probability of study eligibility by building a well-calibrated predictive model from existing administrative data sources. Six machine learning models were evaluated; logistic regression provided the best balance of accuracy and understandable probabilities. This approach can serve as a blueprint for other population-based studies whose sampling frames cannot be well defined using traditional sources.
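As an illustration of the well-calibrated predictive model idea, the sketch below fits a calibrated logistic regression on synthetic data with scikit-learn. The features, labels, and calibration choices are hypothetical stand-ins, not the study's actual administrative data or tuning.

```python
# Minimal sketch of estimating eligibility probabilities before sampling.
# Assumptions: the synthetic "administrative" features and labels below are
# hypothetical; the study's real data sources and model are not reproduced.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 4))            # stand-in for administrative indicators
p_true = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 0.3)))
y = rng.binomial(1, p_true)            # 1 = eligible household (e.g., has children)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Logistic regression wrapped in isotonic calibration, so predicted
# probabilities can be used directly for screening cutoffs or sampling weights.
model = CalibratedClassifierCV(LogisticRegression(max_iter=1000),
                               method="isotonic", cv=5)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print("Brier score (lower = better calibrated):",
      round(brier_score_loss(y_test, probs), 4))
```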
{"title":"Improving Sampling Probability Definitions with Predictive Algorithms","authors":"Matthew Jannetti, A. Carroll-Scott, Erikka Gilliam, Irene E. Headen, Maggie Beverly, F. Lê-Scherban","doi":"10.1177/1525822X221113181","DOIUrl":"https://doi.org/10.1177/1525822X221113181","url":null,"abstract":"Place-based initiatives often use resident surveys to inform and evaluate interventions. Sampling based on well-defined sampling frames is important but challenging for initiatives that target subpopulations. Databases that enumerate total population counts can produce overinclusive sampling frames, resulting in costly outreach to ineligible participants. Quantifying eligibility before sampling using machine learning algorithms can improve efficiency and reduce costs. We developed a model to improve sampling for the West Philly Promise Neighborhood’s biennial population-representative survey of households with children within a geographic footprint. This study proposes a method to estimate probability of study eligibility by building a well-calibrated predictive model using existing administrative data sources. Six machine-learning models were evaluated; logistic regression provided the best balance of accuracy and understandable probabilities. This approach can be a blueprint for other population-based studies whose sampling frames cannot be well defined using traditional sources.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47100891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of Question Characteristics on Item Nonresponse in Telephone and Web Survey Modes
Pub Date: 2022-08-28 | DOI: 10.1177/1525822X221115838
O. Lipps, Gian-Andrea Monsch
Telephone surveys face growing criticism because of decreasing coverage, increasing costs, and the risk of producing socially desirable answers. Consequently, survey administrators consider switching their surveys to the web mode, although the web mode is more susceptible to item nonresponse. Still, we do not know whether this is true for all question types. In this article, we analyze the extent to which item nonresponse depends on question characteristics, such as question form or difficulty, in the telephone and web modes. We use data from an experiment in which individuals randomly sampled from a population register were assigned to these two modes. Distinguishing effects on the frequency of don’t know responses, item refusals, and mid-scale responding, we find more don’t know responses and item refusals in the web mode overall, but no differences in mid-scale responding. However, this relationship depends on the characteristics of the question.
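The three item-nonresponse indicators the authors distinguish can be tabulated by mode in a few lines. The sketch below is a hypothetical illustration with toy data and made-up column codings; it is not the article's analysis or data.

```python
# Minimal sketch of comparing item-nonresponse indicators across modes.
# Assumptions: the toy data frame and answer codings are hypothetical; the
# article uses register-sampled experimental data, not this example.
import pandas as pd

df = pd.DataFrame({
    "mode":   ["telephone", "web", "web", "telephone", "web", "telephone"],
    "answer": [5, None, "dont_know", 3, "refused", 3],  # one 0-10 scale item
})

def indicators(answers: pd.Series) -> pd.Series:
    """Share of don't know, refusal, and mid-scale responses for one item."""
    return pd.Series({
        "dont_know": (answers == "dont_know").mean(),
        "refusal":   (answers == "refused").mean(),
        "mid_scale": (answers == 5).mean(),  # midpoint of a 0-10 scale
    })

print(df.groupby("mode")["answer"].apply(indicators).unstack())
```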
{"title":"Effects of Question Characteristics on Item Nonresponse in Telephone and Web Survey Modes","authors":"O. Lipps, Gian-Andrea Monsch","doi":"10.1177/1525822X221115838","DOIUrl":"https://doi.org/10.1177/1525822X221115838","url":null,"abstract":"Telephone surveys face more and more criticism because of decreasing coverage and increasing costs, and the risk of producing socially desirable answers. Consequently, survey administrators consider switching their surveys to the web mode, although the web mode is more susceptible to item nonresponse. Still, we do not know whether this is true for all question types. In this article, we analyze to what extent item nonresponse depends on question characteristics such as their form or difficulty in the telephone and the web mode. We use data from an experiment in which individuals randomly sampled from a population register are experimentally assigned to these two modes. Distinguishing effects on the frequency of don’t know responses, item refusals, and mid-scale responding, we find more don’t know responses and item refusals for the web mode generally, but no differences for mid-scale responding. However, this relationship depends on the characteristics of the question.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42100080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ethnographic Methods for Identifying Cultural Concepts of Distress: Developing Reliable and Valid Measures
Pub Date: 2022-08-04 | DOI: 10.1177/1525822X221113178
Jeffrey G. Snodgrass, A. Brewis, H. Dengah, W. Dressler, B. Kaiser, B. Kohrt, Emily Mendenhall, Seth Sagstetter, L. Weaver, K. X. Zhao
We review ethnographic methods that allow researchers to assess distress in a culturally sensitive manner. We begin with an overview of standardized biomedical and psychological approaches to assessing distress cross-culturally. We then focus on literature describing the development of reliable and valid culturally sensitive assessment tools that can serve as complements or alternatives to biomedical categories and diagnostic frameworks. The methods we describe are useful in identifying forms of suffering—expressed in culturally salient idioms of distress—that might be misidentified by biomedical classifications. We highlight the utility of a cognitive anthropological theoretical approach for developing measures that attend to local cultural categories of knowledge and experience. Attending to cultural insider perspectives is necessary because expressions of distress, thresholds of tolerance for distress, expectations about stress inherent in life, conceptions of the good life, symptom expression, and modes of help-seeking vary across cultures.
{"title":"Ethnographic Methods for Identifying Cultural Concepts of Distress: Developing Reliable and Valid Measures","authors":"Jeffrey G. Snodgrass, A. Brewis, H. Dengah, W. Dressler, B. Kaiser, B. Kohrt, Emily Mendenhall, Seth Sagstetter, L. Weaver, K. X. Zhao","doi":"10.1177/1525822X221113178","DOIUrl":"https://doi.org/10.1177/1525822X221113178","url":null,"abstract":"We review ethnographic methods that allow researchers to assess distress in a culturally sensitive manner. We begin with an overview of standardized biomedical and psychological approaches to assessing distress cross-culturally. We then focus on literature describing the development of reliable and valid culturally sensitive assessment tools that can serve as complements or alternatives to biomedical categories and diagnostic frameworks. The methods we describe are useful in identifying forms of suffering—expressed in culturally salient idioms of distress—that might be misidentified by biomedical classifications. We highlight the utility of a cognitive anthropological theoretical approach for developing measures that attend to local cultural categories of knowledge and experience. Attending to cultural insider perspectives is necessary because expressions of distress, thresholds of tolerance for distress, expectations about stress inherent in life, conceptions of the good life, symptom expression, and modes of help-seeking vary across cultures.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45451889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Short Take: Collecting Data from a Vulnerable Population during the COVID-19 Pandemic
Pub Date: 2022-08-01 | DOI: 10.1177/1525822X221077398
Sela R Harcey, Robin Gauthier, Kelly L Markowski, Jeffrey A Smith
Conducting field research with a vulnerable population is difficult under the most auspicious conditions, and these difficulties only increase during a pandemic. Here, we describe the practical challenges and ethical considerations surrounding a recent data collection effort with a high-risk population during the COVID-19 pandemic. We detail our strategies related to research design, site selection, and ethical review.
{"title":"Short Take: Collecting Data from a Vulnerable Population during the COVID-19 Pandemic.","authors":"Sela R Harcey, Robin Gauthier, Kelly L Markowski, Jeffrey A Smith","doi":"10.1177/1525822X221077398","DOIUrl":"https://doi.org/10.1177/1525822X221077398","url":null,"abstract":"<p><p>Conducting field research with a vulnerable population is difficult under the most auspicious conditions, and these difficulties only increase during a pandemic. Here, we describe the practical challenges and ethical considerations surrounding a recent data collection effort with a high-risk population during the COVID-19 pandemic. We detail our strategies related to research design, site selection, and ethical review.</p>","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8968433/pdf/10.1177_1525822X221077398.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9730306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Are Scale Direction Effects the Same in Different Survey Modes? Comparison of a Face-to-Face, a Telephone, and an Online Survey Experiment
Pub Date: 2022-08-01 | DOI: 10.1177/1525822X221105940
Ádám Stefkovics
A number of previous studies have shown that the direction of rating scales may affect the distribution of responses. There is also considerable evidence that the cognitive process of answering a survey question differs by survey mode, which suggests that scale direction effects may interact with mode effects. The aim of this study was to explore differences in scale direction effects across experimental data collected through face-to-face, telephone, and online interviews. Three different scales were used in the survey. Few signs of scale direction effects were found in the interviewer-administered surveys, whereas in the online survey, responses to the 0–10 scale were affected by the direction of the scale. The anchoring-and-adjustment heuristic may explain these mode differences, and the results suggest that it provides a better theoretical ground than satisficing theory for scalar questions.
{"title":"Are Scale Direction Effects the Same in Different Survey Modes? Comparison of a Face-to-Face, a Telephone, and an Online Survey Experiment","authors":"Ádám Stefkovics","doi":"10.1177/1525822X221105940","DOIUrl":"https://doi.org/10.1177/1525822X221105940","url":null,"abstract":"A number of previous studies have shown that the direction of rating scales may affect the distribution of responses. There is also considerable evidence that the cognitive process of answering a survey question differ by survey mode, which suggests that scale direction effects may interact with mode effects. The aim of this study was to explore scale direction effect differences between experimental data collected by face-to-face, phone, and online interviews. Three different scales were used in the survey. Few signs of scale direction effects were found in the interviewer-administered surveys, while in the online survey, in the case of the 0–10 scale, responses were affected by the direction of the scale. The anchoring-and-adjustment heuristic may explain these mode differences and the results suggest that the theory provides a better theoretical ground than satisficing theory in the case of scalar questions.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47463988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Machine Learning Model Helps Process Interviewer Comments in Computer-assisted Personal Interview Instruments: A Case Study
Pub Date: 2022-06-21 | DOI: 10.1177/1525822X221107053
Catherine Billington, Gonzalo Rivero, Andrew Jannett, Jiating Chen
During data collection, field interviewers often append notes or comments to a case in open text fields to request updates to case-level data. Processing these comments can improve data quality, but many are non-actionable, and processing remains a costly manual task. This article presents a case study using a novel application of machine learning tools to assist in the evaluation of these comments. Using over 5,000 comments from the Medical Expenditure Panel Survey, we built features that were fed to a machine learning model to predict a grouping category for each comment as previously assigned by data technicians to expedite processing. The model achieved high top-3 accuracy and was incorporated into a production tool for editing. A qualitative evaluation of the tool also provided encouraging results. This application of machine learning tools allowed a small but worthwhile increase in processing efficiency, while maintaining exacting standards for data quality.
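A minimal version of such a comment classifier, including the top-3 accuracy check, might look like the sketch below. The toy comments, category labels, and model choice are assumptions for illustration and do not reproduce the article's features or pipeline.

```python
# Minimal sketch of predicting a grouping category for interviewer comments
# and evaluating top-3 accuracy. Assumptions: the toy comments and categories
# are hypothetical; the article's actual features and model are not shown.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import top_k_accuracy_score

comments = [
    "respondent moved, new address needed",
    "phone number disconnected",
    "spelling of last name corrected",
    "no change needed, note only",
] * 25
labels = ["address_update", "contact_update",
          "name_correction", "non_actionable"] * 25

# TF-IDF features fed to a multiclass logistic regression classifier.
pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
pipeline.fit(comments, labels)

# Top-3 accuracy: the technician-assigned category is among the model's
# three highest-probability suggestions shown in the editing tool.
scores = pipeline.predict_proba(comments)
print("top-3 accuracy:",
      top_k_accuracy_score(labels, scores, k=3, labels=pipeline.classes_))
```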
{"title":"A Machine Learning Model Helps Process Interviewer Comments in Computer-assisted Personal Interview Instruments: A Case Study","authors":"Catherine Billington, Gonzalo Rivero, Andrew Jannett, Jiating Chen","doi":"10.1177/1525822X221107053","DOIUrl":"https://doi.org/10.1177/1525822X221107053","url":null,"abstract":"During data collection, field interviewers often append notes or comments to a case in open text fields to request updates to case-level data. Processing these comments can improve data quality, but many are non-actionable, and processing remains a costly manual task. This article presents a case study using a novel application of machine learning tools to assist in the evaluation of these comments. Using over 5,000 comments from the Medical Expenditure Panel Survey, we built features that were fed to a machine learning model to predict a grouping category for each comment as previously assigned by data technicians to expedite processing. The model achieved high top-3 accuracy and was incorporated into a production tool for editing. A qualitative evaluation of the tool also provided encouraging results. This application of machine learning tools allowed a small but worthwhile increase in processing efficiency, while maintaining exacting standards for data quality.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41918066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}