The Benefits of Conversational Interviewing Depend on Who Asks the Questions and the Types of Questions They Ask
Pub Date: 2020-09-25 · DOI: 10.18148/SRM/2020.V14I5.7617 · Survey Research Methods, 14, 515-531
F. Hubbard, F. Conrad, Christopher Antoun
By clarifying the meaning of survey questions, interviewers help ensure that respondents and researchers interpret questions the same way. This practice is at the heart of conversational interviewing and has been shown to improve response accuracy relative to standardized interviewing. This research investigates two issues: (1) Does conversational interviewing lead to improved response quality for opinion questions as it does for factual questions? and (2) Are some interviewers better suited to conduct conversational interviews than others? A total of 490 respondents in the University of Michigan Surveys of Consumers participated in standardized telephone interviews, after which they were re-asked five factual and five opinion questions. These questions were re-administered in conversational interviews for half the respondents; for the remaining half they were re-administered in standardized interviews. Interviewers also completed a nonverbal sensitivity questionnaire. Using response change between the two administrations of each question to measure response quality, the conversational technique improved quality while increasing interview duration. The comprehension benefits of conversational interviewing were no greater for opinion than factual questions. Moreover, interviewers low in nonverbal sensitivity more often gave definitions before respondents were able to speak, but this did not affect data quality (response change). Taken together, these results suggest that conversational interviewing can be effectively administered by a range of professional interviewers, although those who are more attuned to respondents' comprehension will be more efficient, and the technique will equally benefit the quality of responses to questions about objective and subjective phenomena.
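To make the response-change measure concrete, here is a minimal Python sketch of how change between the two administrations could be tabulated by interviewing condition. The data, column names, and values are hypothetical illustrations, not taken from the study:

```python
# Minimal sketch: response change between two administrations of the same
# questions, by interviewing condition. Data and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "respondent": [1, 1, 2, 2, 3, 3],
    "condition":  ["conversational", "conversational",
                   "standardized", "standardized",
                   "conversational", "conversational"],
    "question":   ["q1", "q2", "q1", "q2", "q1", "q2"],
    "answer_t1":  [3, 1, 2, 4, 5, 2],   # first (standardized) administration
    "answer_t2":  [3, 2, 2, 4, 4, 2],   # re-administration
})

# Flag whether the answer changed between administrations. In a re-interview
# design, a change can indicate that clarification corrected an initial
# misunderstanding; the interpretation depends on which condition produced
# the second answer.
df["changed"] = (df["answer_t1"] != df["answer_t2"]).astype(int)

# Response-change rate by condition.
print(df.groupby("condition")["changed"].mean())
```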
{"title":"The Benefits of Conversational Interviewing Depend on Who Asks the Questions and the Types of Questions They Ask","authors":"F. Hubbard, F. Conrad, Christopher Antoun","doi":"10.18148/SRM/2020.V14I5.7617","DOIUrl":"https://doi.org/10.18148/SRM/2020.V14I5.7617","url":null,"abstract":"By clarifying the meaning of survey questions, interviewers help assure that respondents and researchers interpret questions the same way. This practice is at the heart of conversational interviewing and has been shown to improve response accuracy relative to standardized interviewing. This research investigates two issues: (1) Does conversational interviewing lead to improved response quality for opinion questions as it does for factual questions? and (2) Are some interviewers better suited to conduct conversational interviews than others? 490 respondents in the University of Michigan Surveys of Consumers participated in standardized telephone interviews after which they were re-asked five factual and five opinion questions. These questions were re-administered in conversational interviews for half the respondents; for the remaining half they were re-administered in standardized interviews. Interviewers also completed a nonverbal sensitivity questionnaire. Using response change between the two administrations of each question to measure response quality, the conversational technique improved quality while increasing interview duration. The comprehension benefits of conversational interviewing were no greater for opinion than factual questions. Moreover, interviewers low in nonverbal sensitivity more often gave definitions before respondents were able to speak, but this did not affect data quality (response change). Taken togetherthese results suggest that conversational interviewing can beeffectively administered by a range of professional interviewers,although those who are more attuned to respondents' comprehensionwill be more efficient, and the technique will equally benefit thequality of responses to questions about objective and subjectivephenomena.","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"14 1","pages":"515-531"},"PeriodicalIF":4.8,"publicationDate":"2020-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42567724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Collecting biomarkers and biological samples in Australian primary schools: Insights from the field
Pub Date: 2020-09-25 · DOI: 10.18148/SRM/2020.V14I5.7647 · Survey Research Methods, 14, 477-486
M. Truong
There are increasing efforts to incorporate biology into the study of the social determinants of health. While biosocial methods are increasingly used in child health and health disparities research, protocols for collecting biomeasures from children in community contexts remain underdeveloped. This paper is based on the Speak Out Against Racism (SOAR) project, which collected anthropometric measurements, blood pressure readings and biosamples (buccal swabs and saliva) from a diverse sample of 124 children (aged 10-12) at three public schools. Students participated in hands-on science tutorials that provided an overview of the study protocol and goals. This paper describes the methods employed, as well as the practical and ethical considerations necessary for biomeasure data collection within schools. It also discusses the feasibility of collecting biological data in school settings, including the considerable preparation and resources required for recruitment, planning and data collection. Lessons learned and suggestions to inform future research and practice in this area are discussed.
{"title":"Collecting biomarkers and biological samples in Australian primary schools: Insights from the field","authors":"M. Truong","doi":"10.18148/SRM/2020.V14I5.7647","DOIUrl":"https://doi.org/10.18148/SRM/2020.V14I5.7647","url":null,"abstract":"There are increasing efforts to incorporate biology into our study of the social determinants of health. While there is increasing utilisation of biosocial methods in child health and health disparities research, protocols for collecting biomeasures in community contexts involving children are underdeveloped. This paper is based on the Speak Out Against Racism (SOAR) project which collected anthropometric, blood pressure and biosamples (buccal swabs and saliva) from a diverse sample of 124 children (aged 10-12) at 3 public schools. Students participated in hands-on science tutorials that provided an overview of the study protocol and goals. This paper describes the methods employed, as well as the practical and ethical considerations necessary for biomeasure data collection within schools. This study discusses the feasibility of collecting biological data in school settings, including the considerable preparation and resources required for recruitment, planning and data collection. Lessons learned and suggestions to inform future research and practice in this area are discussed.","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"14 1","pages":"477-486"},"PeriodicalIF":4.8,"publicationDate":"2020-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43094238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"\"A Methodological Review on Analyzing the Effect of National Basic Livelihood Security Program\"","authors":"Wonjin Lee","doi":"10.20997/sr.21.3.7","DOIUrl":"https://doi.org/10.20997/sr.21.3.7","url":null,"abstract":"","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"15 1","pages":""},"PeriodicalIF":4.8,"publicationDate":"2020-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85535101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Coding of Open-ended Questions into Multiple Classes: Whether and How to Use Double Coded Data
Pub Date: 2020-08-10 · DOI: 10.18148/SRM/2020.V14I3.7639 · Survey Research Methods, 14, 267-287
Zhoushanyue He, Matthias Schonlau
Responses to open-ended questions in surveys are usually coded into pre-specified classes, either manually or automatically using a statistical learning algorithm. Automatic coding of open-ended responses relies on a set of manually coded responses, on which a statistical learning model is fitted. In this paper, we investigate whether and how double coding can help improve the automatic classification of open-ended responses. We evaluate four strategies for training the statistical algorithm on double-coded data, using experiments on simulated and real data. We find that, when the data are already double-coded (i.e., double coding does not incur additional costs), double coding in which an expert resolves intercoder disagreement leads to the greatest classification accuracy. However, when there is a fixed budget for manual coding, single coding is preferable if the coding error rate is anticipated to be less than about 35% to 45%.
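As an illustration of the kind of comparison the paper describes, the following sketch trains a toy text classifier under two labeling strategies: single-coded labels versus double-coded labels with disagreements resolved by an idealized expert. Everything here (the data, error rates, and classifier) is simulated and assumed; it is not the authors' implementation, and the paper evaluates four strategies rather than the two shown:

```python
# Sketch of two training strategies for auto-coding open-ended answers:
# (a) labels from a single coder, (b) double coding with an "expert"
# resolving disagreements. Data, labels and error rates are simulated.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Toy open-ended responses and their true classes.
texts = (["cost of food and rent going up"] * 30 +
         ["worried about losing my job"] * 30 +
         ["healthcare is too expensive"] * 30)
y_true = np.array([0] * 30 + [1] * 30 + [2] * 30)

def noisy_codes(y, error_rate):
    """Simulate a human coder who miscodes with the given error rate."""
    y = y.copy()
    flip = rng.random(len(y)) < error_rate
    y[flip] = rng.integers(0, 3, flip.sum())
    return y

coder1, coder2 = noisy_codes(y_true, 0.2), noisy_codes(y_true, 0.2)
# Expert resolution: keep agreed codes, resolve disagreements correctly
# (an idealisation of the expert-adjudication strategy).
resolved = np.where(coder1 == coder2, coder1, y_true)

X_tr, X_te, i_tr, i_te = train_test_split(
    texts, np.arange(len(texts)), test_size=0.3, random_state=0)

for name, labels in [("single coder", coder1), ("expert-resolved", resolved)]:
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(X_tr, labels[i_tr])
    acc = (model.predict(X_te) == y_true[i_te]).mean()
    print(f"{name}: accuracy vs. true codes = {acc:.2f}")
```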
{"title":"Automatic Coding of Open-ended Questions into Multiple Classes: Whether and How to Use Double Coded Data","authors":"Zhoushanyue He, Matthias Schonlau","doi":"10.18148/SRM/2020.V14I3.7639","DOIUrl":"https://doi.org/10.18148/SRM/2020.V14I3.7639","url":null,"abstract":"Responses to open-ended questions in surveys are usually coded into pre-specified classes, manually or automatically using a statistical learning algorithm. Automatic coding of open-ended responses relies on a set of manually coded responses, based on which a statistical learning model is fitted. In this paper, we investigate whether and how double coding can help improve the automatic classification of open-ended responses. We evaluate four strategies for training the statistical algorithm on double coded data, using experiments on simulated and real data. We find that, when the data are already double-coded (i.e. double coding does not incur additional costs), double coding where an expert resolves intercoder disagreement leads to the greatest classification accuracy. However, when we have a fixed budget for manually coding, single coding is preferable if the coding error rate is anticipated to be less than about 35% to 45%.","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"14 1","pages":"267-287"},"PeriodicalIF":4.8,"publicationDate":"2020-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46333020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing Better Questions for Complex Concepts with Reflective Indicators
Pub Date: 2020-08-10 · DOI: 10.18148/SRM/2020.V14I3.7613 · Survey Research Methods, 14, 253-266
W. Saris, I. Gallhofer
Many concepts in the social sciences are measured using multiple indicators. Blalock (1968) called such concepts concepts-by-postulation, because a theoretical argument is needed to define them. Within the set of concepts-by-postulation, a distinction has been made between concepts with reflective indicators and concepts with formative indicators. This distinction concerns whether the latent concept is assumed to determine the observed indicators (reflective) or the indicators together are assumed to determine the latent concept of interest (formative). Blalock complained that, in developing measurement procedures for complex concepts, researchers think mainly about the questions and not about the concepts these questions measure. As a result, the questions used contain unique components, which reduce the quality of the composite score based on these questions as a measure of the complex concept of interest. Saris and Gallhofer have shown how alternative question formulations can be developed to measure so-called concepts-by-intuition. In this paper, we show that the same procedure can be used to avoid unique components in the measurement of complex concepts with reflective indicators, and that the quality of the composite score for complex concepts can thereby be considerably increased.
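A small simulation can illustrate the core argument that unique components degrade a composite score built from reflective indicators. The loadings, error variances, and number of indicators below are arbitrary assumptions, not taken from the paper:

```python
# Sketch: why unique components in questions hurt a composite score.
# Simulated reflective indicators; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
latent = rng.normal(size=n)                      # the concept-by-postulation

def indicators(unique_sd):
    # Three reflective indicators: latent signal + random error
    # + a question-specific ("unique") component.
    return [0.8 * latent
            + rng.normal(scale=0.4, size=n)           # random error
            + rng.normal(scale=unique_sd, size=n)     # unique component
            for _ in range(3)]

for unique_sd in (0.0, 0.8):
    composite = np.mean(indicators(unique_sd), axis=0)
    r = np.corrcoef(composite, latent)[0, 1]
    print(f"unique-component SD = {unique_sd}: "
          f"corr(composite, latent) = {r:.3f}")
```

The larger the unique variance carried by each question, the weaker the composite's correlation with the latent concept, which is the loss of quality the paper's question-design procedure aims to avoid.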
{"title":"Designing Better Questions for Complex Concepts with Reflective Indicators","authors":"W. Saris, I. Gallhofer","doi":"10.18148/SRM/2020.V14I3.7613","DOIUrl":"https://doi.org/10.18148/SRM/2020.V14I3.7613","url":null,"abstract":"There are many concepts in the social sciences that are measured using multiple indicators. Such concepts have been called by Blalock (1968) concepts-by-postulation because one needs a theoretical argument to define them. Within the set of concepts-by-postulation, a distinction has been made between concepts with reflective indicators and concepts with formative indicators. This distinction refers to the assumption whether the latent concepts determine the observed indicators (reflective) or that the indicators together determine the latent concept of interest (formative). Blalock complained that developing measurement procedures for complex concepts, researchers think mainly about questions not about concepts that these questions measure. In this way, the questions used contain unique components which reduce the quality of the composite score based on these questions as measure for the complex concept of interest. Saris and Gallhofer have shown how alternative formulated questions can be developed to measure so-called concepts-by-intuition. In this paper, we will show that the same procedure can be used to avoid unique components in the measurement of complex concepts with reflective indicators and that in this way the quality of the composite score for complex concepts can considerably be increased.","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"14 1","pages":"253-266"},"PeriodicalIF":4.8,"publicationDate":"2020-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41878165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Memory Effects in Repeated Survey Questions: Reviving the Empirical Investigation of the Independent Measurements Assumption
Pub Date: 2020-07-10 · DOI: 10.18148/SRM/2020.V14I3.7579 · Survey Research Methods, 14, 325-344
Hannah Schwarz, M. Revilla, Wiebke Weber
It is common to repeat survey questions in the social sciences, for example to estimate test-retest reliability or in pretest-posttest experimental designs. An underlying assumption is that the repetition of questions leads to independent measurements. Critics point to respondents' memory as a source of bias for the resulting estimates. Yet there is little empirical evidence showing how large memory effects are within the same survey, and none showing whether memory effects can be decreased through purposeful intervention during a survey. We aim to address both of these points based on data from a lab-based web survey containing an experiment. We repeated one of the initial questions at the end of the survey (on average 127 questions later) and asked respondents whether they recalled their previous answer and to reproduce it. Furthermore, we compared respondents' memory of previously given responses between two experimental groups: a control group, in which regular survey questions were asked between the repetitions, and a treatment group, which additionally received a memory interference task aimed at decreasing memory. We found that, after an average 20-minute interval, 60% of the respondents were able to correctly reproduce their previous answer, of whom an estimated 17% did so due to memory. We did not observe a decrease in memory as time intervals between repetitions became longer. This indicates a serious challenge to using repeated questions within the same survey. Moreover, the tested memory interference task did not reduce respondents' recall of their previously given answers or the memory effect.
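The stakes of the independence assumption can be illustrated with a short simulation: if some share of respondents simply reproduce their first answer from memory, the test-retest correlation is inflated. All parameters below are illustrative assumptions; the 0.17 memory share loosely echoes the paper's 17% estimate but is applied here to all respondents rather than to correct reproducers:

```python
# Sketch: how respondent memory can inflate a test-retest reliability
# estimate. Purely simulated; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
true_attitude = rng.normal(size=n)

def measure():
    # One administration of the question: true attitude + measurement error.
    return true_attitude + rng.normal(scale=0.7, size=n)

t1 = measure()
t2_independent = measure()          # ideal case: independent re-measurement

# Memory contamination: a share of respondents reproduce their first
# answer instead of answering anew.
memory_share = 0.17
copies = rng.random(n) < memory_share
t2_memory = np.where(copies, t1, measure())

for label, t2 in [("independent", t2_independent), ("with memory", t2_memory)]:
    print(f"{label}: test-retest r = {np.corrcoef(t1, t2)[0, 1]:.3f}")
```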
{"title":"Memory Effects in Repeated Survey Questions: Reviving the Empirical Investigation of the Independent Measurements Assumption","authors":"Hannah Schwarz, M. Revilla, Wiebke Weber","doi":"10.18148/SRM/2020.V14I3.7579","DOIUrl":"https://doi.org/10.18148/SRM/2020.V14I3.7579","url":null,"abstract":"It is common to repeat survey questions in the social sciences, for example to estimate test-retest reliability or in pretest-posttest experimental designs. An underlying assumption is that the repetition of questions leads to independent measurements. Critics point to respondents’ memory as a source of bias for the resulting estimates. Yet there is little empirical evidence showing how large memory effects are within the same survey and none showing whether memory effects can be decreased through purposeful intervention during a survey. We aim to address both of these points based on data from a lab-based web survey containing an experiment. We repeated one of the initial questions at the end of the survey (on average 127 questions later) and asked respondents if they recall their previous answer and to reproduce it. Furthermore, we compared respondents’ memory of previously given responses between two experimental groups: A control group, where regular survey questions were asked in between repetitions and a treatment group which, additionally, received a memory interference task aimed at decreasing memory. We found that, after an average 20-minute interval, 60% of the respondents were able to correctly reproduce their previous answer, of which we estimated 17% to do so due to memory. We did not observe a decrease in memory as time intervals between repetitions become longer. This indicates a serious challenge to using repeated questions within the same survey. Moreover, the tested memory interference task did not reduce respondents’ recall of their previously given answer or the memory effect.","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"14 1","pages":"325-344"},"PeriodicalIF":4.8,"publicationDate":"2020-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41990406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The More Similar, the Better?
Pub Date: 2020-07-10 · DOI: 10.18148/SRM/2020.V14I3.7621 · Survey Research Methods, 14, 301-323
F. Bittmann
Previous research shows that sociodemographic (mis)matches between respondents and interviewers can affect unit and item nonresponse in survey situations. The current paper attempts to deepen the understanding of these findings and investigates the effect of matching on gender and age on item nonresponse, reluctance to answer items, and the probability that a third person interferes with the interview. Using multilevel European Social Survey data from 23 countries, it is demonstrated that some matching constellations improve data quality significantly. The results corroborate and extend previous findings and underline that sociodemographic matching has the potential to enhance data quality in face-to-face interviews.
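As a sketch of the kind of multilevel analysis described (respondents nested in countries), the following fits a random-intercept model with statsmodels on simulated data. Variable names, effect sizes, and the linear specification are assumptions for illustration, not the paper's exact models, which may use, for example, multilevel logistic regressions for binary outcomes:

```python
# Sketch: a multilevel model of item nonresponse on interviewer-respondent
# matching, with respondents nested in 23 countries. Simulated data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2_000
countries = rng.integers(0, 23, n)
country_effect = rng.normal(scale=0.5, size=23)   # random country intercepts

df = pd.DataFrame({
    "country": countries,
    "gender_match": rng.integers(0, 2, n),   # 1 = interviewer same gender
    "age_match": rng.integers(0, 2, n),      # 1 = interviewer similar age
})
# Simulated count of item-nonresponse answers: matching lowers it slightly.
df["item_nonresponse"] = (
    3.0 - 0.3 * df["gender_match"] - 0.2 * df["age_match"]
    + country_effect[df["country"]] + rng.normal(scale=1.0, size=n)
)

# Linear mixed model with a random intercept per country.
model = smf.mixedlm("item_nonresponse ~ gender_match + age_match",
                    df, groups=df["country"]).fit()
print(model.summary())
```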
{"title":"The More Similar, the Better?","authors":"F. Bittmann","doi":"10.18148/SRM/2020.V14I3.7621","DOIUrl":"https://doi.org/10.18148/SRM/2020.V14I3.7621","url":null,"abstract":"Previous research shows that sociodemographic (mis)matches between respondents and interviewers can affect unit and item nonresponse in survey situations. The current paper attempts to deepen the understanding of these findings and investigates the effect of matching with regard to gender and age on item nonresponse, reluctance to answer items and the probability that a third person is interfering with the interview. Using multilevel European Social Survey data from 23 countries, it is demonstrated that some constellations of matching improve data quality significantly. The results corroborate and extend previous findings and underline that sociodemographic matching has the potential to enhance data quality in face to face situations.","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"14 1","pages":"301-323"},"PeriodicalIF":4.8,"publicationDate":"2020-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45370009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Response Patterns in a Multi-day Diary Survey: Implications for Adaptive Survey Design
Pub Date: 2020-07-10 · DOI: 10.18148/SRM/2020.V14I3.7465 · Survey Research Methods, 14, 289-300
Mengyao Hu, Edmundo Roberto Melipillán, B. West, John A. Kirlin, Ilse Paniagua
In multi-day diary surveys, respondents make participation decisions every day. Some respondents remain committed throughout, whereas others drop out after the first few days or in the later days of the survey, leading to item nonresponse. Such item nonresponse at the day level can introduce nonresponse and underreporting error, reduce statistical power, and bias survey estimates. Despite its critical influence on survey data quality, day-level item nonresponse in diary surveys has received surprisingly little attention. This study evaluates different response patterns in a seven-day diary survey and considers how these patterns might inform adaptive designs for future diary surveys. We analyzed data from the U.S. National Household Food Acquisition and Purchase Survey (FoodAPS), a nationally representative survey designed to collect comprehensive data on household food purchases and acquisitions during one-week periods. In total, 4,826 households comprising 14,317 individuals responded to the survey. To evaluate how response patterns differed across respondents and across the diary period, we employed latent class growth analysis (LCGA), which identifies groups of respondents based on their reporting patterns. Our analysis identified five classes of respondents, ranging from highly motivated respondents to those exhibiting minimal effort. We also compared the identified classes in terms of covariate profiles and distributions on key FoodAPS outcomes. Respondents with low-motivation response patterns recorded fewer events in the diary for the key variables. To inform adaptive designs in future diary data collections, we examined respondent characteristics that were known before the diary portion of the survey and were significantly associated with class membership. Several respondent characteristics (e.g., low education) and paradata features (e.g., longer initial interviews) were linked with the probability of having low-motivation response patterns. Our findings have implications for the future design of multi-day diary surveys.
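LCGA itself is typically run in specialized software; as a simplified stand-in, the sketch below clusters simulated seven-day reporting trajectories with a Gaussian mixture to show how distinct response-pattern classes can be recovered from day-level data. All data and class shapes are invented for illustration:

```python
# Sketch: grouping respondents by day-level reporting trajectories.
# A Gaussian mixture serves as a simplified stand-in for LCGA here.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
days = np.arange(7)

def trajectories(n, start, slope):
    """Simulated daily event counts: linear trend plus noise, floored at 0."""
    base = start + slope * days
    return np.clip(base + rng.normal(scale=0.8, size=(n, 7)), 0, None)

# Three illustrative patterns: committed, declining effort, minimal effort.
X = np.vstack([
    trajectories(300, start=5.0, slope=0.0),    # highly motivated
    trajectories(300, start=5.0, slope=-0.7),   # declining over the week
    trajectories(300, start=1.0, slope=-0.1),   # minimal effort throughout
])

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
labels = gmm.predict(X)
for k in range(3):
    print(f"class {k}: n = {(labels == k).sum()}, "
          f"mean daily counts = {X[labels == k].mean(axis=0).round(1)}")
```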
{"title":"Response Patterns in a Multi-day Diary Survey: Implications for Adaptive Survey Design","authors":"Mengyao Hu, Edmundo Roberto Melipillán, B. West, John A. Kirlin, Ilse Paniagua","doi":"10.18148/SRM/2020.V14I3.7465","DOIUrl":"https://doi.org/10.18148/SRM/2020.V14I3.7465","url":null,"abstract":"In multi-day diary surveys, respondents make participation decisions every day. Some respondents remain committed throughout, whereas others drop out after the first few days or in the later days of the survey, leading to item nonresponse. Such item nonresponse at the day level can introduce nonresponse and underreporting error, reduce statistical power and bias survey estimates. Despite its critical influence on survey data quality, the important issue of day-level item nonresponse in diary surveys has received surprisingly little attention. This study evaluates different response patterns in a seven-day diary survey and considers how these patterns might inform adaptive designs for future diary surveys. We analyzed data from the U.S. National Household Food Acquisition and Purchase Survey (FoodAPS), a nationally representative survey designed to collect comprehensive data on household food purchases and acquisitions during one-week time periods. In total, there were 4,826 households with 14,317 individuals that responded to the survey. To evaluate how response patterns differed across respondents and across the diary period, we employed a latent class growth analysis (LCGA), which enables the identification of different groups of respondents based on their reporting patterns. Our analysis identified five classes of respondents, ranging from highly-motivated respondents to those exhibiting minimal effort. We also compared the identified classes in terms of covariate profiles and distributions on key FoodAPS outcomes. Respondents who showed low-motivated response patterns were found to record fewer events in the diary for the key variables. To inform adaptive designs in future diary data collections, we examined respondent characteristics that were known before the diary portion of the survey and significantly associated with class membership. Several respondent characteristics (e.g., low education) and paradata features (e.g., longer initial interviews) were linked with the probability of having low-motivation response patterns. Our findings have implications for future designs of multi-day diary surveys.","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"14 1","pages":"289-300"},"PeriodicalIF":4.8,"publicationDate":"2020-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41933852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Survey Research Methods during the COVID-19 Crisis
Pub Date: 2020-06-04 · DOI: 10.18148/SRM/2020.V14I2.7769 · Survey Research Methods, 14, 93-94
U. Kohler
Good things take time—this saying is certainly a firm part of the scientific belief system. Scientists prefer valid and reliable evidence to rapid results; they are therefore open to running additional checks to rule out yet another artifact hypothesis, and they patiently bear the challenges of lengthy reviewing processes. It is our implicit understanding that all this serves the joint effort of scientists to approximate the truth, at least in the long run. In these rapidly changing times, our belief system is challenged. The COVID-19 crisis created an enormous demand for rapid research results—and rightly so. Policymakers have every reason to demand scientific evidence to make informed decisions. What else should they be asking for? Journalists also have every reason to demand scientific evidence in order to efficiently evaluate and report on policymakers' decisions. And last but not least, everyday people have every reason to insist that the harsh restrictions of their personal freedoms are at least this: evidence based. In the current situation the demand for scientific evidence is targeted predominantly at medical and epidemiological research—again with good reason, since we are confronted with a pandemic disease. However, it is our firm conviction that survey research can and should contribute to scientific discovery in the realm of the COVID-19 crisis. Most obviously, because epidemiologists use survey research methods to estimate SARS-CoV-2 prevalence and incidence. But there are other reasons as well. The various non-pharmacological interventions (NPIs), such as stay-home or shelter-in-place orders and rules for social distancing, that restrict people's everyday lives affect not only their behavior but also their attitudes and values. The way people react to the NPIs affects the NPIs' probability of success, and their effects on the economy and society as a whole. There are an enormous number of research questions currently being
{"title":"Survey Research Methods during the COVID-19 Crisis","authors":"U. Kohler","doi":"10.18148/SRM/2020.V14I2.7769","DOIUrl":"https://doi.org/10.18148/SRM/2020.V14I2.7769","url":null,"abstract":"Good things take time—this saying is certainly a firm part of the scientific belief system. Scientists prefer valid and reliable evidence to rapid results, and they are therefore open to running additional checks to rule out yet another artifact hypothesis, and patiently bear the challenges of lengthy reviewing processes. It is our implicit understanding that all this serves well in the joint effort of scientists to approximate the truth, at least in the long run. In these rapidly changing times, our belief system is challenged. The COVID-19 crisis created an enormous demand for rapid research results—and rightly so. Policymakers have every reason to demand scientific evidence to make informed decisions. What else should they be asking for? Journalists also have every reason to demand scientific evidence in order to efficiently evaluate and report on policymakers’decisions. And last but not least, everyday people also have every reason to insist that the harsh restrictions of their personal freedoms are at least this: evidence based. In the current situation the demand for scientific evidence is targeted predominantly at medical and epidemiological research—again with good reasons, since we are confronted with a pandemic disease. However, it is our firm conviction that survey research can and should contribute to scientific discovery in the realm of the COVID-19 crisis. Most obviously, because epidemiologists use survey research methods to estimate SARS-CoV-2 prevalence and incidence. But there are other reasons, as well. The various non-pharmacological interventions (NPIs) such as stay-home or shelter-in-place orders, rules for social distancing etc.that restrict people’s everyday lives not only affect their behavior, but also their attitudes and values. The way people react to the NPIs affect the NPI’s probability of success, and their effects on the economy and society as a whole. There are an enormous number of research questions currently being","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"14 1","pages":"93-94"},"PeriodicalIF":4.8,"publicationDate":"2020-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45556071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of the COVID-19 crisis on survey fieldwork: Experience and lessons from two major supplements to the U.S. Panel Study of Income Dynamics
Pub Date: 2020-06-04 · DOI: 10.18148/srm/2020.v14i2.7752 · Survey Research Methods, 14, 241-245
Narayan Sastry, Katherine McGonagle, Paula Fomby
Two major supplements to the Panel Study of Income Dynamics (PSID) were in the field during the COVID-19 outbreak in the United States: the 2019 waves of the PSID Child Development Supplement (CDS-19) and the PSID Transition into Adulthood Supplement (TAS-19). Both CDS-19 and TAS-19 abruptly terminated all face-to-face fieldwork, and for TAS-19, interviewers shifted from working in a centralized call center to working from their homes. Overall, COVID-19 had a net negative effect on response rates in CDS-19 and terminated all home visits, which represented an important study component. For TAS-19, the overall effect of COVID-19 was uncertain, but negative. The costs of adapting to COVID-19 and of providing paid time-off benefits to staff affected by the pandemic were high. Longitudinal surveys such as CDS, TAS, and PSID that span the pandemic will provide valuable information on its life-course and intergenerational consequences, making ongoing data collection vitally important.
{"title":"Effects of the COVID-19 crisis on survey fieldwork: Experience and lessons from two major supplements to the U.S. Panel Study of Income Dynamics.","authors":"Narayan Sastry, Katherine McGonagle, Paula Fomby","doi":"10.18148/srm/2020.v14i2.7752","DOIUrl":"10.18148/srm/2020.v14i2.7752","url":null,"abstract":"<p><p>Two major supplements to the Panel Study of Income Dynamics (PSID) were in the field during the COVID-19 outbreak in the United States: the 2019 waves of the PSID Child Development Supplement (CDS-19) and the PSID Transition into Adulthood Supplement (TAS-19). Both CDS-19 and TAS-19 abruptly terminated all face-to-face fieldwork and, for TAS-19, shifted interviewers from working in a centralized call center to working from their homes. Overall, COVID-19 had a net negative effect on response rates in CDS-19 and terminated all home visits that represented an important study component. For TAS-19, the overall effect of Covid-19 was uncertain, but negative. The costs were high of adapting to COVID-19 and providing paid time-off benefits to staff affected by the pandemic. Longitudinal surveys, such as CDS, TAS, and PSID, that span the pandemic will provide valuable information on its life course and intergenerational consequences, making ongoing data collection of vital importance.</p>","PeriodicalId":46454,"journal":{"name":"Survey Research Methods","volume":"14 2","pages":"241-245"},"PeriodicalIF":4.8,"publicationDate":"2020-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8168972/pdf/nihms-1633552.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38987301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}