A Multivariate Regression Estimator of Levels and Change for Surveys Over Time
Anne Konrad, Y. Berger
Pub Date: 2023-03-01. DOI: 10.2478/jos-2023-0002
Abstract: Rotation designs are often used in panel surveys: observations remain in the sample for a predefined number of periods and then rotate out. Information from previous waves can be exploited to improve current estimates. We propose a multivariate regression estimator that captures all the information available from both waves. By adding auxiliary variables that describe the rotational design, the proposed estimator captures the sample correlation between waves. It can be used to estimate both levels and changes.
Characteristics of Respondents to Web-Based or Traditional Interviews in Mixed-Mode Surveys. Evidence from the Italian Permanent Population Census
E. Grimaccia, A. Naccarato, G. Gallo, Novella Cecconi, Alessandro Fratoni
Pub Date: 2023-03-01. DOI: 10.2478/jos-2023-0001
Abstract: To provide useful tools for researchers designing actions to promote participation in web surveys, it is essential to study the characteristics that define the profile of a “web respondent”, so that specific interventions can be planned. In this contribution, which draws on data collected during the 2019 Italian permanent census of population and housing, we estimate a multilevel model to identify the family and geographical characteristics associated with a greater probability that the interviewed household chooses to respond online. The profile of a “computer-assisted web interview household” (CAWI-H) is then defined on the basis of the structural characteristics of this population. Moreover, the geographical distribution of households is studied according to their distance from the CAWI-H profile. The results show that households more distant from the CAWI-H profile have characteristics typical of segments of the population affected by economic and social fragility: they are mainly elderly, foreigners, residents of small towns, and people with a low level of education. It is to these households in particular that survey designers can address specific actions to enhance their willingness to participate in web surveys.
Database Reconstruction Is Not So Easy and Is Different from Reidentification
K. Muralidhar, J. Domingo-Ferrer
Pub Date: 2023-01-24. DOI: 10.48550/arXiv.2301.10213
Abstract: In recent years, it has been claimed that releasing accurate statistical information on a database is likely to allow its complete reconstruction. Differential privacy has been suggested as the appropriate methodology to prevent these attacks. These claims have recently been taken very seriously by the U.S. Census Bureau and led it to adopt differential privacy for releasing U.S. Census data. This in turn has caused consternation among users of the Census data, due to the reduced accuracy of the protected outputs, and has led to legal action against the U.S. Department of Commerce. In this article, we trace the origins of the claim that releasing information on a database automatically makes it vulnerable to being exposed by reconstruction attacks, and we show that this claim is, in fact, incorrect. We also show that reconstruction can be averted by properly using traditional statistical disclosure control (SDC) techniques. We further show that the geographic level at which exact counts are released is even more relevant to protection than the actual SDC method employed. Finally, we caution against confusing reconstruction and reidentification: using the quality of reconstruction as a metric of reidentification results in exaggerated reidentification risk figures.
Index to Volume 38, 2022. Contents of Volume 38, Numbers 1–4
Pub Date: 2022-12-01. DOI: 10.2478/jos-2022-0054
A User-Driven Method for Using Research Products to Empirically Assess Item Importance in National Surveys
Ai Rene Ong, Robert Schultz, Sofi Sinozich, Brady T West, James Wagner, Jennifer Sinibaldi, John Finamore
Pub Date: 2022-12-01. DOI: 10.2478/jos-2022-0052
Abstract: Large-scale, nationally representative surveys serve many vital functions, but they are often long and burdensome for respondents. Cutting survey length can help to reduce respondent burden and may improve data quality, but removing items from these surveys is not a trivial matter. We propose a method to empirically assess item importance and associated burden in national surveys, and to guide this decision-making process using the different research products produced from such surveys. The method is demonstrated using the Survey of Doctorate Recipients (SDR), a biennial survey administered to individuals with a science, engineering, or health doctorate. We used three main sources of information on the SDR variables: 1) a bibliography of documents using the SDR data, 2) the SDR website that allows users to download summary data, and 3) web timing paradata and break-off rates. The bibliography was coded for SDR variable usage and citation counts. Putting this information together, we identified 35 items (17% of the survey) that were not used by any of these sources, and found that the most burdensome items are highly important. We conclude with general recommendations for those hoping to employ similar methodologies in the future.
Relationship Between Past Survey Burden and Response Probability to a New Survey in a Probability-Based Online Panel
Haomiao Jin, A. Kapteyn
Pub Date: 2022-12-01. DOI: 10.2478/jos-2022-0045
Abstract: We conducted an idiographic analysis to examine the effect of survey burden, measured by the length of the most recent questionnaire and by the number of survey invitations (survey frequency) in the one-year period preceding a new survey, on the probability of responding to that new survey in a probability-based Internet panel. The individual response process was modeled by a latent Markov chain with questionnaire length and survey frequency as explanatory variables. The individual estimates were obtained using a Monte Carlo based method and then pooled to derive estimates of the overall relationships and to identify specific subgroups whose responses were more likely to be affected by questionnaire length or survey frequency. The results show an overall positive relationship between questionnaire length and response probability, and no significant relationship between survey frequency and response probability. Further analysis showed that longer questionnaires were more likely to be associated with decreased response rates among racial/ethnic minorities and introverted participants, and that frequent surveys were more likely to be associated with decreased response rates among participants from large households. We discuss the implications for panel management and advocate targeted interventions for the small subgroups whose response probability may be negatively affected by longer questionnaires or frequent surveys.
Your Best Estimate is Fine. Or is It?
Jerry Timbrook, Kristen Olson, Jolene D Smyth
Pub Date: 2022-12-01. DOI: 10.2478/jos-2022-0047
Abstract: Providing an exact answer to open-ended numeric questions can be a burdensome task for respondents. Researchers often assume that adding an invitation to estimate (e.g., “Your best estimate is fine”) to these questions reduces cognitive burden, and in turn, reduces rates of undesirable response behaviors like item nonresponse, nonsubstantive answers, and answers that must be processed into a final response (e.g., qualified answers like “about 12” and ranges). Yet there is little research investigating this claim. Additionally, explicitly inviting estimation may lead respondents to round their answers, which may affect survey estimates. In this study, we investigate the effect of adding an invitation to estimate to 22 open-ended numeric questions in a mail survey and three questions in a separate telephone survey. Generally, we find that explicitly inviting estimation does not significantly change rates of item nonresponse, rounding, or qualified/range answers in either mode, though it does slightly reduce nonsubstantive answers for mail respondents. In the telephone survey, an invitation to estimate results in fewer conversational turns and shorter response times. Our results indicate that an invitation to estimate may simplify the interaction between interviewers and respondents in telephone surveys, and neither hurts nor helps data quality in mail surveys.
Response Burden and Dropout in a Probability-Based Online Panel Study – A Comparison between an App and Browser-Based Design
C. Roberts, J. Herzing, Marc Asensio Manjon, Philip Abbet, D. Gática-Pérez
Pub Date: 2022-12-01. DOI: 10.2478/jos-2022-0043
Abstract: Survey respondents can complete web surveys using different Internet-enabled devices (PCs versus mobile phones and tablets) and different software (a web browser versus a mobile software application, or “app”). Previous research has found that completing questionnaires via a browser on mobile devices can lead to higher breakoff rates and reduced measurement quality compared to using PCs, especially where questionnaires have not been adapted for mobile administration. A key explanation is that using a mobile browser is more burdensome and less enjoyable for respondents. There are reasons to assume that apps should perform better than browsers, but so far there have been few attempts to assess this empirically. In this study, we investigate variation in experienced burden across device and software in wave 1 of a three-wave panel study comparing an app with a browser-based survey, in which sample members were encouraged to use a mobile device. We also assess device/software effects on participation at wave 2. We find that, compared to mobile browser respondents, app respondents were less likely to drop out of the study after the first wave, and that the effect of the device used was mediated by the subjective burden experienced during wave 1.
Analyzing the Association of Objective Burden Measures to Perceived Burden with Regression Trees
Daniel K. Yang, Daniell Toth
Pub Date: 2022-12-01. DOI: 10.2478/jos-2022-0048
Abstract: Higher levels of perceived burden can lead respondents to give ambiguous answers to a questionnaire, to skip items, or to refuse to continue participating in the survey, which can introduce bias and degrade the quality of the data. It is therefore important to understand what might influence respondents' perception of burden. In this article, we demonstrate, using U.S. Consumer Expenditure Survey data, how regression tree models can be used to analyze the associations between perceived burden and objective burden measures while conditioning on household demographics and other explanatory variables. The structure of the tree models allows these associations to be explored easily. Our analysis shows a relationship between perceived burden and some of the objective measures after conditioning on different demographic and household variables, and that these relationships vary considerably with respondent characteristics and the mode of the survey. Since the tree models were constructed using an algorithm that accounts for the sample design, inferences from the analysis can be made about the population. Any insights could therefore be used to help guide future decisions about survey design and data collection aimed at reducing respondent burden.
Exploring Burden Perceptions of Household Survey Respondents in the American Community Survey
J. Holzberg, Jonathan Katz
Pub Date: 2022-12-01. DOI: 10.2478/jos-2022-0050
Abstract: Minimizing respondent survey burden may help decrease nonresponse and increase data quality, but the measurement of burden has varied widely. Recent efforts have paid more attention to respondents' subjective perceptions of burden, measured through questions added to a survey. Despite the reliance on these questions as key measures, little qualitative research on them has been conducted for household surveys. This study used focus groups to examine respondents' reactions to possible sources of burden in the American Community Survey (ACS), such as survey length, sensitivity, and contact strategy; respondents' knowledge, attitudes, and beliefs about burden; and their overall perceptions of burden. The feedback was used to guide the subsequent selection and cognitive testing of questions on subjective perceptions of burden. Generally, respondents did not find the ACS burdensome. When deciding whether it was burdensome, respondents thought about the process of responding to the questionnaire, the value of the data, the fact that response is mandatory, and, to a lesser extent, the contacts they received, suggesting that these constructs are key components of burden in the ACS. There were some differences by response mode and household characteristics. The findings reinforce the importance of conducting qualitative research to ensure that questions capture the respondent burden perceptions that are important for a particular survey.