{"title":"Index to Volume 38, 2022 Contents of Volume 38, Numbers 1–4","authors":"","doi":"10.2478/jos-2022-0054","DOIUrl":"https://doi.org/10.2478/jos-2022-0054","url":null,"abstract":"","PeriodicalId":51092,"journal":{"name":"Journal of Official Statistics","volume":" ","pages":""},"PeriodicalIF":1.1,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47276945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A User-Driven Method for Using Research Products to Empirically Assess Item Importance in National Surveys
Ai Rene Ong, Robert Schultz, Sofi Sinozich, Brady T. West, James Wagner, Jennifer Sinibaldi, John Finamore
Journal of Official Statistics 38: 1235–1251, December 2022. DOI: 10.2478/jos-2022-0052. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11293792/pdf/

Abstract: Large-scale, nationally representative surveys serve many vital functions, but these surveys are often long and burdensome for respondents. Cutting survey length can help to reduce respondent burden and may improve data quality, but removing items from these surveys is not a trivial matter. We propose a method to empirically assess item importance and associated burden in national surveys and to guide this decision-making process using the different research products produced from such surveys. The method is demonstrated using the Survey of Doctorate Recipients (SDR), a biennial survey administered to individuals with a Science, Engineering, or Health doctorate. We used three main sources of information on the SDR variables: 1) a bibliography of documents using the SDR data, 2) the SDR website that allows users to download summary data, and 3) web timing paradata and break-off rates. The bibliography was coded for SDR variable usage and citation counts. Putting this information together, we identified 35 items (17% of the survey) that were not used in any of these sources, and found that the most burdensome items are highly important. We conclude with general recommendations for those hoping to employ similar methods in the future.
Relationship Between Past Survey Burden and Response Probability to a New Survey in a Probability-Based Online Panel
Haomiao Jin, A. Kapteyn
Journal of Official Statistics 38: 1051–1067, December 2022. DOI: 10.2478/jos-2022-0045

Abstract: We conducted an idiographic analysis to examine the effect of survey burden, measured by the length of the most recent questionnaire or the number of survey invitations (survey frequency) in the year preceding a new survey, on the probability of responding to that survey in a probability-based Internet panel. The individual response process was modeled by a latent Markov chain with questionnaire length and survey frequency as explanatory variables. The individual estimates were obtained using a Monte Carlo-based method and then pooled to derive estimates of the overall relationships and to identify specific subgroups whose responses were more likely to be affected by questionnaire length or survey frequency. The results show an overall positive relationship between questionnaire length and response probability, and no significant relationship between survey frequency and response probability. Further analysis showed that longer questionnaires were more likely to be associated with decreased response rates among racial/ethnic minorities and introverted participants, and that frequent surveys were more likely to be associated with decreased response rates among participants from large households. We discuss the implications for panel management and advocate targeted interventions for the small subgroups whose response probability may be negatively affected by longer questionnaires or frequent surveys.
Your Best Estimate is Fine. Or is It?
Jerry Timbrook, Kristen Olson, Jolene D. Smyth
Journal of Official Statistics 38: 1097–1123, December 2022. DOI: 10.2478/jos-2022-0047

Abstract: Providing an exact answer to open-ended numeric questions can be a burdensome task for respondents. Researchers often assume that adding an invitation to estimate (e.g., “Your best estimate is fine”) to these questions reduces cognitive burden and, in turn, reduces rates of undesirable response behaviors like item nonresponse, nonsubstantive answers, and answers that must be processed into a final response (e.g., qualified answers like “about 12” and ranges). Yet there is little research investigating this claim. Additionally, explicitly inviting estimation may lead respondents to round their answers, which may affect survey estimates. In this study, we investigate the effect of adding an invitation to estimate to 22 open-ended numeric questions in a mail survey and three questions in a separate telephone survey. Generally, we find that explicitly inviting estimation does not significantly change rates of item nonresponse, rounding, or qualified/range answers in either mode, though it does slightly reduce nonsubstantive answers for mail respondents. In the telephone survey, an invitation to estimate results in fewer conversational turns and shorter response times. Our results indicate that an invitation to estimate may simplify the interaction between interviewers and respondents in telephone surveys, and neither hurts nor helps data quality in mail surveys.
Response Burden and Dropout in a Probability-Based Online Panel Study – A Comparison between an App and Browser-Based Design
C. Roberts, J. Herzing, Marc Asensio Manjon, Philip Abbet, D. Gática-Pérez
Journal of Official Statistics 38: 987–1017, December 2022. DOI: 10.2478/jos-2022-0043

Abstract: Survey respondents can complete web surveys using different Internet-enabled devices (PCs versus mobile phones and tablets) and using different software (web browser versus a mobile software application, “app”). Previous research has found that completing questionnaires via a browser on mobile devices can lead to higher breakoff rates and reduced measurement quality compared to using PCs, especially where questionnaires have not been adapted for mobile administration. A key explanation is that using a mobile browser is more burdensome and less enjoyable for respondents. There are reasons to assume apps should perform better than browsers, but so far, there have been few attempts to assess this empirically. In this study, we investigate variation in experienced burden across device and software in wave 1 of a three-wave panel study, comparing an app with a browser-based survey in which sample members were encouraged to use a mobile device. We also assess device/software effects on participation at wave 2. We find that, compared to mobile browser respondents, app respondents were less likely to drop out of the study after the first wave, and that the effect of the device used was mediated by the subjective burden experienced during wave 1.
Analyzing the Association of Objective Burden Measures to Perceived Burden with Regression Trees
Daniel K. Yang, Daniell Toth
Journal of Official Statistics 38: 1125–1144, December 2022. DOI: 10.2478/jos-2022-0048

Abstract: Higher levels of perceived burden among respondents can lead to ambiguous responses to a questionnaire, item nonresponse, or refusal to continue participating in the survey, all of which can introduce bias and degrade the quality of the data. It is therefore important to understand what might influence respondents’ perception of burden. In this article, we demonstrate, using U.S. Consumer Expenditure Survey data, how regression tree models can be used to analyze the associations between perceived burden and objective burden measures, conditioning on household demographics and other explanatory variables. The structure of the tree models allows these associations to be explored easily. Our analysis shows a relationship between perceived burden and some of the objective measures after conditioning on different demographic and household variables, and that these relationships are strongly affected by respondent characteristics and the mode of the survey. Since the tree models were constructed using an algorithm that accounts for the sample design, inferences from the analysis can be made about the population. Any insights could therefore be used to guide future decisions about survey design and data collection to help reduce respondent burden.
Determination of the Threshold in Cutoff Sampling Using Response Burden with an Application to Intrastat
Sašo Polanec, Paul A. Smith, M. Bavdaž
Journal of Official Statistics 38: 1205–1234, December 2022. DOI: 10.2478/jos-2022-0051

Abstract: Statistical offices frequently use cutoff sampling to determine which businesses in a population should be surveyed. Examples include business surveys about international trade, production, innovation, and ICT usage. Cutoff thresholds are typically set in terms of key variables of interest and aim to satisfy a minimum coverage ratio, that is, the share of aggregate values accounted for by the reporting units. In this article we propose a simple cost-benefit approach to determining the sampling cutoff that takes the response burden into account. In line with existing practice, we use the coverage ratio as our measure of accuracy and provide either analytical or numerical solutions for the cutoff. Using a business survey on the response burden of reporting trade flows within the EU (Intrastat), we present an application that illustrates our approach to cutoff determination. An important practical implication is the possibility of setting industry-contingent cutoffs.
Response Burden – Review and Conceptual Framework
Ting Yan, Douglas Williams
Journal of Official Statistics 38: 939–961, December 2022. DOI: 10.2478/jos-2022-0041

Abstract: Concerns about the burden that surveys place on respondents have a long history in the survey field. This article reviews existing conceptualizations and measurements of response burden in the survey literature. Instead of conceptualizing response burden as a one-time overall outcome, we expand the conceptual framework of response burden by positing response burden as reflecting a continuous evaluation of the requirements imposed on respondents throughout the survey process. We specifically distinguish response burden at three time points: initial burden at the time of the survey request, cumulative burden that respondents experience after starting the interview, and continuous burden for those asked to participate in a later round of interviews in a longitudinal setting. At each time point, survey and question features affect response burden. In addition, respondent characteristics can affect response burden directly, or they can moderate or mediate the relationship between survey and question characteristics and the end perception of burden. Our conceptual framework reflects the dynamic and complex interactive nature of response burden at different time points over the course of a survey. We show how this framework can be used to explain conflicting empirical findings and to guide methodological research.
Exploring Burden Perceptions of Household Survey Respondents in the American Community Survey
J. Holzberg, Jonathan Katz
Journal of Official Statistics 38: 1177–1203, December 2022. DOI: 10.2478/jos-2022-0050

Abstract: Minimizing respondent survey burden may help decrease nonresponse and increase data quality, but the measurement of burden has varied widely. Recent efforts have paid more attention to respondents’ subjective perceptions of burden, measured through the addition of questions to a survey. Despite reliance on these questions as key measures, little qualitative research has been conducted for household surveys. This study used focus groups to examine respondents’ reactions to possible sources of burden in the American Community Survey (ACS), such as survey length, sensitivity, and contact strategy; respondents’ knowledge, attitudes, and beliefs about burden; and overall perceptions of burden. Feedback was used to guide the subsequent selection and cognitive testing of questions on subjective perceptions of burden. Generally, respondents did not find the ACS to be burdensome. When deciding whether it was burdensome, respondents thought about the process of responding to the questionnaire, the value of the data, the fact that response is mandatory, and, to a lesser extent, the contacts they received, suggesting these constructs are key components of burden in the ACS. There were some differences by response mode and household characteristics. The findings reinforce the importance of conducting qualitative research to ensure questions capture the burden perceptions that matter for a particular survey.
Preface: Overview of the Special Issue on Respondent Burden
Robin L. Kaplan, J. Holzberg, S. Eckman, D. Giesen
Journal of Official Statistics 38: 929–938, December 2022. DOI: 10.2478/jos-2022-0040