Verbalization of Rating Scales Taking Account of Their Polarity
Pub Date: 2023-01-15 | DOI: 10.1177/1525822x231151314
Natalja Menold
While numerical bipolar rating scales may evoke positivity bias, little is known about the corresponding bias in verbal bipolar rating scales. The choice of verbalization of the middle category may lead to response bias, particularly if it is not in line with the scale polarity. Unipolar and bipolar seven-category rating scales in which the verbalizations of the middle categories matched or did not match the implemented polarity were investigated in randomized experiments using a non-probabilistic online access panel in Germany. Bipolar rating scales exhibited higher positivity bias and acquiescence than unipolar rating scales. Reliability, validity, and equidistance tended to be violated if the verbalizations of the middle category did not match scale polarity. The results provide a rationale for rating scale verbalization.
{"title":"Verbalization of Rating Scales Taking Account of Their Polarity","authors":"Natalja Menold","doi":"10.1177/1525822x231151314","DOIUrl":"https://doi.org/10.1177/1525822x231151314","url":null,"abstract":"While numerical bipolar rating scales may evoke positivity bias, little is known about the corresponding bias in verbal bipolar rating scales. The choice of verbalization of the middle category may lead to response bias, particularly if it is not in line with the scale polarity. Unipolar and bipolar seven-category rating scales in which the verbalizations of the middle categories matched or did not match the implemented polarity were investigated in randomized experiments using a non-probabilistic online access panel in Germany. Bipolar rating scales exhibited higher positivity bias and acquiescence than unipolar rating scales. Reliability, validity, and equidistance tended to be violated if the verbalizations of the middle category did not match scale polarity. The results provide a rationale for rating scale verbalization.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46414775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accuracy of a Photo-based Smartphone Application to Assess Salivary Cortisol Sampling Time in Adolescents
Pub Date: 2022-11-28 | DOI: 10.1177/1525822X221141226
Cheng K. Fred Wen, Stefan Schneider, M. Weigensberg, B. Weerman, D. Spruijt-Metz
Accurate assessment of saliva sampling time is essential for studies that collect cortisol samples in ambulatory settings. This study examined the sampling time assessed by user-submitted photos via a mobile application (ZEMI) compared with MEMSCaps™. To evaluate the agreement of sampling times, the intra-class correlation coefficient (ICC) was computed for the time differences between when the 16 adolescents in the study were prompted to collect a sample and (1) when the MEMSCaps™ was opened (TimeM) and (2) when photos of the corresponding sample were submitted (TimeZ). The average TimeM and TimeZ were 12.06 ± 65.80 and 16.13 ± 52.07 minutes, respectively. The pooled ICC between TimeM and TimeZ was 0.986 (95% CI: 0.959–0.995), suggesting excellent correspondence between the two measurements.
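An agreement analysis of this kind can be sketched as a small ICC computation on long-format data, one timestamped delay per sample and per timing method. The variable names, toy values, and the use of the pingouin package below are illustrative assumptions, not the authors' actual analysis code.

```python
# Minimal sketch: agreement between two ways of timestamping a saliva sample.
# Column names and the choice of ICC form are assumptions; the article only
# reports a pooled ICC of 0.986.
import pandas as pd
import pingouin as pg

# toy data: minutes between the prompt and each timestamp, per sample
samples = pd.DataFrame({
    "sample_id": [1, 2, 3, 4, 5],
    "TimeM": [3.0, 10.5, -2.0, 45.0, 7.5],   # MEMSCaps opening
    "TimeZ": [4.0, 12.0, -1.0, 47.5, 9.0],   # photo submission
})

# reshape to long format: one row per (sample, method) pair
long = samples.melt(id_vars="sample_id", var_name="method", value_name="delay_min")

icc = pg.intraclass_corr(data=long, targets="sample_id",
                         raters="method", ratings="delay_min")
print(icc[["Type", "ICC", "CI95%"]])
```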
{"title":"Accuracy of a Photo-based Smartphone Application to Assess Salivary Cortisol Sampling Time in Adolescents","authors":"Cheng K. Fred Wen, Stefan Schneider, M. Weigensberg, B. Weerman, D. Spruijt-Metz","doi":"10.1177/1525822X221141226","DOIUrl":"https://doi.org/10.1177/1525822X221141226","url":null,"abstract":"Accurate assessment of saliva sampling time is essential for studies that collect cortisol sample in ambulatory settings. This study examined the sampling time assessed by user-submitted photos via a mobile application (ZEMI) compared with MEMSCaps™. Intra-class correlation coefficient (ICC) of the time differences between when the 16 adolescents in the study were prompted to collect the sample and (1) when the MEMSCaps™ was opened (TimeM), and (2) when photos of the corresponding sample were submitted (TimeZ) was computed to evaluate the agreement of sampling times. The average TimeM and TimeZ 12.06 ± 65.80 and 16.13 ± 52.07 minutes, respectively. The pooled ICC between TimeM and TimeZ was 0.986 (95% CI: 0.959–0.995), suggesting excellent correspondence between the two measurements.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46692080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Daily and Momentary Variability in Sleep, Stress, and Well-being Data in Two Samples of Health Care Workers
Pub Date: 2022-10-27 | DOI: 10.1177/1525822x221132425
Soomi Lee, C. Mu, R. Joshi, Arooj Khan
Ecological momentary assessment (EMA) can capture how sleep, stress, and well-being are related within individuals. However, the use of EMA involves participant burden, which may be a major barrier when studying at-risk populations like frontline workers. To guide future research interested in using EMA, this study examined variance components in sleep, stress, and well-being variables collected from health care workers. Two samples of hospital nurses (60 inpatient, 84 outpatient) responded to a 2-week smartphone-based EMA protocol. Adherence to the EMA protocol was good in both samples. Results from intraclass correlations showed more momentary variability in stressors and uplifts; more daily variability in sleep, fatigue, and physical symptoms; and more between-person variability in affect, rumination, and work quality. Across the variables, however, there was substantial within-person variability. Variance components were relatively consistent between workdays and non-workdays and between week 1 and week 2. Some nuanced between-sample differences were noted.
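A momentary/daily/between-person split of this kind is typically obtained from a multilevel variance decomposition. Below is a minimal sketch for one hypothetical outcome with simulated data; the model specification (random intercepts for person and for day within person, fitted with statsmodels) and all names are assumptions, since the article's modeling code is not given in the abstract.

```python
# Minimal sketch of a three-level variance decomposition: prompts within days
# within persons, for one simulated EMA outcome.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
persons, days, prompts = 50, 14, 3   # toy design: 50 nurses x 14 days x 3 prompts
df = pd.DataFrame({
    "person": np.repeat(np.arange(persons), days * prompts),
    "day": np.tile(np.repeat(np.arange(days), prompts), persons),
})
person_eff = rng.normal(0, 1.0, persons)[df["person"]]
day_eff = rng.normal(0, 0.7, persons * days)[df["person"] * days + df["day"]]
df["fatigue"] = 5 + person_eff + day_eff + rng.normal(0, 1.2, len(df))

# random intercept for person plus a variance component for day within person
model = smf.mixedlm("fatigue ~ 1", data=df, groups="person",
                    vc_formula={"day": "0 + C(day)"})
fit = model.fit(reml=True)

var_person = fit.cov_re.iloc[0, 0]   # between-person variance
var_day = fit.vcomp[0]               # day-to-day variance within persons
var_moment = fit.scale               # momentary (residual) variance
total = var_person + var_day + var_moment
print({"between-person": var_person / total,
       "daily": var_day / total,
       "momentary": var_moment / total})
```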
{"title":"Daily and Momentary Variability in Sleep, Stress, and Well-being Data in Two Samples of Health Care Workers","authors":"Soomi Lee, C. Mu, R. Joshi, Arooj Khan","doi":"10.1177/1525822x221132425","DOIUrl":"https://doi.org/10.1177/1525822x221132425","url":null,"abstract":"Ecological momentary assessment (EMA) can capture how sleep, stress, and well-being are related within individuals. However, the use of EMA involves participant burden, which may be a major barrier when studying at-risk populations like frontline workers. To guide future research interested in using EMA, this study examined variance components in sleep, stress, and well-being variables collected from health care workers. Two samples of hospital nurses (60 inpatient, 84 outpatient) responded to 2-week smartphone-based EMA. Adherence to the EMA protocol was good in both samples. Results from intraclass correlations showed more momentary variability in stressors and uplifts, more daily variability in sleep, fatigue, and physical symptoms, and more between-person variability in affect, rumination, and work quality. Across the variables, however, there was substantial within-person variability. Variance components were relatively consistent between workdays and non-workdays and between week 1 and week 2. Some nuanced between-sample differences were noted.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41287023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of Prepaid Postage Stamps and Postcard Incentives in a Web Survey Experiment
Pub Date: 2022-10-26 | DOI: 10.1177/1525822X221132401
G. Haas, Marieke Volkert, M. Senghaas
Even small monetary incentives, e.g., a one-dollar bill in a postal invitation letter, can increase the response rate in a web survey. However, in the euro currency area, the smallest banknote that can be enclosed with a postal invitation is a five-euro bill, which is costly. We therefore conducted a randomized experiment with prepaid stamp and postcard incentives as affordable alternatives. We compare our experimental groups with a control group in terms of response rates, response rates in a subsequent wave, data linkage consent, and data collection costs. Compared with the control group, the postcard incentive has no effect on any outcome except overall costs. A prepaid stamp incentive increases the overall response rate, but with different effect sizes across subgroups. We find no effect of stamp incentives on response rates in a subsequent wave or on data linkage consent.
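The core comparison described here, the response rate in each incentive arm versus the control group, can be sketched as a simple two-proportion test. The counts below are invented placeholders; the abstract does not report the underlying numbers.

```python
# Minimal sketch: comparing response rates of incentive groups against a
# control group, with hypothetical counts.
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

# respondents / invited, per experimental group (invented numbers)
groups = {
    "control":  (250, 2000),
    "postcard": (255, 2000),
    "stamp":    (300, 2000),
}

control_resp, control_n = groups["control"]
for name, (resp, n) in groups.items():
    if name == "control":
        continue
    stat, pval = proportions_ztest([resp, control_resp], [n, control_n])
    lo, hi = proportion_confint(resp, n, method="wilson")
    print(f"{name}: rate={resp / n:.3f} (95% CI {lo:.3f}-{hi:.3f}), "
          f"z={stat:.2f}, p={pval:.3f} vs. control")
```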
{"title":"Effects of Prepaid Postage Stamps and Postcard Incentives in a Web Survey Experiment","authors":"G. Haas, Marieke Volkert, M. Senghaas","doi":"10.1177/1525822X221132401","DOIUrl":"https://doi.org/10.1177/1525822X221132401","url":null,"abstract":"Even small monetary incentives, e.g., a one-dollar bill in a postal invitation letter, can increase the response rate in a web survey. However, in the euro currency area, the smallest amount of monetary incentive for a postal invitation is a five-euro bill, which is costly. As such, we conducted a random experiment with prepaid stamp and postcard incentives as affordable alternatives. We compare the effect of our experimental groups with a control group in terms of response rates, response rates in a subsequent wave, data linkage consent, and data collection costs. Compared with the control group, the postcard incentive has no effect on our outcomes except overall costs. Using a prepaid stamp incentive increases the response rate overall but with different effect sizes for subgroups. We find no effect of stamp incentives on response rates in a subsequent wave or data linkage consent.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46937338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Are Voter Rolls Suitable Sampling Frames for Household Surveys? Evidence from India
Pub Date: 2022-10-25 | DOI: 10.1177/1525822x221135369
Ruchika Joshi, J. McManus, Karan Nagpal, Andrew Fraker
We examine the use of publicly available voter rolls for household survey sampling as an alternative to household listings or field-based sampling methods. Using voter rolls for sampling can save most of the cost of constructing a sampling frame relative to a household listing, but there is limited evidence about their accuracy and completeness. We conducted a household listing of 2,416 households in 13 polling stations across four Indian states and compared the listing to voter rolls for the same polling stations. We show that voter rolls include 91% of the households found in the household listing. We conduct simulations to show that sampling from voter rolls produces estimates of household-level economic variables with minimal bias. These results suggest that voter rolls may be suitable for constructing household sampling frames, particularly in rural India.
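The simulation logic, checking whether drawing samples from a frame that covers roughly 91% of households biases estimates, can be illustrated as below. The data-generating assumptions (a log-normal income variable, slightly lower roll coverage among poorer households, samples of 200) are invented for illustration and are not the authors' design.

```python
# Minimal sketch: repeated sampling from an incomplete frame versus the
# full household listing, to gauge coverage bias in a mean estimate.
import numpy as np

rng = np.random.default_rng(42)
N = 2416                                  # households in the listing
income = rng.lognormal(mean=10, sigma=0.6, size=N)

# households missing from the voter roll, slightly more likely among the poor
p_missing = np.clip(0.09 + 0.05 * (income < np.quantile(income, 0.2)), 0, 1)
on_roll = rng.random(N) > p_missing

true_mean = income.mean()
est = [rng.choice(income[on_roll], size=200, replace=False).mean()
       for _ in range(5000)]
bias = np.mean(est) - true_mean
print(f"coverage: {on_roll.mean():.2%}, relative bias: {bias / true_mean:+.2%}")
```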
{"title":"Are Voter Rolls Suitable Sampling Frames for Household Surveys? Evidence from India","authors":"Ruchika Joshi, J. McManus, Karan Nagpal, Andrew Fraker","doi":"10.1177/1525822x221135369","DOIUrl":"https://doi.org/10.1177/1525822x221135369","url":null,"abstract":"We examine the use of publicly available voter rolls for household survey sampling as an alternative to household listings or field-based sampling methods. Using voter rolls for sampling can save most of the cost of constructing a sampling frame relative to a household listing, but there is limited evidence about their accuracy and completeness. We conducted a household listing in 13 polling stations in India comprising 2,416 households across four states and compared the listing to voter rolls for the same polling stations. We show that voter rolls include 91% of the households found in the household listing. We conduct simulations to show that sampling from voter rolls produces estimates of household-level economic variables with minimal bias. These results suggest that voter rolls may be suitable for constructing household sampling frames, particularly in rural India.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42389676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What about the Less IT Literate? A Comparison of Different Postal Recruitment Strategies to an Online Panel of the General Population
Pub Date: 2022-10-25 | DOI: 10.1177/1525822X221132940
Barbara Felderer, J. Herzing
Even though the proportion of individuals who are not equipped to participate in online surveys is constantly decreasing, many surveys face an under-representation of individuals who do not feel IT literate enough to participate. Using experimental data from a probability-based online panel, we study which recruitment survey mode strategy performs best in recruiting less IT-literate persons for an online panel. The sampled individuals received postal invitations to conduct the recruitment survey in a self-completion mode. We experimentally vary four recruitment survey mode strategies: one online mode strategy, two sequential mixed-mode strategies, and one concurrent mode strategy. We find the recruitment survey mode strategies to have a major effect on the sample composition of the recruitment survey, but the differences between the strategies vanish once respondents are asked to proceed with the panel online.
{"title":"What about the Less IT Literate? A Comparison of Different Postal Recruitment Strategies to an Online Panel of the General Population","authors":"Barbara Felderer, J. Herzing","doi":"10.1177/1525822X221132940","DOIUrl":"https://doi.org/10.1177/1525822X221132940","url":null,"abstract":"Even though the proportion of individuals who are not equipped to participate in online surveys is constantly decreasing, many surveys face an under-representation of individuals who do not feel IT literate enough to participate. Using experimental data from a probability-based online panel, we study which recruitment survey mode strategy performs best in recruiting less IT-literate persons for an online panel. The sampled individuals received postal invitations to conduct the recruitment survey in a self-completion mode. We experimentally vary four recruitment survey mode strategies: one online mode strategy, two sequential mixed-mode strategies, and one concurrent mode strategy. We find the recruitment survey mode strategies to have a major effect on the sample composition of the recruitment survey, but the differences between the strategies vanish once respondents are asked to proceed with the panel online.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49627250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
“Are You …”: An Examination of Incomplete Question Stems in Self-administered Surveys
Pub Date: 2022-10-18 | DOI: 10.1177/1525822x221134756
Nestor Hernandez, Kristen Olson, Jolene D Smyth
Questionnaire designers are encouraged to write questions as complete sentences. In self-administered surveys, incomplete question stems may reduce visual clutter but may also increase burden when respondents need to scan the response options to fully complete the question. We experimentally examine the effects of three categories of incomplete question stems (incomplete conversational, incomplete ordinal, and incomplete nominal questions) versus complete question stems on 53 items in a probability web-mail survey. We examine item nonresponse, response time, selection of the first and last response options, and response distributions. We find that incomplete question stems take slightly longer to answer and slightly reduce the selection of the last response option but have no effect on item nonresponse rates or selection of the first response option. We conclude that questionnaire designers should follow current best practices to write complete questions, but deviations from complete questions will likely have limited effects.
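As a rough illustration of the outcome measures compared across conditions (item nonresponse, response time, and first/last-option selection), here is a minimal sketch with toy response-level data; the column coding is an assumption about how such data might be stored.

```python
# Minimal sketch: per-condition outcome measures for a question-stem experiment.
import pandas as pd

# toy data: one row per respondent x item
df = pd.DataFrame({
    "condition": ["complete", "incomplete", "complete", "incomplete"],
    "answer":    [2, None, 5, 1],          # None = item nonresponse
    "n_options": [5, 5, 5, 5],
    "seconds":   [6.2, 8.1, 5.4, 7.9],
})

df["is_missing"] = df["answer"].isna()
df["is_first"] = df["answer"] == 1
df["is_last"] = df["answer"] == df["n_options"]

summary = df.groupby("condition").agg(
    item_nonresponse=("is_missing", "mean"),
    mean_seconds=("seconds", "mean"),
    first_option=("is_first", "mean"),
    last_option=("is_last", "mean"),
)
print(summary)
```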
{"title":"“Are You …”: An Examination of Incomplete Question Stems in Self-administered Surveys","authors":"Nestor Hernandez, Kristen Olson, Jolene D Smyth","doi":"10.1177/1525822x221134756","DOIUrl":"https://doi.org/10.1177/1525822x221134756","url":null,"abstract":"Questionnaire designers are encouraged to write questions as complete sentences. In self-administered surveys, incomplete question stems may reduce visual clutter but may also increase burden when respondents need to scan the response options to fully complete the question. We experimentally examine the effects of three categories of incomplete question stems (incomplete conversational, incomplete ordinal, and incomplete nominal questions) versus complete question stems on 53 items in a probability web-mail survey. We examine item nonresponse, response time, selection of the first and last response options, and response distributions. We find that incomplete question stems take slightly longer to answer and slightly reduce the selection of the last response option but have no effect on item nonresponse rates or selection of the first response option. We conclude that questionnaire designers should follow current best practices to write complete questions, but deviations from complete questions will likely have limited effects.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43988017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparing Readability Measures and Computer-assisted Question Evaluation Tools for Self-administered Survey Questions
Pub Date: 2022-10-14 | DOI: 10.1177/1525822x221124469
Rachel Stenger, Kristen Olson, Jolene D Smyth
Questionnaire designers use readability measures to ensure that questions can be understood by the target population. The most common measure is the Flesch-Kincaid Grade Level, but other formulas exist. This article compares six readability measures across 150 questions in a self-administered questionnaire, finding notable variation in calculated readability across measures. Some question formats, including those that are part of a battery, require important decisions that have large effects on the estimated readability of survey items. Other question evaluation tools, such as the Question Understanding Aid (QUAID) and the Survey Quality Predictor (SQP), may identify similar problems in questions, making readability measures less useful. We find little overlap between QUAID, SQP, and the readability measures, and little differentiation in the tools’ prediction of item nonresponse rates. Questionnaire designers are encouraged to use multiple question evaluation tools and to develop readability measures specifically for survey questions.
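To make concrete how different readability formulas can disagree on the same text, here is a minimal sketch that scores one example survey question with several formulas via the textstat package. Which six measures the article actually compares is not stated in the abstract, so the selection below is an assumption.

```python
# Minimal sketch: several readability formulas applied to one survey question.
import textstat

question = ("During the past 12 months, how many times have you seen a "
            "doctor or other health care professional about your own health?")

measures = {
    "Flesch-Kincaid grade": textstat.flesch_kincaid_grade,
    "Gunning fog": textstat.gunning_fog,
    "Coleman-Liau": textstat.coleman_liau_index,
    "Automated Readability Index": textstat.automated_readability_index,
    "Dale-Chall": textstat.dale_chall_readability_score,
}
for name, fn in measures.items():
    print(f"{name}: {fn(question):.1f}")
```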
{"title":"Comparing Readability Measures and Computer‐assisted Question Evaluation Tools for Self‐administered Survey Questions","authors":"Rachel Stenger, Kristen Olson, Jolene D Smyth","doi":"10.1177/1525822x221124469","DOIUrl":"https://doi.org/10.1177/1525822x221124469","url":null,"abstract":"Questionnaire designers use readability measures to ensure that questions can be understood by the target population. The most common measure is the Flesch-Kincaid Grade level, but other formulas exist. This article compares six different readability measures across 150 questions in a self-administered questionnaire, finding notable variation in calculated readability across measures. Some question formats, including those that are part of a battery, require important decisions that have large effects on the estimated readability of survey items. Other question evaluation tools, such as the Question Understanding Aid (QUAID) and the Survey Quality Predictor (SQP), may identify similar problems in questions, making readability measures less useful. We find little overlap between QUAID, SQP, and the readability measures, and little differentiation in the tools’ prediction of item nonresponse rates. Questionnaire designers are encouraged to use multiple question evaluation tools and develop readability measures specifically for survey questions.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42385232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Can Recall Data Be Trusted? Evaluating Reliability of Interview Data on Traditional Multilingualism in Highland Daghestan
Pub Date: 2022-10-01 | DOI: 10.1177/1525822X221115844
Michael Daniel, Alexey Koshevoy, I. Schurov, N. Dobrushina
In this article, we address the reliability of quantitative data on multilingualism of the past obtained as recall data. More specifically, we investigate whether interviewees’ assessments of the language repertoires of their late relatives (indirect data) provide results that are quantitatively similar to those obtained from people of the same age range themselves (direct data). The empirical data come from an ongoing field study of traditional multilingualism in Daghestan (Russia). We trained machine learning models to see whether they can detect differences between indirect and direct data. We conclude that our indirect quantitative data on L2s other than Russian are essentially similar to the direct data, while there may be a small but systematic underestimation when reporting others’ knowledge of Russian.
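The "train a model to tell the two sources apart" logic can be sketched as a cross-validated classification test: if a classifier cannot separate indirect from direct reports (AUC near 0.5), the two sources are treated as statistically similar. The features and data below are toy stand-ins; the article's actual predictors and models are not described in the abstract.

```python
# Minimal sketch: can a classifier distinguish indirect from direct reports?
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 400
# toy features describing a reported language repertoire
X = np.column_stack([
    rng.integers(1, 6, n),          # number of L2s reported
    rng.integers(0, 2, n),          # knowledge of Russian reported (0/1)
    rng.integers(1890, 1960, n),    # birth year of the person described
])
y = rng.integers(0, 2, n)           # 0 = direct report, 1 = indirect report

auc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                      X, y, cv=5, scoring="roc_auc")
print(f"mean AUC: {auc.mean():.2f} (near 0.5 means the reports are indistinguishable)")
```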
{"title":"Can Recall Data Be Trusted? Evaluating Reliability of Interview Data on Traditional Multilingualism in Highland Daghestan","authors":"Michael Daniel, Alexey Koshevoy, I. Schurov, N. Dobrushina","doi":"10.1177/1525822X221115844","DOIUrl":"https://doi.org/10.1177/1525822X221115844","url":null,"abstract":"In this article, we address the issue of reliability of quantitative data on multilingualism of the past obtained as recall data. More specifically, we investigate whether the interviewees’ assessments of the language repertoires of their late relatives (indirect data) provide results that are quantitatively similar to those obtained from the people of the same age range themselves (direct data). The empirical data we use come from an ongoing field study of traditional multilingualism in Daghestan (Russia). We trained machine learning models to see whether they can detect differences in indirect and direct data. We conclude that our indirect quantitative data on L2 other than Russian are essentially similar to direct data, while there may be a small but systematic underestimation when reporting others’ knowledge of Russian.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48799728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Issue of Noncompliance in Attention Check Questions: False Positives in Instructed Response Items
Pub Date: 2022-09-30 | DOI: 10.1177/1525822X221115830
Henning Silber, Joss Roßmann, Tobias Gummer
Attention checks detect inattentiveness by instructing respondents to perform a specific task. However, while respondents may correctly process the task, they may choose not to comply with the instructions. We investigated noncompliance in attention checks in two web surveys. In Study 1, we measured respondents’ attitudes toward attention checks and their self-reported compliance. In Study 2, we experimentally varied the reasons given to respondents for conducting the attention check. Our results showed that while most respondents understand why attention checks are conducted, a nonnegligible proportion evaluated them as controlling or annoying. Most respondents passed the attention check; however, among those who failed, 61% seem to have failed the task deliberately. These findings reinforce that noncompliance is a serious concern for attention check instruments. The results of our experiment showed that more respondents passed the attention check when a comprehensible reason was given.
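An estimate like "61% of failures were deliberate" presumably rests on crossing the check outcome with a compliance measure such as the self-report collected in Study 1. A minimal sketch of that kind of tabulation follows; the column names and counts are invented for illustration.

```python
# Minimal sketch: what share of attention-check failures look deliberate,
# judged against a self-reported compliance item (toy data).
import pandas as pd

df = pd.DataFrame({
    "passed_check":      [True] * 90 + [False] * 10,
    "said_would_comply": [True] * 88 + [False] * 2 + [False] * 6 + [True] * 4,
})

failed = df[~df["passed_check"]]
deliberate_share = (~failed["said_would_comply"]).mean()
print(f"failure rate: {(~df['passed_check']).mean():.0%}, "
      f"of which deliberate: {deliberate_share:.0%}")
```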
{"title":"The Issue of Noncompliance in Attention Check Questions: False Positives in Instructed Response Items","authors":"Henning Silber, Joss Roßmann, Tobias Gummer","doi":"10.1177/1525822X221115830","DOIUrl":"https://doi.org/10.1177/1525822X221115830","url":null,"abstract":"Attention checks detect inattentiveness by instructing respondents to perform a specific task. However, while respondents may correctly process the task, they may choose to not comply with the instructions. We investigated the issue of noncompliance in attention checks in two web surveys. In Study 1, we measured respondents’ attitudes toward attention checks and their self-reported compliance. In Study 2, we experimentally varied the reasons given to respondents for conducting the attention check. Our results showed that while most respondents understand why attention checks are conducted, a nonnegligible proportion of respondents evaluated them as controlling or annoying. Most respondents passed the attention check; however, among those who failed the test, 61% seem to have failed the task deliberately. These findings reinforce that noncompliance is a serious concern with attention check instruments. The results of our experiment showed that more respondents passed the attention check if a comprehensible reason was given.","PeriodicalId":48060,"journal":{"name":"Field Methods","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47430637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}