Trenton J Combs, Kyle W English, Barbara G Dodd, Hyeon-Ah Kang
Computerized adaptive testing (CAT) is an attractive alternative to traditional paper-and-pencil testing because it can provide accurate trait estimates while administering fewer items than a linear test form. A stopping rule is an important factor in determining an assessment's efficiency. This simulation compares three variable-length stopping rules (standard error (SE) of .3, minimum information (MI) of .7, and change in trait (CT) of .02), each with and without a maximum number of items (20) imposed. We use fixed-length criteria of 10 and 20 items as comparisons representing two versions of a linear assessment. The MI rules resulted in longer assessments with more biased trait estimates than the other rules. The CT rule resulted in more biased estimates at the higher end of the trait scale and larger standard errors. The SE rules performed well across the trait scale in terms of both measurement precision and efficiency.
{"title":"Computer Adaptive Test Stopping Rules Applied to The Flexilevel Shoulder Functioning Test.","authors":"Trenton J Combs, Kyle W English, Barbara G Dodd, Hyeon-Ah Kang","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Computerized adaptive testing (CAT) is an attractive alternative to traditional paper-and-pencil testing because it can provide accurate trait estimates while administering fewer items than a linear test form. A stopping rule is an important factor in determining an assessments efficiency. This simulation compares three variable-length stopping rules-standard error (SE) of .3, minimum information (MI) of .7 and change in trait (CT) of .02 - with and without a maximum number of items (20) imposed. We use fixed-length criteria of 10 and 20 items as a comparison for two versions of a linear assessment. The MI rules resulted in longer assessments with more biased trait estimates in comparison to other rules. The CT rule resulted in more biased estimates at the higher end of the trait scale and larger standard errors. The SE rules performed well across the trait scale in terms of both measurement precision and efficiency.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"20 1","pages":"66-78"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36986094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
There are many sources of evidence for a well-functioning rating scale. Two of these sources are analyses of measure-to-category and category-to-measure statistics. An absolute cut-value of 40% for these statistics has been suggested, but no evidence exists in the literature that this value is appropriate. Thus, this paper discusses the results of simulation studies that examined the expected values of these statistics in different contexts. The study concludes that a static cut-value of 40% should be replaced with expected values for measure-to-category and category-to-measure analyses.
{"title":"Expected Values for Category-To-Measure and Measure-To-Category Statistics: A Simulation Study.","authors":"Eivind Kaspersen","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>There are many sources of evidence for a well-functioning rating-scale. Two of these sources are analyses of measure-to-category and category-to-measure statistics. An absolute cut-value of 40% for these statistics has been suggested. However, no evidence exists in the literature that this value is appropriate. Thus, this paper discusses the results of simulation studies that examined the expected values in different contexts. The study concludes that a static cut-value of 40% should be replaced with expected values for measure-to-category and category-to-measure analyses.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"20 2","pages":"146-153"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37004326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Loevinger's specifications for a unidimensional test are discussed. The implications are reviewed using commentary from Guttman's and Rasch's specifications for specific objectivity. A large population is sampled to evaluate the implications of this approach in light of Wright's early presentation regarding data analysis. The results of this analysis show that the sample follows the specifications of Loevinger and those of Rasch for a unidimensional test.
{"title":"Loevinger on Unidimensional Tests with Reference to Guttman, Rasch, and Wright.","authors":"Mark H Stone, A Jackson Stenner","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Loevinger's specifications for a unidimensional test are discussed. The implications are reviewed using commentary from Guttman's and Rasch's specification for specific objectivity. A large population is sampled to evaluate the implications of this approach in light of Wright's early presentation regarding data analysis. The results of this analysis show the sample follows the specifications of Loevinger and those of Rasch for a unidimensional test.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"20 2","pages":"123-133"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37004324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
School leadership influences school conditions and organizational climate; these conditions in turn impact student outcomes. Accordingly, examining differences in principals' perceptions of leadership activities within and across countries may provide insight into achievement differences. The major purpose of this study was to explore differences in the relative difficulty of principals' leadership activities across four countries that reflect Asian and North American national contexts: (1) Hong Kong SAR, (2) Chinese Taipei, (3) the United States, and (4) Canada. We also sought to illustrate the use of Rasch measurement theory as a modern measurement approach to exploring the psychometric properties of a leadership survey, with a focus on differential item functioning. We applied a rating scale formulation of the Many-facet Rasch model to principals' responses to the Leadership Activities Scale in order to examine the degree to which the overall ordering of leadership activities was invariant across the four countries. Overall, the results suggested that there were significant differences in the difficulty ordering of leadership activities across countries, and that these differences were most pronounced between the two continents. Implications are discussed for research and practice.
{"title":"Cross-Cultural Comparisons of School Leadership using Rasch Measurement.","authors":"Sijia Zhang, Stefanie A Wind","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>School leadership influences school conditions and organizational climate; these conditions in turn impact student outcomes. Accordingly, examining differences in principals' perceptions of leadership activities within and across countries may provide insight into achievement differences. The major purpose of this study was to explore differences in the relative difficulty of principals' leadership activities across four countries that reflect Asian and North American national contexts: (1) Hong Kong SAR, (2) Chinese Taipei, (3) the United States, and (4) Canada. We also sought to illustrate the use of Rasch measurement theory as a modern measurement approach to exploring the psychometric properties of a leadership survey, with a focus on differential item functioning. We applied a rating scale formulation of the Many-facet Rasch model to principals' responses to the Leadership Activities Scale in order to examine the degree to which the overall ordering of leadership activities was invariant across the four countries. Overall, the results suggested that there were significant differences in the difficulty ordering of leadership activities across countries, and that these differences were most pronounced between the two continents. Implications are discussed for research and practice.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"20 2","pages":"167-183"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37004328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The use of multiple-choice items in assessments in the interest of increased efficiency brings associated challenges, notably the phenomenon of guessing. The purpose of this study is to use Rasch measurement theory to investigate the extent of guessing in a sample of responses taken from the Trends in International Mathematics and Science Study (TIMSS) 2015. A method of checking the extent of guessing in test data, a tailored analysis, is applied to the data from a sample of 2188 learners on a subset of items. The analysis confirms prior research showing that as the difficulty of an item increases, the probability of guessing also increases. An outcome of the tailored analysis is that items at the high-proficiency end of the continuum increase in difficulty. A consequence of item difficulties being estimated as relatively lower than they would be without guessing is that learner proficiency at the higher end is underestimated while the achievement of learners with lower proficiencies is overestimated. Hence, it is important that finer analysis of systemic data takes guessing into account, so that more nuanced information can be obtained to inform subsequent cycles of education planning.
{"title":"Lucky Guess? Applying Rasch Measurement Theory to Grade 5 South African Mathematics Achievement Data.","authors":"Sarah Bansilal, Caroline Long, Andrea Juan","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>The use of multiple-choice items in assessments in the interest of increased efficiency brings associated challenges, notably the phenomenon of guessing. The purpose of this study is to use Rasch measurement theory to investigate the extent of guessing in a sample of responses taken from the Trends in International Mathematics and Science Study (TIMSS) 2015. A method of checking the extent of the guessing in test data, a tailored analysis, is applied to the data from a sample of 2188 learners on a subset of items. The analysis confirms prior research that showed that as the difficulty of the item increases, the probability of guessing also increases. An outcome of the tailored analysis is that items at the high proficiency end of the continuum, increase in difficulty. A consequence of item difficulties being estimated as relatively lower than they would be without guessing, is that learner proficiency at the higher end is under estimated while the achievement of learners with lower proficiencies are over estimated. Hence, it is important that finer analysis of systemic data takes into account guessing, so that more nuanced information can be obtained to inform subsequent cycles of education planning.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"20 2","pages":"206-220"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37004330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tracy Kline, Corina Owens, Courtney Peasant Bonner, Tara Carney, Felicia A Browne, Wendee M Wechsberg
Hazardous drinking is a risk factor associated with sexual risk, gender-based violence, and HIV transmission in South Africa. Consequently, sound and appropriate measurement of drinking behavior is critical to determining what constitutes hazardous drinking. Many research studies use internal consistency estimates as the determining factor in psychometric assessment; however, deeper assessments are needed to best define a measurement tool. Rasch methodology was used to evaluate a shorter version of the Alcohol Use Disorders Identification Test, the AUDIT-C, in a sample of adolescent girls and young women (AGYW) who use alcohol and other drugs in South Africa (n = 100). Investigations of operational response range, item fit, sensitivity, and response option usage provide a richer picture of AUDIT-C functioning than internal consistency alone in women who are vulnerable to hazardous drinking and therefore at risk of HIV. Analyses indicate that the AUDIT-C does not adequately measure this specialized population, and that more validation is needed to determine whether the AUDIT-C should continue to be used in HIV prevention intervention studies focused on vulnerable adolescent girls and young women.
{"title":"Accuracy and Utility of the AUDIT-C with Adolescent Girls and Young Women (AGYW) Who Engage in HIV Risk Behaviors in South Africa.","authors":"Tracy Kline, Corina Owens, Courtney Peasant Bonner, Tara Carney, Felicia A Browne, Wendee M Wechsberg","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Hazardous drinking is a risk factor associated with sexual risk, gender-based violence, and HIV transmission in South Africa. Consequently, sound and appropriate measurement of drinking behavior is critical to determining what constitutes hazardous drinking. Many research studies use internal consistency estimates as the determining factor in psychometric assessment; however, deeper assessments are needed to best define a measurement tool. Rasch methodology was used to evaluate a shorter version of the Alcohol Use Disorders Identification Test, the AUDIT-C, in a sample of adolescent girls and young women (AGYW) who use alcohol and other drugs in South Africa (n =100). Investigations of operational response range, item fit, sensitivity, and response option usage provide a richer picture of AUDIT-C functioning than internal consistency alone in women who are vulnerable to hazardous drinking and therefore at risk of HIV. Analyses indicate that the AUDIT-C does not adequately measure this specialized population, and that more validation is needed to determine if the AUDIT-C should continue to be used in HIV prevention intervention studies focused on vulnerable adolescent girls and young women.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"20 1","pages":"112-122"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10961932/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36986025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Victoria T Tanaka, George Engelhard, Matthew P Rabbitt
The Household Food Security Survey Module (HFSSM) is a scale used by the U.S. Department of Agriculture to measure the severity of food insecurity experienced by U.S. households. In this study, measurement invariance of the HFSSM is examined across households based on participation in the Supplemental Nutrition Assistance Program (SNAP). Households with children who responded to the HFSSM in 2015 and 2016 (N = 3,931) are examined. The Rasch model is used to analyze differential item functioning (DIF) related to SNAP participation. Analyses suggest a small difference in reported food insecurity between SNAP and non-SNAP participants (27% versus 23%, respectively). However, the size and direction of the DIF mitigate its impact on overall estimates of household food insecurity. Person-fit indices suggest that the household aberrant response rate is 6.6%, and the proportion of misfitting households is comparable for SNAP (6.80%) and non-SNAP participants (6.30%). Implications for research and policy related to food insecurity are discussed.
{"title":"Examining Differential Item Functioning in the Household Food Insecurity Scale: Does Participation in SNAP Affect Measurement Invariance?","authors":"Victoria T Tanaka, George Engelhard, Matthew P Rabbitt","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>The Household Food Security Survey Module (HFSSM) is a scale used by the U.S. Department of Agriculture to measure the severity of food insecurity experienced by U.S. households. In this study, measurement invariance of the HFSSM is examined across households based on participation in the Supplemental Nutrition Assistance Program (SNAP). Households with children who responded to the HFSSM in 2015 and 2016 (N = 3,931) are examined. The Rasch model is used to analyze differential item functioning (DIF) related to SNAP participation. Analyses suggest a small difference in reported food insecurity between SNAP and non-SNAP participants (27% versus 23% respectively). However, the size and direction of the DIF mitigates the impact on overall estimates of household food insecurity. Person-fit indices suggest that the household aberrant response rate is 6.6% and the number of misfitting households is comparable for SNAP (6.80%) and non-SNAP participants (6.30%). Implications for research and policy related to food insecurity are discussed.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"20 1","pages":"100-111"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36986023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper investigates a strategy for accounting for correct guessing with the Rasch model that we call the Guessing Adjustment. This strategy involves identifying all person/item encounters where the probability of a correct response is below a specified threshold; these responses are converted to missing data, and the calibration is conducted a second time. This simulation study focuses on the effects of different probability thresholds across varying conditions of sample size, amount of correct guessing, and item difficulty. Bias, standard errors, and root mean squared errors (RMSE) were calculated within each condition. Larger probability thresholds were generally associated with reductions in bias and increases in standard errors. Across most conditions, the reduction in bias outweighed the loss of precision, as reflected by the RMSE. The Guessing Adjustment is an effective means of reducing the impact of correct guessing, and the choice of probability threshold matters.
{"title":"The Effects of Probability Threshold Choice on an Adjustment for Guessing using the Rasch Model.","authors":"Glenn Thomas Waterbury, Christine E DeMars","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>This paper investigates a strategy for accounting for correct guessing with the Rasch model that we entitled the Guessing Adjustment. This strategy involves the identification of all person/item encounters where the probability of a correct response is below a specified threshold. These responses are converted to missing data and the calibration is conducted a second time. This simulation study focuses on the effects of different probability thresholds across varying conditions of sample size, amount of correct guessing, and item difficulty. Biases, standard errors, and root mean squared errors were calculated within each condition. Larger probability thresholds were generally associated with reductions in bias and increases in standard errors. Across most conditions, the reduction in bias was more impactful than the decrease in precision, as reflected by the RMSE. The Guessing Adjustment is an effective means for reducing the impact of correct guessing and the choice of probability threshold matters.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"20 1","pages":"1-12"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36986090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
W Holmes Finch, Brian F French, Maria E Hernandez Finch
An important aspect of educational and psychological measurement and evaluation of individuals is the selection of scales with appropriate evidence of reliability and validity for inferences and uses of the scores for the population of interest. One aspect of validity is the degree to which a scale fairly assesses the construct(s) of interest for members of different subgroups within the population. Typically, this issue is addressed statistically through assessment of differential item functioning (DIF) of individual items, or differential bundle functioning (DBF) of sets of items. When selecting an assessment to use for a given application (e.g., measuring intelligence), or which form of an assessment to use in a given instance, researchers need to consider the extent to which the scales work with all members of the population. Little research has examined methods for comparing the amount or magnitude of DIF/DBF present in two assessments when deciding which assessment to use. The current simulation study examines 6 different statistics for this purpose. Results show that a method based on the random effects item response theory model may be optimal for instrument comparisons, particularly when the assessments being compared are not of the same length.
{"title":"Quantifying Item Invariance for the Selection of the Least Biased Assessment.","authors":"W Holmes Finch, Brian F French, Maria E Hernandez Finch","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>An important aspect of educational and psychological measurement and evaluation of individuals is the selection of scales with appropriate evidence of reliability and validity for inferences and uses of the scores for the population of interest. One aspect of validity is the degree to which a scale fairly assesses the construct(s) of interest for members of different subgroups within the population. Typically, this issue is addressed statistically through assessment of differential item functioning (DIF) of individual items, or differential bundle functioning (DBF) of sets of items. When selecting an assessment to use for a given application (e.g., measuring intelligence), or which form of an assessment to use in a given instance, researchers need to consider the extent to which the scales work with all members of the population. Little research has examined methods for comparing the amount or magnitude of DIF/DBF present in two assessments when deciding which assessment to use. The current simulation study examines 6 different statistics for this purpose. Results show that a method based on the random effects item response theory model may be optimal for instrument comparisons, particularly when the assessments being compared are not of the same length.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"20 1","pages":"13-26"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36986091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The fairness of raters in music performance assessment has become an important concern in the field of music. The assessment of students' music performance depends in a fundamental way on rater judgements, so the quality of those judgements is crucial to providing fair, meaningful and informative assessments, and many external factors can influence it. Previous research has examined the quality of rater judgements with measurement models such as generalizability theory, but these approaches, grounded in classical test theory and its extensions, have limitations. In this study, we use modern measurement theory (Rasch measurement theory) to examine the quality of rater judgements. The many-facets Rasch rating scale model is employed to investigate the extent of rater-invariant measurement in the context of music performance assessments related to university degrees in Malaysia (159 students rated by 24 raters). We examine the rating scale structure, the severity levels of the raters, and the judged difficulty of the items, as well as interaction effects across musical instrument subgroups (keyboard, strings, woodwinds, brass, percussion and vocal). The results suggest that the raters differed in severity and that their severity levels differed when rating different musical instrument subgroups. The implications for research, theory and practice in the assessment of music performance are discussed.
{"title":"Examining Rater Judgements in Music Performance Assessment using Many-Facets Rasch Rating Scale Measurement Model.","authors":"Pey Shin Ooi, George Engelhard","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>The fairness of raters in music performance assessment has become an important concern in the field of music. The assessment of students' music performance depends in a fundamental way on rater judgements. The quality of rater judgements is crucial to provide fair, meaningful and informative assessments of music performance. There are many external factors that can influence the quality of rater judgements. Previous research has used different measurement models to examine the quality of rater judgements (e.g., generalizability theory). There are limitations with the previous analysis methods that are based on classical test theory and its extensions. In this study, we use modern measurement theory (Rasch measurement theory) to examine the quality of rater judgements. The many-facets Rasch rating scale model is employed to investigate the extent of rater-invariant measurement in the context of music performance assessments related to university degrees in Malaysia (159 students rated by 24 raters). We examine the rating scale structure, the severity levels of the raters, and the judged difficulty of the items. We also examine the interaction effects across musical instrument subgroups (keyboard, strings, woodwinds, brass, percussions and vocal). The results suggest that there were differences in severity levels among the raters. The results of this study also suggest that raters had different severity levels when rating different musical instrument subgroups. The implications for research, theory and practice in the assessment of music performance are included in this paper.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"20 1","pages":"79-99"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36986022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}