Richard G Lambert, C Missy Moore, Christopher McCarthy, Bryndle L Bottoms
Research using the National Teacher and Principal Survey (NTPS) has consistently demonstrated that teachers' reported working conditions are related to both intentions to leave the profession and attrition (Tickle, Chang, and Kim, 2011). However, little research has evaluated teachers' appraisals of job-related demands and resources as an antecedent of job dissatisfaction. We tested for differential item functioning (DIF) using a partial credit model approach within a Rasch modeling context to examine whether elementary and secondary teachers with similar overall stress levels respond to the NTPS Demands and Resources items in similar ways. For the Demands items, seven items displayed negligible differences, four were intermediate, and three indicated large DIF contrasts. For the Resources items, ten displayed negligible differences, two were intermediate, and none indicated large DIF contrasts. These results indicate that elementary and secondary teachers exhibit different appraisal patterns, with implications for the development and use of survey data in public school settings in general, and for the use of NTPS data in particular.
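The exact cut points used to label DIF contrasts as negligible, intermediate, or large are not reported here. As a minimal sketch, assuming the commonly cited Winsteps-style thresholds of 0.43 and 0.64 logits and entirely hypothetical item difficulties, such a classification could be computed as follows:

```python
# Illustrative sketch, not the authors' code: classify DIF contrasts between two
# teacher groups using commonly cited Winsteps-style thresholds (an assumption;
# |contrast| < 0.43 logits negligible, 0.43-0.64 intermediate, >= 0.64 large).
# Item names and difficulty values are hypothetical.

def classify_dif(contrast_logits: float) -> str:
    """Classify a DIF contrast (difference in item difficulty, in logits)."""
    magnitude = abs(contrast_logits)
    if magnitude < 0.43:
        return "negligible"
    if magnitude < 0.64:
        return "intermediate"
    return "large"

# Hypothetical item difficulties (logits) estimated separately for each group.
elementary = {"workload": -0.10, "student_behavior": 0.55, "paperwork": 0.20}
secondary = {"workload": -0.05, "student_behavior": -0.25, "paperwork": 0.95}

for item in elementary:
    contrast = elementary[item] - secondary[item]
    print(f"{item}: contrast = {contrast:+.2f} logits -> {classify_dif(contrast)}")
```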
{"title":"Response Differences in Appraisals of Working Conditions among Elementary and High School Teachers.","authors":"Richard G Lambert, C Missy Moore, Christopher McCarthy, Bryndle L Bottoms","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Research using the National Teacher and Principal Survey (NTPS) has consistently demonstrated that teachers' reported working conditions are related to both intentions to leave the profession and attrition (Tickle, Chang, and Kim, 2011). However, limited research evaluates teacher appraisals of job-related demands and resources as an antecedent to job dissatisfaction. We tested for differential item functioning (DIF) using a partial credit model approach within a Rasch modeling context to examine whether elementary and secondary teachers with similar overall stress levels respond to the NTPS Demands and Resources items in similar ways. For the Demands items, seven of the items displayed differences that were negligible, four were intermediate, and three items indicated large DIF contrasts. For the Resources items, 10 items displayed differences that were negligible, two were intermediate, and zero items indicated large DIF contrasts. These results indicate elementary and secondary teachers exhibit different appraisal patterns, suggesting implications for the development and use of survey data in public school settings in general, and for the use of the NTPS data in particular.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"21 3","pages":"347-360"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38978110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nik Muhammad Hanis Nek Rakami, Nik Ahmad Hisham Ismail, Noor Lide Abu Kassim, Faizah Idrus
This paper describes the process of assessing the unidimensionality and validity of egalitarian education (EE) items based on the Rasch measurement model. Egalitarian education was measured with five self-developed Likert-scale EE items. Data were collected from 400 Malay teachers working in government schools across Peninsular Malaysia, and construct validity for the full set of EE items was established using Winsteps. Various Rasch measurement tools were used to examine the unidimensionality and validity of the EE items and their fit to the Rasch measurement model. The findings show that the unidimensionality and validity of the EE items can be established and that the items satisfy the requirements of the Rasch measurement model.
{"title":"Validation of Egalitarian Education Questionnaire using Rasch Measurement Model.","authors":"Nik Muhammad Hanis Nek Rakami, Nik Ahmad Hisham Ismail, Noor Lide Abu Kassim, Faizah Idrus","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>This paper describes the process of assessing the unidimensionality and validity of egalitarian education (EE) items based on the Rasch measurement model. Egalitarian education was measured by a self-developed 5 EE items of Likert-scale format. The process of assessing the validity of EE items involved a collection of data from 400 Malay teachers, who are teaching in government school around peninsular of Malaysia where the measurement of construct validity for the overall EE items were established using Winsteps. Various Rasch measurement tools were utilized to demonstrate the true unidimensionality and validity measure of the EE items and in meeting the needs of the Rasch measurement model. The findings show that the validity and unidimensionality of EE items can be truly established and can satisfy the characteristics of the Rasch measurement model.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"21 1","pages":"91-100"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37704086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In previous studies, researchers have focused on the development and interpretation of measurement tools related to self-efficacy. However, researchers have seldom investigated whether these instruments demonstrate acceptable psychometric properties, including similar item interpretations between subgroups of respondents. The purpose of this study was to explore the extent to which a self-efficacy measure has a consistent interpretation for two self-reported gender subgroups. The researchers utilized Rasch analysis to explore differences in item difficulty between the subgroups. Results suggested differences in item difficulty ordering for certain self-efficacy items. Implications for research and practice are discussed.
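The specific DIF statistic used is not stated in the abstract. One standard Rasch approach to comparing item difficulty across two subgroups, offered here only as a generic illustration rather than as the authors' procedure, is to calibrate each item separately in the two groups and form a standardized difference of the difficulty estimates:

t_i = \frac{\hat{d}_{i1} - \hat{d}_{i2}}{\sqrt{SE_{i1}^{2} + SE_{i2}^{2}}}

where \hat{d}_{ig} is the difficulty of item i estimated in group g and SE_{ig} is its standard error; large |t_i| values flag items whose difficulty ordering differs across the gender subgroups.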
{"title":"Exploring The Psychometric Properties of a Self-Efficacy Scale For High School Students.","authors":"Yuan Ge, Stefanie A Wind","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>In previous studies, researchers have focused on the development and interpretation of measurement tools related to self-efficacy. However, researchers have seldom investigated whether these instruments demonstrate acceptable psychometric properties, including similar item interpretations between subgroups of respondents. The purpose of this study was to explore the extent to which a self-efficacy measure has a consistent interpretation for two self-reported gender subgroups. The researchers utilized Rasch analysis to explore differences in item difficulty between the subgroups. Results suggested differences in item difficulty ordering for certain self-efficacy items. Implications for research and practice are discussed.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"21 3","pages":"313-328"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38978108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marius Lie Winger, Julie Gausen, Eivind Kaspersen, Trygve Solstd
In this study we investigate whether transformations between different representations of mathematical objects constitute a suitable framework for the assessment of students' comprehension of fraction addition. Participants (N = 164) solved a set of 20 fraction addition problems constructed on the basis of Duval's (2017) theory of the role of representational transformations in mathematical comprehension. Using Rasch measurement theory and principal component analysis, we found that the items could be separated into three levels of difficulty based on the transformation involved. This large-scale structure was consistent across gender and across subgroups of preservice teachers and middle-grade students. On a finer scale, the production of diagrammatic representations, and the type of diagrammatic representation involved, constitute potential subdimensions of the instrument. We conclude that transformations between representations can be productive for the assessment of fraction addition comprehension as long as care is taken to curtail the potential effects of multidimensionality.
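A common way to combine Rasch measurement with principal component analysis, though not necessarily the authors' exact procedure, is a PCA of standardized Rasch residuals to look for secondary dimensions. A minimal sketch with simulated, hypothetical data:

```python
# Minimal sketch of a PCA of standardized Rasch residuals, a common check for
# secondary dimensions; not the authors' code. Person abilities and item
# difficulties are simulated here and entirely hypothetical.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=164)                               # person abilities (logits)
delta = rng.normal(size=20)                                # item difficulties (logits)
p = 1 / (1 + np.exp(-(theta[:, None] - delta[None, :])))   # Rasch expected scores
x = rng.binomial(1, p)                                     # simulated 0/1 responses

z = (x - p) / np.sqrt(p * (1 - p))                         # standardized residuals
z = (z - z.mean(axis=0)) / z.std(axis=0)                   # column-standardize by item
_, s, vt = np.linalg.svd(z, full_matrices=False)
eigenvalues = s**2 / z.shape[0]                            # on the "number of items" scale
print("First residual contrast eigenvalue:", round(eigenvalues[0], 2))
print("Item loadings on the first contrast:", np.round(vt[0], 2))
```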
{"title":"Using the Rasch Model to Measure Comprehension of Fraction Addition.","authors":"Marius Lie Winger, Julie Gausen, Eivind Kaspersen, Trygve Solstd","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>In this study we investigate whether transformations between different representations of mathematical objects constitute a suitable framework for the assessment of students' comprehension of fraction addition. Participants (N = 164) solved a set of 20 fraction addition problems constructed on the basis of Duval's (2017) theory of the role of representational transformations in mathematical comprehension. Using Rasch measurement theory and principal component analysis, we found that the items could be separated into three levels of difficulty based on the transformation involved. This large-scale structure was consistent across gender and across subgroups of preservice teachers and middle-grade students. On a finer scale, the production of diagrammatic representations, and the type of diagrammatic representation involved, constitute potential subdimensions of the instrument. We conclude that transformations between representations can be productive for the assessment of fraction addition comprehension as long as care is taken to curtail the potential effects of multidimensionality.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"21 4","pages":"420-433"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38912687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The role of fit statistics in Rasch measurement is simple to state: applied researchers can only benefit from the desirable properties of the Rasch model when the data fit the model. The purpose of the current study was to assess the robustness of the Q-Index (Ostini and Nering, 2006) and to compare its performance with the currently popular fit statistics, MSQ Infit, MSQ Outfit, and standardized Infit and Outfit (ZSTDs), under varying conditions of test length, sample size, item difficulty distribution (normal and uniform), and dimensionality, using a Monte Carlo simulation. Type I and Type II error rates are also examined across fit indices. The study provides applied researchers with guidance on the robustness and appropriateness of the Q-Index as an alternative to the currently available item fit statistics. The Q-Index was slightly more sensitive to the levels of multidimensionality set in the study, while MSQ Infit, MSQ Outfit, and the standardized statistics (ZSTDs) failed to identify the multidimensional conditions. The Type I error rate of the Q-Index was lower than that of the other fit indices; however, the Type II error rate exceeded the anticipated beta = .20 across all fit indices.
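The Q-Index formula itself is not reproduced here, but the mean-square statistics it is benchmarked against can be computed directly from Rasch residuals. A minimal sketch for one dichotomous item, with hypothetical parameter values:

```python
# Sketch of MSQ outfit and infit for one item under the dichotomous Rasch model;
# not the study's simulation code. Ability and difficulty values are hypothetical.
import numpy as np

def item_fit(x: np.ndarray, theta: np.ndarray, delta: float):
    """Return (outfit_msq, infit_msq) for one item's 0/1 responses."""
    p = 1 / (1 + np.exp(-(theta - delta)))    # model-expected scores
    w = p * (1 - p)                           # model variances of the responses
    z2 = (x - p) ** 2 / w                     # squared standardized residuals
    outfit = z2.mean()                        # unweighted mean square
    infit = np.sum(w * z2) / np.sum(w)        # information-weighted mean square
    return outfit, infit

rng = np.random.default_rng(1)
theta = rng.normal(size=500)
x = rng.binomial(1, 1 / (1 + np.exp(-(theta - 0.3))))
print(item_fit(x, theta, delta=0.3))          # both close to 1.0 when the data fit
```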
{"title":"Evaluating the Impact of Multidimensionality on Type I and Type II Error Rates using the Q-Index Item Fit Statistic for the Rasch Model.","authors":"Samantha Estrada","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>To understand the role of fit statistics in Rasch measurement is simple: applied researchers can only benefit from the desirable properties of the Rasch model when the data fit the model. The purpose of the current study was to assess the Q-Index robustness (Ostini and Nering, 2006), and its performance was compared to the current popular fit statistics known as MSQ Infit, MSQ Outfit, and standardized Infit and Outfit (ZSTDs) under varying conditions of test length, sample size, item difficulty (normal and uniform), and dimensionality utilizing a Monte Carlo simulation. The Type I and Type II error rates are also examined across fit indices. This study provides applied researchers guidelines the robustness and appropriateness of the use of the Q-Index, which is an alternative to the currently available item fit statistics. The Q-Index was slightly more sensitive to the levels of multidimensionality set in the study while MSQ Infit, Outfit, and standardized Infit and Outfit (ZSTDs) failed to identify the multidimensional conditions. The Type I error rate of the Q-Index was lower than the rest of the fit indices; however, the Type II error rate was higher than the anticipated beta = .20 across all fit indices.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"21 4","pages":"496-514"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38912691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Most research on multistage testing (MST) uses simulated data. This study adds to the literature by using both operational test data and simulated data to compare two different MST designs with regard to proficiency estimation accuracy and module exposure rates and by investigating whether simulation studies and operational test studies yield similar results. Two MST designs (1-2 and 1-3-4 designs) from one state's sixth-grade summative mathematics assessment across two years were compared in this study. Both simulation and operational test studies demonstrate similar results: the two MST designs yield no significant performance differences with regard to estimation accuracy and module exposure. These results provide evidence that simulation studies can provide adequate results to inform decisions about MST designs.
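The two designs differ in how examinees are routed between stages; the paper's actual routing rules and module compositions are not reproduced in the abstract. A minimal sketch of the routing step in a 1-2 design, with a hypothetical cut score:

```python
# Minimal sketch of routing in a 1-2 multistage design: everyone takes a routing
# module, then is assigned an easier or harder second-stage module based on an
# interim score. The cut score and module labels are hypothetical; a 1-3-4 design
# adds a second routing decision with more module choices at each stage.

def route_1_2(routing_module_score: int, cut_score: int = 5) -> str:
    """Return the second-stage module assignment for a 1-2 MST design."""
    return "harder_module" if routing_module_score >= cut_score else "easier_module"

for score in (3, 5, 8):
    print(score, "->", route_1_2(score))
```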
{"title":"How Well Do Simulation Studies Inform Decisions About Multistage Testing?","authors":"Wenhao Wang, Jie Chen, Neal Kingston","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Most research on multistage testing (MST) uses simulated data. This study adds to the literature by using both operational test data and simulated data to compare two different MST designs with regard to proficiency estimation accuracy and module exposure rates and by investigating whether simulation studies and operational test studies yield similar results. Two MST designs (1-2 and 1-3-4 designs) from one state's sixth-grade summative mathematics assessment across two years were compared in this study. Both simulation and operational test studies demonstrate similar results: the two MST designs yield no significant performance differences with regard to estimation accuracy and module exposure. These results provide evidence that simulation studies can provide adequate results to inform decisions about MST designs.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"21 3","pages":"271-281"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38978105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rianne Janssen, Jorge Gonzalez, Ernesto San Martin
An examinee- and an item-centered procedure are proposed to set cut scores for counts data. Both procedures assume that the counts data are modelled according to the Rasch Poisson counts model (RPCM). The examinee-centered method is based on Longford's (1996) approach and links contrasting-groups judgements to the RPCM ability scale using a random logistic regression model. In the item-centered method, the judges are asked to describe the minimum performance level of the minimally competent student by giving the minimum number of correct responses (or, equivalently, the maximum number of admissible errors). On the basis of these judgements for each subtest, the position of the minimally competent student on the RPCM ability scale is estimated. Both procedures are illustrated with a standard-setting study on mental arithmetic for students at the end of primary education.
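For reference, a common parameterization of the Rasch Poisson counts model (the paper may use an equivalent multiplicative form) treats the count X_{vi} of person v on subtest i as Poisson distributed:

P(X_{vi} = x) = \frac{e^{-\lambda_{vi}} \lambda_{vi}^{x}}{x!}, \qquad \log \lambda_{vi} = \theta_v + \sigma_i \quad (\text{equivalently } \lambda_{vi} = \xi_v \varepsilon_i),

where \theta_v is the ability of student v and \sigma_i the easiness of subtest i. Under this link, the judged minimum performance on each subtest corresponds to a rate \lambda and hence to a position on the \theta scale, which is how the item-centered procedure locates the minimally competent student.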
{"title":"Standard-Setting Procedures for Counts Data.","authors":"Rianne Janssen, Jorge Gonzalez, Ernesto San Martin","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>An examinee- and an item-centered procedure are proposed to set cut scores for counts data. Both procedures assume that the counts data are modelled according to the Rasch Poisson counts model (RPCM). The examinee-centered method is based on Longford's (1996) approach and links contrasting-groups judgements to the RPCM ability scale using a random logistic regression model. In the item-centered method, the judges are asked to describe the minimum performance level of the minimally competent student by giving the minimum number of correct responses (or, equivalently, the maximum number of admissible errors). On the basis of these judgements for each subtest, the position of the minimally competent student on the RPCM ability scale is estimated. Both procedures are illustrated with a standard-setting study on mental arithmetic for students at the end of primary education.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"20 2","pages":"134-145"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37004325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The item difficulty and the discrimination index are often used to evaluate test items and diagnose possible issues in true score theory. The two statistics are more closely related than the literature suggests. In particular, the discrimination index can be determined mathematically from the item difficulty and the correlation between item performance and the total test score.
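The note's derivation is not reproduced in this abstract. As background only, the classical point-biserial item-total correlation already shows how the item difficulty p (the proportion answering correctly) enters such relations:

r_{pb} = \frac{\bar{X}_{+} - \bar{X}}{s_X} \sqrt{\frac{p}{1-p}},

where \bar{X}_{+} is the mean total score of examinees who answer the item correctly, \bar{X} the overall mean, and s_X the standard deviation of total scores; the note's result expresses the discrimination index as a function of p and this kind of item-total correlation.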
{"title":"A Note on the Relation between Item Difficulty and Discrimination Index.","authors":"Xiaofeng Steven Liu","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Item difficulty and discrimination index are often used to evaluate test items and diagnose possible issues in true score theory. The two statistics are more related than the literature suggests. In particular, the discrimination index can be mathematically determined by the item difficulty and the correlation between the item performance and the total test score.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"20 2","pages":"221-226"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37004331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This simulation study explores the effects of missing data mechanisms, proportions of missing data, sample size, and test length on the biases and standard errors of item parameters using the Rasch measurement model. When responses were missing completely at random (MCAR) or missing at random (MAR), item parameters were unbiased. When responses were missing not at random (MNAR), item parameters were severely biased, especially when the proportion of missing responses was high. Standard errors were primarily affected by sample size, with larger samples associated with smaller standard errors. Standard errors were inflated in MCAR and MAR conditions, while MNAR standard errors were similar to what they would have been, had the data been complete. This paper supports the conclusion that the Rasch model can handle varying amounts of missing data, provided that the missing responses are not MNAR.
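A minimal sketch of how the three mechanisms differ when missingness is imposed on simulated Rasch responses follows; this is not the study's simulation code, and the probabilities, covariate, and MNAR rule are hypothetical:

```python
# Sketch of MCAR, MAR, and MNAR missingness imposed on simulated Rasch responses.
# Not the study's design; all probabilities and the covariate are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n_persons, n_items = 1000, 30
theta = rng.normal(size=(n_persons, 1))                 # person abilities
delta = np.linspace(-2, 2, n_items)                     # item difficulties
p = 1 / (1 + np.exp(-(theta - delta)))
x = rng.binomial(1, p).astype(float)                    # complete 0/1 responses

# MCAR: every response has the same chance of being missing.
mcar = x.copy()
mcar[rng.random(x.shape) < 0.20] = np.nan

# MAR: missingness depends only on an observed covariate (a group flag here),
# not on the value of the response itself.
group = rng.binomial(1, 0.5, size=(n_persons, 1))
mar = x.copy()
mar[rng.random(x.shape) < np.where(group == 1, 0.30, 0.10)] = np.nan

# MNAR: missingness depends on the response that would have been observed,
# e.g., incorrect answers are more likely to be omitted.
mnar = x.copy()
mnar[(x == 0) & (rng.random(x.shape) < 0.40)] = np.nan
```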
{"title":"Missing Data and the Rasch Model: The Effects of Missing Data Mechanisms on Item Parameter Estimation.","authors":"Glenn Thomas Waterbury","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>This simulation study explores the effects of missing data mechanisms, proportions of missing data, sample size, and test length on the biases and standard errors of item parameters using the Rasch measurement model. When responses were missing completely at random (MCAR) or missing at random (MAR), item parameters were unbiased. When responses were missing not at random (MNAR), item parameters were severely biased, especially when the proportion of missing responses was high. Standard errors were primarily affected by sample size, with larger samples associated with smaller standard errors. Standard errors were inflated in MCAR and MAR conditions, while MNAR standard errors were similar to what they would have been, had the data been complete. This paper supports the conclusion that the Rasch model can handle varying amounts of missing data, provided that the missing responses are not MNAR.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"20 2","pages":"154-166"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37004327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The main objective of this study is to develop and validate a sources of mathematics self-efficacy (SMSE) scale for use in a polytechnic that adopts Problem Based Learning (PBL) as its main instructional strategy. Grounded in a socio-constructivist learning approach, PBL emphasizes collaborative and self-directed learning. A non-experimental cross-sectional design using a questionnaire was employed. The validation process was conducted over three phases. Phase 1 was the initial development stage, which generated a pool of items for the questionnaire. In Phase 2, a pilot test was performed to obtain qualitative and quantitative feedback and refine the initial pool of items. Finally, in Phase 3, the revised scale was administered to the main student cohort taking the mathematics module. The questionnaire data were subjected to empirical scrutiny, including exploratory factor analysis (EFA) and Rasch analysis. The participants were first-year polytechnic students taking a mathematics module. Twenty-nine participants took part in Phase 2 of the study, comprising 12 (41%) females and 17 (59%) males. In Phase 3, there were 161 participants, comprising 91 (57%) males and 70 (43%) females. The EFA yielded a three-factor solution comprising (a) personal experience, (b) vicarious experience, and (c) psychological states. The items in the SMSE scale demonstrated good internal consistency and reliability, and the Rasch rating scale analysis showed acceptable item and person fit statistics. The final 23-item SMSE scale was found to be invariant across gender. Overall, the study showed that the SMSE scale is a psychometrically reliable and valid instrument for measuring the sources of mathematics self-efficacy among students. PBL educators could use results from the SMSE scale to adopt appropriate interventions in curriculum design and delivery to boost students' self-efficacy and thereby improve their mathematics achievement.
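The Rasch rating scale analysis referred to here is conventionally Andrich's rating scale model. For reference, and without reproducing the paper's exact specification, the probability that person n responds in category x (x = 0, ..., M) of item i, with item difficulty \delta_i and category thresholds \tau_k shared across items, is

P(X_{ni} = x) = \frac{\exp \sum_{k=0}^{x} (\theta_n - \delta_i - \tau_k)}{\sum_{m=0}^{M} \exp \sum_{k=0}^{m} (\theta_n - \delta_i - \tau_k)}, \qquad \tau_0 \equiv 0.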
{"title":"Development of a Mathematics Self-Efficacy Scale: A Rasch Validation Study.","authors":"Song Boon Khing, Tay Eng Guan","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>The main objective of this study is to develop and validate a sources of mathematics self-efficacy (SMSE) scale to be used in a polytechnic adopting Problem Based Learning (PBL) as its main instructional strategy. Based on socio-constructivist learning approach, PBL emphasizes collaborative and self-directed learning. A non-experimental cross-sectional design using a questionnaire was employed in this study. The validation process was conducted over three phases. Phase 1 was the initial development stage to generate a pool of items in the questionnaire. In Phase 2, a pilot test was performed to obtain qualitative and quantitative feedback to refine the initial pool of items in the questionnaire. Finally, in Phase 3, the revised scale was administered to the main student cohort taking the mathematics module. The collected data from the questionnaire was subjected to empirical scrutiny, including exploratory factor analysis (EFA) and Rasch analysis. The participants for this study were first year polytechnic students taking a mathematics module. There were 29 participants taking part in Phase 2 of the study, comprising 12 (41%) females and 17 (59%) males. For Phase 3, there were 161 participants, comprising 91 (57%) males and 70 (43%) females. The EFA yielded a three-factor solution, comprising (a) personal experience; (b) vicarious experience; and (c) psychological states. The items in the SMSE scale demonstrated good internal consistency and reliability. The results from the Rasch rating scale analysis showed an acceptable item and person fit statistics. The final 23-item SMSE scale was found to be invariant across gender. Finally, the study showed that the SMSE scale is a psychometrically reliable and valid instrument to measure the sources of mathematics self-efficacy among students. PBL educators could use the results from the SMSE scale in the study to adopt appropriate interventions in curriculum design and delivery to boost self-efficacy of students and hence improve their mathematics achievement.</p>","PeriodicalId":73608,"journal":{"name":"Journal of applied measurement","volume":"20 2","pages":"184-205"},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37004329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}