Measuring Engagement in Early Education: Preliminary Evidence for the Behavioral Observation of Students in Schools–Early Education
Robin L. Hojnoski, Kristen Missall, Brenna K. Wood
Assessment for Effective Intervention | Pub Date: 2020-09-01 | DOI: 10.1177/1534508418820125
Engagement in early childhood is defined as a child’s level of participation with the environment. Engagement is an important construct in assessment and intervention of social and early learning competence given its link to school achievement. Few tools exist to assess engagement of young children in early education, and there is a need for a systematic direct observation tool that can be applied universally (i.e., with all young children) in these settings. This article describes preliminary evidence of validity and reliability for the Behavioral Observation of Students in Schools–Early Education (BOSS-EE). Specifically, the article describes results from a survey of experts and practitioners in which feedback was solicited on target behaviors and operational definitions, presents reliability data (i.e., interobserver and test–retest), examines correlations with a criterion measure, and describes variability across settings, sites, and methods (i.e., video vs. in vivo). Next steps in measurement development are discussed with attention to the challenges of producing a tool that can be used in a range of early education settings with diverse groups of young children.
Aspects of Technical Adequacy of an Early-Writing Measure for English Language Learners in Grades 1 to 3
R. A. Smith, E. Lembke
Assessment for Effective Intervention | Pub Date: 2020-08-17 | DOI: 10.1177/1534508420947157
This study examined the technical adequacy of Picture Word, a type of Writing Curriculum-Based Measurement, with 73 English learners (ELs) with beginning to intermediate English language proficiency in Grades 1, 2, and 3. The ELs in this study attended schools in one midwestern U.S. school district employing an English-only model of instruction and spoke a variety of native languages. ELs completed two forms of Picture Word in the fall, winter, and spring. The criterion measure, a common English language proficiency assessment, was administered in the winter. Results indicated that Picture Word was not appropriate for the first-grade EL participants but showed promise for second- and third-grade ELs.
Relative Contribution of Verbal Working Memory and Attention to Child Language
Jason C. Chow, E. Ekholm, Christine L. Bae
Assessment for Effective Intervention | Pub Date: 2020-08-04 | DOI: 10.1177/1534508420946361
It is common in intervention research to use measures of working memory either as an explanatory or a control variable. This study examines the contribution of cognitive abilities, including verbal working memory (WM) and attention, to language performance in first- and second-grade children. We assessed children (N = 414) on two forms of verbal WM, one measure of attention, and two standardized assessments of language. Scores from all three measures of cognitive abilities significantly predicted latent language (64% of variance explained). Both verbal WM measures were stronger predictors of the latent language variable than attention was. Exploratory analyses revealed differences in the contributions of cognitive variables to language subdomains. The findings deepen our understanding of the relative associations between verbal WM, attention, and language. We conclude that it is important to consider the language demands of tasks when making decisions about assessment of verbal WM, specifically in the context of intervention research in domains that require language.
A Comparison of Teacher and Student Ratings in a Self-Monitoring Intervention
A. Bruhn, Sheila Barron, Bailey A. Copeland, Sara Estrapala, A. Rila, J. Wehby
Assessment for Effective Intervention | Pub Date: 2020-08-04 | DOI: 10.1177/1534508420944231
Self-monitoring interventions for students with challenging behavior are often teacher-managed rather than self-managed. Teachers direct these interventions by completing parallel monitoring procedures, providing feedback, and delivering contingent reinforcement to students when they monitor accurately. However, the degree to which teachers and students agree in their assessments of student behavior within self-monitoring interventions is unknown. In this study of a self-monitoring intervention in which both teachers and students rated the students’ behavior, we analyzed 249 fixed-interval ratings of behavior from 19 teacher–student pairs to determine the relationship between ratings within and across pairs. We found a strong overall correlation (r = .91), although variability existed within individual pairs, and student ratings tended to be higher than teacher ratings. We discuss implications for practice, limitations, and future directions.
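The overall agreement statistic reported above is a Pearson correlation over paired ratings. A minimal sketch of that computation, using invented fixed-interval ratings on a hypothetical 0–5 scale (the data are not from the study; only the pattern of student ratings running slightly higher than teacher ratings mirrors the reported finding):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between paired ratings (e.g., teacher vs. student)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical fixed-interval ratings; student ratings run slightly higher
# than teacher ratings, as the study observed on average.
teacher = [4, 3, 5, 2, 4, 3, 5, 4]
student = [5, 3, 5, 3, 4, 4, 5, 4]
print(round(pearson_r(teacher, student), 2))
```

In the study itself the r = .91 figure was computed across all 249 paired ratings; a within-pair correlation would be run separately on each teacher–student pair's series.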
Using Interactive E-Book User Log Variables to Track Reading Processes and Predict Digital Learning Outcomes
Dandan Yang, Elham Zargar, A. Adams, Stephanie L. Day, C. Connor
Assessment for Effective Intervention | Pub Date: 2020-07-27 | DOI: 10.1177/1534508420941935
Stealth assessment has been successfully embedded in educational games to measure students’ learning in an unobtrusive and supportive way. This study explored the possibility of applying stealth assessment in a digital reading platform and sought to identify potential in-system indicators of students’ digital learning outcomes. Utilizing the user log data from third- to fifth-grade students (n = 573) who read an interactive Word Knowledge E-Book, we examined various user log variables and their associations with word knowledge and strategic reading outcomes. Descriptive analysis provided a depiction of the real-time reading processes and behaviors in which students engaged while digitally reading. Multiple regression analysis with classroom fixed effects demonstrated that user log variables relevant to answering questions and making decisions (i.e., percentage of embedded questions answered correctly, number of attempts to answer the questions, and making implausible decisions) were significantly associated with students’ word knowledge and strategic reading outcomes. Variables indicating reading time and frequency, however, were not significantly associated with these outcomes. This study highlights the potential of interactive e-books as another digital learning environment to establish stealth assessment, which may allow researchers and educators to track students’ reading processes and predict reading outcomes while supporting digital learning.
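The classroom fixed-effects regression described above can be illustrated with the within-classroom (demeaning) transformation, which is algebraically equivalent to including a dummy variable for each classroom. A minimal sketch with hypothetical data (the values, the two-classroom setup, and the single-predictor simplification are invented for illustration):

```python
def group_means(values, groups):
    """Mean of `values` within each classroom (group)."""
    totals, counts = {}, {}
    for v, g in zip(values, groups):
        totals[g] = totals.get(g, 0.0) + v
        counts[g] = counts.get(g, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

def within_transform(values, groups):
    """Subtract the classroom mean from each value: the 'within' transformation."""
    means = group_means(values, groups)
    return [v - means[g] for v, g in zip(values, groups)]

def ols_slope(x, y):
    """No-intercept OLS slope on demeaned data."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

# Hypothetical log data: share of embedded questions answered correctly and a
# word-knowledge outcome score, for students nested in two classrooms.
classroom = ["A", "A", "A", "B", "B", "B"]
pct_correct = [0.9, 0.7, 0.5, 0.8, 0.6, 0.4]
outcome = [88, 80, 72, 75, 67, 60]

slope = ols_slope(within_transform(pct_correct, classroom),
                  within_transform(outcome, classroom))
print(round(slope, 2))
```

Demeaning removes any stable between-classroom differences (teacher, curriculum), so the slope reflects only within-classroom variation in the log variable.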
Development and Validation of the Minnesota Inference Assessment
Panayiota Kendeou, Kristen L. McMaster, Reese Butterfuss, Jasmine Kim, S. Slater, O. Bulut
Assessment for Effective Intervention | Pub Date: 2020-07-14 | DOI: 10.1177/1534508420937781
The overall aim of the current investigation was to develop and validate the initial version of the Minnesota Inference Assessment (MIA). MIA is a web-based measure of inference processes in Grades K–2. MIA leverages the affordances of different media to evaluate inference processes in a nonreading context, using age-appropriate fiction and nonfiction videos coupled with questioning. We evaluated MIA’s technical adequacy in a proof-of-concept study. Taken together, the results support the interpretation that MIA shows promise as a valid and reliable measure of inferencing in a nonreading context for students in Grades K–2. Future directions involve further development of multiple, parallel forms that can be used for progress monitoring in K–2.
Addressing the Large Standard Error of Traditional CBM-R: Estimating the Conditional Standard Error of a Model-Based Estimate of CBM-R
Joseph F. T. Nese, Akihito Kamata
Assessment for Effective Intervention | Pub Date: 2020-07-02 | DOI: 10.1177/1534508420937801
Curriculum-based measurement of oral reading fluency (CBM-R) is widely used across the country as a quick measure of reading proficiency that also serves as a good predictor of comprehension and overall reading achievement, but it has several practical and technical inadequacies, including a large standard error of measurement (SEM). Reducing the SEM of CBM-R scores has positive implications for educators using these measures to screen or monitor student growth. The purpose of this study was to compare the SEM of traditional CBM-R words correct per minute (WCPM) fluency scores and the conditional SEM (CSEM) of model-based WCPM estimates, particularly for students with or at risk of poor reading outcomes. We found (a) the average CSEM for the model-based WCPM estimates was substantially smaller than the reported SEMs of traditional CBM-R systems, especially for scores at/below the 25th percentile, and (b) a large proportion (84%) of sample scores, and an even larger proportion of scores at/below the 25th percentile (about 99%), had a smaller CSEM than the reported SEMs of traditional CBM-R systems.
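For readers unfamiliar with the traditional SEM that the article contrasts against, classical test theory gives SEM = SD × √(1 − r_xx), a single value applied to every score regardless of where it falls in the distribution. A small sketch with illustrative numbers (the SD and reliability values are hypothetical, not taken from any specific CBM-R system):

```python
import math

def sem(sd, reliability):
    """Classical standard error of measurement: SEM = SD * sqrt(1 - r_xx)."""
    return sd * math.sqrt(1.0 - reliability)

# Illustrative numbers only (not from the article): a CBM-R system with a
# score SD of 40 WCPM and reliability of .94 yields an SEM near 9.8 WCPM,
# a wide band when typical weekly growth is on the order of 1-2 WCPM.
print(round(sem(40, 0.94), 1))
```

A conditional SEM, by contrast, varies with the score level, which is why the article can report smaller error specifically for scores at/below the 25th percentile.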
SSIS Performance Screening Guide as an Indicator of Behavior and Academics: A Meta-Analysis
J. Sullivan, Victor Villarreal, Evette Flores, A. Gomez, Blaire S. Warren
Assessment for Effective Intervention | Pub Date: 2020-06-03 | DOI: 10.1177/1534508420926584
This article documents the results of a meta-analysis of available correlational validity evidence for the Social Skills Improvement System Performance Screening Guide (SSIS-PSG), which is a brief teacher-completed rating scale designed to be used as part of universal screening procedures. Article inclusion criteria were (a) published in English in a peer-reviewed journal, (b) administration of the PSG, and (c) provided validity evidence representative of the relationship between PSG scores and scores on related variables. Ten studies yielding 147 correlation coefficients met criteria for inclusion. Data were extracted following established procedures in validity generalization and meta-analytic research. Extracted coefficients were of the expected direction and magnitude with theoretically aligned constructs, thereby providing evidence of convergent validity (e.g., PSG Math and Reading items were most strongly correlated with academic performance and academic behavior variables, with effect sizes ranging from .708 to .740; PSG Prosocial Behavior and Motivation to Learn items were most strongly correlated with broadband externalizing/internalizing problems, with effect sizes ranging from −.706 to −.717), although Prosocial Behavior and Motivation to Learn were not as effective at discriminating among divergent constructs. These results generally support the utility of the PSG in correlating with academic and social/behavioral outcomes in the schools.
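Meta-analyses of correlation coefficients like this one commonly pool effect sizes after a Fisher z-transformation, weighting by sample size, since correlations cannot simply be averaged on their own scale. A minimal sketch of that standard procedure (the example coefficients and sample sizes are invented, not drawn from the ten included studies):

```python
import math

def pooled_r(rs, ns):
    """Pool correlations via Fisher's z: transform each r with atanh, take the
    (n - 3)-weighted mean, and back-transform with tanh."""
    zbar = (sum((n - 3) * math.atanh(r) for r, n in zip(rs, ns))
            / sum(n - 3 for n in ns))
    return math.tanh(zbar)

# Invented example: three study-level correlations between a screener scale
# and an academic outcome, with their sample sizes.
rs = [0.70, 0.74, 0.71]
ns = [120, 85, 200]
print(round(pooled_r(rs, ns), 2))
```

The (n − 3) weight is the inverse variance of each z-transformed correlation, so larger studies pull the pooled estimate toward their value.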
Development and Psychometric Report of a Middle-School Mathematics Vocabulary Measure
Elizabeth M. Hughes, S. R. Powell, Joo-young Lee
Assessment for Effective Intervention | Pub Date: 2020-06-01 | DOI: 10.1177/1534508418820116
Proficiency with mathematics requires an understanding of mathematical language. Students are required to make sense of both spoken and written mathematical terms. An essential component of mathematical language involves the understanding of the vocabulary of mathematics, in which students connect vocabulary terms to mathematical concepts or procedures. In this brief psychometric report, we developed and tested a measure of mathematics vocabulary for students in the late middle-school grades (i.e., Grades 7 and 8) to determine the reliability of such a measure and to learn how students answer questions about mathematics vocabulary terms. The vocabulary terms on the measure were those determined as essential by middle-school teachers for success with middle-school mathematical language. Analysis indicates the measure demonstrated high reliability and validity. Student scores were widely distributed, and students, on average, answered only two-thirds of the vocabulary terms correctly.