Assessing the Effects of Instructional Set Size on Learning
Pub Date: 2020-12-01 | DOI: 10.1177/1534508418825304
Sarah J. Miller, G. Noell, Meredith T. Harris, Elise B. McIver, J. Alvarez
Research evaluating the variables that influence learning has devoted inadequate attention to the amount of new material presented at one time. The current study evaluated the impact of varying instructional set size (ISS) on the rate at which elementary school students mastered multiplication facts while receiving constant time delay (CTD) instruction. Instructional time was equated across conditions, and CTD instruction on multiplication facts was provided at ISSs of 5 and 20. ISS 20 was more efficient for two of the three participants, suggesting a much larger efficient ISS than previous research has indicated. The implications of this finding for the role of the instructional method in identifying an efficient ISS, as well as the study's connection to prior research in this area, are discussed.
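The abstract does not define its efficiency metric; with instructional time equated across conditions, one plausible operationalization is facts mastered per minute of instruction. A minimal sketch with invented numbers, not values from the study:

```python
# Hypothetical sketch: comparing instructional efficiency across set sizes.
# The counts and minutes below are invented for illustration; the abstract
# does not report them.

def efficiency(facts_mastered: int, minutes_of_instruction: float) -> float:
    """Facts mastered per minute of instruction."""
    return facts_mastered / minutes_of_instruction

minutes = 120.0           # instructional time, equated across conditions
iss_5_mastered = 18       # assumed mastery count for the ISS-5 condition
iss_20_mastered = 27      # assumed mastery count for the ISS-20 condition

for label, mastered in [("ISS 5", iss_5_mastered), ("ISS 20", iss_20_mastered)]:
    print(f"{label}: {efficiency(mastered, minutes):.3f} facts/minute")
```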
{"title":"Assessing the Effects of Instructional Set Size on Learning","authors":"Sarah J. Miller, G. Noell, Meredith T. Harris, Elise B. McIver, J. Alvarez","doi":"10.1177/1534508418825304","DOIUrl":"https://doi.org/10.1177/1534508418825304","url":null,"abstract":"Research evaluating the variables that influence learning has devoted inadequate attention to the influence of the amount of new material presented at one time. The current study evaluated the impact of varying instructional set size (ISS) on the rate at which elementary school students mastered multiplication facts while receiving constant time delay (CTD) instruction. Instructional time was equated across conditions. Instruction was provided for an ISS of five and 20 using CTD instruction for multiplication facts. ISS 20 was more efficient for two out of the three participants. This suggests a much larger efficient ISS than previous research. The implications of this finding for the importance of the instructional method in attempting to identify an efficient ISS, as well as the study’s connection to prior research, in this area are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508418825304","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48627415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reconsidering the Psychometrics of the GRS-S: Evidence for Parsimony in Measurement
Pub Date: 2020-12-01 | DOI: 10.1177/1534508418824743
Y. Petscher, S. Pfeiffer
The authors evaluated measurement-level, factor-level, item-level, and scale-level revisions to the Gifted Rating Scales–School Form (GRS-S). Measurement-level considerations tested the extent to which treating the Likert-type ratings as categorical or continuous produced different fit across unidimensional, correlated-trait, and bifactor latent factor structures. Item- and scale-level analyses demonstrated that the GRS-S could be reduced from a 72-item assessment on a 9-point rating scale to a 30-item assessment on a 3-point rating scale. Reliability of the reduced assessment was high (ω > .95). Receiver operating characteristic (ROC) curve comparisons between the original and reduced versions of the GRS-S showed that diagnostic accuracy (i.e., area under the curve) was comparable when considering cut scores of 120, 125, and 130 on the Full Scale and Verbal IQ of the WISC-IV (Wechsler Intelligence Scale for Children–Fourth Edition) and the composite score of the WIAT-III (Wechsler Individual Achievement Test–Third Edition). The findings suggest that a brief form of the GRS-S can be used as a universal or selective screener for giftedness without sacrificing key psychometric considerations.
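A minimal sketch of the kind of ROC comparison described above, using simulated data and scikit-learn's `roc_auc_score`; the study's raw ratings are not reproduced here, so every value is an assumption:

```python
# Hypothetical sketch of comparing AUC for an original vs. reduced scale.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 600
full_scale_iq = rng.normal(100, 15, n)       # simulated criterion IQ scores
gifted = (full_scale_iq >= 130).astype(int)  # dichotomize at one cut score

# Simulated teacher ratings tracking IQ with noise; the 72- and 30-item
# forms are modeled as the same signal with slightly different error.
grs_original = full_scale_iq + rng.normal(0, 10, n)
grs_reduced = full_scale_iq + rng.normal(0, 11, n)

for label, scores in [("72-item form", grs_original), ("30-item form", grs_reduced)]:
    print(f"{label}: AUC = {roc_auc_score(gifted, scores):.3f}")
```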
{"title":"Reconsidering the Psychometrics of the GRS-S: Evidence for Parsimony in Measurement","authors":"Y. Petscher, S. Pfeiffer","doi":"10.1177/1534508418824743","DOIUrl":"https://doi.org/10.1177/1534508418824743","url":null,"abstract":"The authors evaluated measurement-level, factor-level, item-level, and scale-level revisions to the Gifted Rating Scales–School Form (GRS-S). Measurement-level considerations tested the extent to which treating the Likert-type scale rating as categorical or continuous produced different fit across unidimensional, correlated trait, and bifactor latent factor structures. Item- and scale-level analyses demonstrated that the GRS-S could be reduced from a 72-item assessment on a 9-point rating scale down to a 30-item assessment on a 3-point rating scale. Reliability from the reduced assessment was high (ω > .95). Receiver operating characteristic (ROC) curve comparisons between the original and reduced versions of the GRS-S showed that diagnostic accuracy (i.e., area under the curve) of the scales was comparable when considering cut scores of 120, 125, and 130 on the WISC-IV Full Scale (Wechsler Intelligence Scale for Child–Fourth Edition) and verbal IQ and the WIAT-III (Wechsler Individual Achievement Test–Third Edition) composite score. The findings suggest that a brief form of the GRS-S can be used as a universal or selective screener for giftedness without sacrificing key psychometric considerations.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508418824743","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49554384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Creation and Validation of German Oral Reading Fluency Passages With Immersion Language Learners
Pub Date: 2020-11-12 | DOI: 10.1177/1534508420972460
Kirsten W. Newell, Jessie M. Kember, G. Zinn
This brief report summarizes the development and psychometric properties of German reading fluency passages, as compared with English reading fluency passages, for immersion language learners. Results indicated that scores from the German-language reading fluency passages were (a) somewhat less reliable than scores from English publisher-developed passages, (b) similarly valid as measures of reading when compared with scores from English reading fluency passages, and (c) more accurate than publisher-provided English cut scores, though not as accurate as locally developed English cut scores, in identifying at-risk readers.
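A minimal sketch of the classification-accuracy comparison behind point (c), computing sensitivity and specificity for a given cut score; all numbers are invented, not the study's:

```python
# Hypothetical sketch: cut-score classification accuracy for screening.
import numpy as np

def sens_spec(scores, cut, truly_at_risk):
    """Flag students scoring below the cut; compare with true risk status."""
    flagged = scores < cut
    sensitivity = (flagged & truly_at_risk).sum() / truly_at_risk.sum()
    specificity = (~flagged & ~truly_at_risk).sum() / (~truly_at_risk).sum()
    return sensitivity, specificity

rng = np.random.default_rng(1)
wcpm = rng.normal(100, 25, 400)                      # simulated words correct per minute
truly_at_risk = wcpm + rng.normal(0, 15, 400) < 75   # noisy criterion risk status

for label, cut in [("publisher cut", 90.0), ("local cut", 95.0)]:
    se, sp = sens_spec(wcpm, cut, truly_at_risk)
    print(f"{label}: sensitivity = {se:.2f}, specificity = {sp:.2f}")
```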
{"title":"Creation and Validation of German Oral Reading Fluency Passages With Immersion Language Learners","authors":"Kirsten W. Newell, Jessie M. Kember, G. Zinn","doi":"10.1177/1534508420972460","DOIUrl":"https://doi.org/10.1177/1534508420972460","url":null,"abstract":"This brief report summarizes the development and psychometric properties of German reading fluency passages as compared to English reading fluency passages for immersion language learners. Results indicated that scores from German language reading fluency passages alone were (a) somewhat less reliable than scores from English publisher-developed passages, (b) similarly valid measures of reading when compared to scores from English reading fluency passages, and (c) more accurate than publisher-provided English cut-scores but not as accurate as locally developed English cut-scores in the identification of at-risk readers.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508420972460","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43966783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Monster, P.I.: Validation Evidence for an Assessment of Adolescent Language That Assesses Vocabulary Knowledge, Morphological Knowledge, and Syntactical Awareness
Pub Date: 2020-10-28 | DOI: 10.1177/1534508420966383
Amanda P. Goodwin, Y. Petscher, Jamie L. Tock, Sara E. McFadden, D. Reynolds, Tess Lantos, Sara Jones
Assessment of language skills for upper elementary and middle schoolers is important given the strong link between language and reading comprehension. Yet few practical, reliable, valid, and instructionally informative assessments of language currently exist. This study provides validation evidence for Monster, P.I., a gamified, standardized, computer-adaptive assessment (CAT) of language for fifth- through eighth-grade students. Creating Monster, P.I. involved an assessment of the dimensionality of morphology and vocabulary and an assessment of syntax. Results using multiple-group item response theory (IRT) with 3,214 fifth through eighth graders indicated that morphology and vocabulary were best assessed via bifactor models and syntax unidimensionally. Monster, P.I. therefore provides scores on three component areas of language (multidimensional morphology and vocabulary, and unidimensional syntax) with the goal of informing instruction. Validity results also suggest that Monster, P.I. scores show moderate correlations with one another and with standardized reading vocabulary and reading comprehension assessments. Furthermore, hierarchical regression results suggest an important link between Monster, P.I. and standardized reading comprehension, with models explaining between 56% and 75% of the variance. These results indicate that Monster, P.I. can provide a meaningful picture of language performance that can guide instruction and, in turn, improve reading comprehension.
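A minimal sketch of the hierarchical-regression logic: how much variance in comprehension the language scores explain beyond a baseline predictor. Data, effect sizes, and variable names are simulated assumptions, not the study's:

```python
# Hypothetical sketch: hierarchical regression and the change in R-squared.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 1000
morphology = rng.normal(size=n)
vocabulary = 0.6 * morphology + rng.normal(scale=0.8, size=n)
syntax = 0.5 * morphology + rng.normal(scale=0.9, size=n)
grade = rng.integers(5, 9, size=n).astype(float)   # grades 5-8
comprehension = 0.5 * morphology + 0.4 * vocabulary + 0.3 * syntax + rng.normal(size=n)

# Step 1: baseline model (grade only); Step 2: add the three language scores.
X1 = grade.reshape(-1, 1)
X2 = np.column_stack([grade, morphology, vocabulary, syntax])

r2_step1 = LinearRegression().fit(X1, comprehension).score(X1, comprehension)
r2_step2 = LinearRegression().fit(X2, comprehension).score(X2, comprehension)
print(f"R2 step 1 = {r2_step1:.3f}, step 2 = {r2_step2:.3f}, delta = {r2_step2 - r2_step1:.3f}")
```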
{"title":"Monster, P.I.: Validation Evidence for an Assessment of Adolescent Language That Assesses Vocabulary Knowledge, Morphological Knowledge, and Syntactical Awareness","authors":"Amanda P. Goodwin, Y. Petscher, Jamie L. Tock, Sara E. McFadden, D. Reynolds, Tess Lantos, Sara Jones","doi":"10.1177/1534508420966383","DOIUrl":"https://doi.org/10.1177/1534508420966383","url":null,"abstract":"Assessment of language skills for upper elementary and middle schoolers is important due to the strong link between language and reading comprehension. Yet, currently few practical, reliable, valid, and instructionally informative assessments of language exist. This study provides validation evidence for Monster, P.I., which is a gamified, standardized, computer-adaptive assessment (CAT) of language for fifth to eighth grade students. Creating Monster, P.I. involved an assessment of the dimensionality of morphology and vocabulary and an assessment of syntax. Results using multiple-group item response theory (IRT) with 3,214 fifth through eighth graders indicated morphology and vocabulary were best assessed via bifactor models and syntax unidimensionally. Therefore, Monster, P.I. provides scores on three component areas of language (multidimensional morphology and vocabulary and unidimensional syntax) with the goal of informing instruction. Validity results also suggest that Monster, P.I. scores show moderate correlations with each other and with standardized reading vocabulary and reading comprehension assessments. Furthermore, hierarchical regression results suggest an important link between Monster, P.I. and standardized reading comprehension, explaining between 56% and 75% of the variance. Such results indicate that Monster, P.I. can provide meaningful understandings of language performance which can guide instruction that can impact reading comprehension performance.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508420966383","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43550815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examining Measurement Invariance of a School Climate Survey Across Race and Ethnicity
Pub Date: 2020-10-27 | DOI: 10.1177/1534508420966390
A. Whitehouse, Songtian Zeng, R. Troeger, A. Cook, T. Minami
Positive school climate is a key determinant of students’ psychological well-being, safety, and academic achievement. Although researchers have examined the validity of school climate measures, there is a dearth of research investigating differences in student perceptions of school climate across race and ethnicity. This study evaluated the factor stability of a widely used school climate survey using factor analyses and measurement invariance techniques across racial/ethnic groups. Results of a confirmatory factor analysis indicated a five-factor structure for the survey, and weak measurement invariance was found across Hispanic, Black, and White student groups (ΔCFI = .008). According to paired t tests, significant differences were found among racial/ethnic respondent groups on two factors: teacher and school effectiveness, and sense of belonging and care. Validated school climate measures that are culturally and racially responsive to students’ experiences allow for accurate interpretations of school climate data. Discussion and implications are provided.
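A minimal sketch of the ΔCFI decision rule commonly used in invariance testing (e.g., a |ΔCFI| ≤ .01 criterion, which the reported ΔCFI = .008 would satisfy); the fit values below are illustrative assumptions, not the study's estimates:

```python
# Hypothetical sketch: the delta-CFI criterion for measurement invariance.

def invariance_holds(cfi_less_constrained: float,
                     cfi_more_constrained: float,
                     threshold: float = 0.01) -> bool:
    """Invariance is retained if constraining parameters barely degrades CFI."""
    return (cfi_less_constrained - cfi_more_constrained) <= threshold

cfi_configural = 0.951  # assumed fit of the unconstrained (configural) model
cfi_metric = 0.943      # assumed fit with factor loadings constrained equal

print("metric (weak) invariance retained:",
      invariance_holds(cfi_configural, cfi_metric))  # delta CFI = .008 -> True
```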
{"title":"Examining Measurement Invariance of a School Climate Survey Across Race and Ethnicity","authors":"A. Whitehouse, Songtian Zeng, R. Troeger, A. Cook, T. Minami","doi":"10.1177/1534508420966390","DOIUrl":"https://doi.org/10.1177/1534508420966390","url":null,"abstract":"Positive school climate is a key determinant factor of students’ psychological well-being, safety, and academic achievement. Although researchers have examined the validity of school climate measures, there is a dearth of research investigating differences in student perceptions of school climate across race and ethnicity. This study evaluated the factor stability of a widely used school climate survey using factor analyses and measurement invariance techniques across racial/ethnic groups. Results of a confirmatory factor analysis indicated a five-factor structure for a school climate survey, and weak measurement invariance was found across Hispanic, Black, and White student groups (ΔCFI = .008). According to paired t tests, significant differences were found among racial/ethnic respondent groups across two factors: teacher and school effectiveness and sense of belonging and care. Validated school climate measures that are culturally and racially responsive to students’ experiences allow for accurate interpretations of school climate data. Discussion and implications are provided.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508420966390","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46809365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reading Tutors’ Interpretation of Curriculum-Based Measurement Graphs
Pub Date: 2020-10-14 | DOI: 10.1177/1534508420963193
Stephanie M. Hammerschmidt‐Snidarich, Dana L. Wagner, David C. Parker, Kyle Wagner
This study examined reading tutors’ interpretation of reading progress-monitoring graphs. A think-aloud procedure was used to evaluate tutors at two points in time: before and after a year of service as an AmeriCorps reading tutor. During their service, the reading tutors received extensive training and ongoing coaching. Descriptive results showed a positive change from the Time 1 think-aloud (pretest) to the Time 2 think-aloud (posttest), and the changes were statistically significant for the majority of the graph-interpretation variables measured. The data suggest that the right type of support and training may enable reading tutors to develop the skills to contribute to data-based decision-making within multitiered systems.
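A minimal sketch of a Time 1 versus Time 2 comparison using a paired t test via `scipy.stats.ttest_rel`; the scores are simulated, not the study's:

```python
# Hypothetical sketch: pre/post comparison of graph-interpretation scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_tutors = 30
time1 = rng.normal(10, 3, n_tutors)           # simulated pretest scores
time2 = time1 + rng.normal(2.5, 2, n_tutors)  # simulated posttest scores

result = stats.ttest_rel(time2, time1)
print(f"t({n_tutors - 1}) = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```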
{"title":"Reading Tutors’ Interpretation of Curriculum-Based Measurement Graphs","authors":"Stephanie M. Hammerschmidt‐Snidarich, Dana L. Wagner, David C. Parker, Kyle Wagner","doi":"10.1177/1534508420963193","DOIUrl":"https://doi.org/10.1177/1534508420963193","url":null,"abstract":"This study examined reading tutors’ interpretation of reading progress-monitoring graphs. A think-aloud procedure was used to evaluate tutors at two points in time, before and after a year of service as an AmeriCorps reading tutor. During their service, the reading tutors received extensive training and ongoing coaching. Descriptive results showed a positive change from the Time 1–think-aloud (pretest) to the Time 2–think aloud (posttest). There were statistically significant changes from Time 1 to Time 2 for the majority of graph interpretation variables measured. Data suggest that the right type of support and training may serve to enable reading tutors to develop the skills to contribute to data-based decision-making within multitiered systems.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508420963193","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48065657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anonymous Versus Self-Identified Response Formats for School Mental Health Screening
Pub Date: 2020-09-30 | DOI: 10.1177/1534508420959439
Rhea Wagle, E. Dowdy, M. Furlong, Karen Nylund-Gibson, D. Carter, T. Hinton
Schools are an essential setting for mental health supports and services for students. To support student well-being, schools engage in universal mental health screening to identify students in need of support and to provide surveillance data for district-wide or state-wide policy changes. Mental health data have been collected via anonymous and self-identified response formats depending on the purpose of the screening (i.e., surveillance and screening, respectively). However, most surveys do not provide psychometric evidence for use in both types of response formats. The current study examined whether responses to the Social Emotional Health Survey–Secondary (SEHS-S), a school mental health survey, are comparable when administered using anonymous versus self-identified response formats. The study participants were from one high school and completed the SEHS-S using self-identified (n = 1,700) and anonymous (n = 1,667) formats. Full measurement invariance was found across the two response formats. Both substantial and minimal latent mean differences were detected. Implications for the use and interpretation of the SEHS-S for schoolwide mental health are discussed.
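A minimal sketch of comparing mean scores across the two response formats with a standardized mean difference (Cohen's d); the group data are simulated, and the study's latent-mean analysis is more sophisticated than this observed-score version:

```python
# Hypothetical sketch: effect size for anonymous vs. self-identified formats.
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(4)
self_identified = rng.normal(50.0, 10.0, 1700)  # simulated SEHS-S totals
anonymous = rng.normal(48.5, 10.0, 1667)

print(f"d = {cohens_d(self_identified, anonymous):.2f}")
```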
{"title":"Anonymous Versus Self-Identified Response Formats for School Mental Health Screening","authors":"Rhea Wagle, E. Dowdy, M. Furlong, Karen Nylund-Gibson, D. Carter, T. Hinton","doi":"10.1177/1534508420959439","DOIUrl":"https://doi.org/10.1177/1534508420959439","url":null,"abstract":"Schools are an essential setting for mental health supports and services for students. To support student well-being, schools engage in universal mental health screening to identify students in need of support and to provide surveillance data for district-wide or state-wide policy changes. Mental health data have been collected via anonymous and self-identified response formats depending on the purpose of the screening (i.e., surveillance and screening, respectively). However, most surveys do not provide psychometric evidence for use in both types of response formats. The current study examined whether responses to the Social Emotional Health Survey–Secondary (SEHS-S), a school mental health survey, are comparable when administered using anonymous versus self-identified response formats. The study participants were from one high school and completed the SEHS-S using self-identified (n = 1,700) and anonymous (n = 1,667) formats. Full measurement invariance was found across the two response formats. Both substantial and minimal latent mean differences were detected. Implications for the use and interpretation of the SEHS-S for schoolwide mental health are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508420959439","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43243744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Interval Likelihood Ratios in Gated Screening: A Direct Replication Study
Pub Date: 2020-09-10 | DOI: 10.1177/1534508420953894
David A. Klingbeil, Ethan R. Van Norman, Peter M. Nelson
This direct replication study compared the use of dichotomized likelihood ratios and interval likelihood ratios, derived from a prior sample of students, for predicting math risk in middle school. Data from the prior year’s state test and the Measures of Academic Progress were analyzed to evaluate differences in the efficiency and diagnostic accuracy of gated screening decisions. Posttest probabilities were interpreted using a threshold decision-making model to classify student risk during screening. Using interval likelihood ratios led to fewer students requiring additional testing after the first gate. However, when interval likelihood ratios were used, three tests were required to classify sixth- and seventh-grade students as at risk or not at risk, whereas only two tests were needed when dichotomized likelihood ratios were used. Acceptable sensitivity and specificity estimates were obtained regardless of the type of likelihood ratio used to estimate posttest probabilities. When predicting academic risk, interval likelihood ratios may be best reserved for situations in which at least three successive tests are available for use in a gated screening model.
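The core arithmetic behind likelihood-ratio screening is Bayes' rule in odds form: posttest odds = pretest odds × LR. A minimal sketch with invented score bands, LR values, and base rate:

```python
# Hypothetical sketch: converting a pretest probability to a posttest
# probability via a likelihood ratio. All interval boundaries, LR values,
# and the base rate are illustrative assumptions.

def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Convert probability to odds, apply the LR, convert back."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# Interval LRs for score bands on a screener, versus a single dichotomized
# LR that would apply to everyone below one cut point.
interval_lrs = {"<180": 9.0, "180-199": 3.0, "200-219": 1.0, ">=220": 0.2}
base_rate = 0.20  # assumed local prevalence of math risk

for band, lr in interval_lrs.items():
    print(f"score {band}: posttest p = {posttest_probability(base_rate, lr):.2f}")
```

Interval LRs grade the evidence by how extreme the score is, which is why they can resolve more students at the first gate than a single pass/fail LR.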
{"title":"Using Interval Likelihood Ratios in Gated Screening: A Direct Replication Study","authors":"David A. Klingbeil, Ethan R. Van Norman, Peter M. Nelson","doi":"10.1177/1534508420953894","DOIUrl":"https://doi.org/10.1177/1534508420953894","url":null,"abstract":"This direct replication study compared the use of dichotomized likelihood ratios and interval likelihood ratios, derived using a prior sample of students, for predicting math risk in middle school. Data from the prior year state test and the Measures of Academic Progress were analyzed to evaluate differences in the efficiency and diagnostic accuracy of gated screening decisions. Post-test probabilities were interpreted using a threshold decision-making model to classify student risk during screening. Using interval likelihood ratios led to fewer students requiring additional testing after the first gate. But, when interval likelihood ratios were used, three tests were required to classify 6th- and 7th-grade students as at-risk or not at-risk. Only two tests were needed to classify students as at-risk or not at-risk when dichotomized likelihood ratios were used. Acceptable sensitivity and specificity estimates were obtained, regardless of the type of likelihood ratios used to estimate post-test probabilities. When predicting academic risk, interval likelihood ratios may be best reserved for situations where at least three successive tests are available to be used in a gated screening model.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508420953894","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49495290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Criterion Validity of the Early Communication Indicator for Infants and Toddlers
Pub Date: 2020-09-01 | DOI: 10.1177/1534508418824154
C. Greenwood, J. Buzhardt, D. Walker, Fan Jia, J. Carta
The Early Communication Indicator (ECI) is a progress-monitoring measure designed to support the intervention decisions of home visitors and early educators who serve infants and toddlers. The present study sought to add to the criterion validity evidence for the ECI in a large sample of children, using measures of language and preliteracy not previously investigated. Early Head Start service providers administered and scored ECIs quarterly for the infants and toddlers in their caseloads as part of standard services. In addition, a battery of language and early literacy criterion tests was administered by researchers when children were 12, 24, 36, and 48 months of age. Analyses of these longitudinal data then examined concurrent and predictive correlational patterns. Results indicated that children grew in communicative proficiency with age, and weak to moderately strong patterns of relationship emerged that differed by ECI scale, age, and criterion measure. The strongest positive relationships were between the Single Words and Multiple Words scales and the criterion measures at older ages, whereas Gestures and Vocalizations showed a pattern of negative relationships with the criterion measures. Implications for research and practice are discussed.
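A minimal sketch of the concurrent/predictive correlation analysis, correlating simulated ECI scale scores with a later criterion measure; every value is invented:

```python
# Hypothetical sketch: ECI scales vs. a later language criterion.
import numpy as np

rng = np.random.default_rng(5)
n = 500
multiple_words = rng.normal(size=n)                    # ECI scale at 24 months
gestures = -0.3 * multiple_words + rng.normal(size=n)  # earlier-developing scale
criterion_36mo = 0.6 * multiple_words + rng.normal(scale=0.8, size=n)

for label, scale in [("Multiple Words", multiple_words), ("Gestures", gestures)]:
    r = np.corrcoef(scale, criterion_36mo)[0, 1]
    print(f"{label} vs. 36-month criterion: r = {r:.2f}")
```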
Preliminary Validation of the SCM in a Sample of Early Adolescent Public School Children
Pub Date: 2020-09-01 | DOI: 10.1177/1534508418815751
S. Daily, K. Zullig, E. M. Myers, Megan L. Smith, A. Kristjansson, M. J. Mann
The School Climate Measure (SCM) has demonstrated robust psychometric properties in regionally diverse samples of high school–aged adolescents but remains untested among early adolescents. Confirmatory factor analysis was used to establish construct validity and measurement indices of the SCM in a sample of early adolescents from public schools located in Central Appalachia (n = 1,128). In addition, known-groups validity analyses compared each SCM domain against self-reported academic achievement and school connection. Analyses confirmed that all 10 SCM domains fit the data well, with strong internal consistency and factor loadings. Known-groups analyses suggest that students who reported higher academic achievement and school connection also reported more positive perceptions of school climate. These findings extend the use of the SCM to early adolescents and may support school-based policy.
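A minimal sketch of a known-groups check: do students reporting higher achievement also report higher climate scores? Group sizes, means, and the scale range are assumptions, not the study's figures:

```python
# Hypothetical sketch: known-groups validity via an independent-samples t test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
high_achievers = rng.normal(4.1, 0.6, 500)    # simulated SCM domain means (1-5 scale)
lower_achievers = rng.normal(3.8, 0.6, 628)

result = stats.ttest_ind(high_achievers, lower_achievers)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```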
{"title":"Preliminary Validation of the SCM in a Sample of Early Adolescent Public School Children","authors":"S. Daily, K. Zullig, E. M. Myers, Megan L. Smith, A. Kristjansson, M. J. Mann","doi":"10.1177/1534508418815751","DOIUrl":"https://doi.org/10.1177/1534508418815751","url":null,"abstract":"The school climate measure (SCM) has demonstrated robust psychometrics in regionally diverse samples of high school–aged adolescents, but remains untested among early adolescents. Confirmatory factor analysis was used to establish construct validity and measurement indices of the SCM using a sample of early adolescents from public schools located in Central Appalachia (n = 1,128). In addition, known-groups validity analyzed each SCM domain against self-reported academic achievement and school connection. Analyses confirmed all 10 SCM domains fit the data well with strong internal consistency and factor loadings. Known-groups analyses suggest students who reported higher academic achievement and school connection demonstrated higher perceptions of school climate. Findings provide evidence that extends the use of the SCM to early adolescents and may support school-based policy.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508418815751","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"65474632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}