Psychometric Properties of a Norwegian Version of the Social Emotional Assets and Resilience Scales–Child–Short Form
Pub Date: 2021-11-08 | DOI: 10.1177/15345084211055473 | Assessment for Effective Intervention, 47(1), 179–184
B. Strømgren, K. Couto
Norwegian schools are obliged to develop students’ social competences. Programs in use include School-Wide Positive Behavioral Interventions and Supports (PBIS) and classroom-based programs that aim to teach students social and emotional learning (SEL) skills in a broad sense. Several rating scales have been used to assess the effect of SEL programs on SEL skills. We explored the Norwegian version of the 12-item Social Emotional Assets and Resilience Scales–Child–Short Form (SEARS-C-SF). An exploratory factor analysis (EFA) suggested a one-factor solution, which was verified by a confirmatory factor analysis (CFA). The scale reliability (λ2 = .84), means and standard deviations, and tier levels were compared with those of the original short form. Finally, the concurrent, discriminant, and convergent validity of the SEARS-C-SF was examined against different Strengths and Difficulties Questionnaire (SDQ) subscales.
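The λ2 statistic reported above is Guttman's lambda-2, a lower-bound reliability estimate computed from the item covariance matrix. As a point of reference, here is a minimal sketch of that computation in Python; the function name and the simulated 12-item data are illustrative only, not the SEARS-C-SF data:

```python
import numpy as np

def guttman_lambda2(items: np.ndarray) -> float:
    """Guttman's lambda-2 for an (n_respondents, n_items) score matrix."""
    cov = np.cov(items, rowvar=False)        # item covariance matrix
    n = cov.shape[0]                         # number of items
    total_var = cov.sum()                    # variance of the sum score
    off_diag = cov - np.diag(np.diag(cov))   # off-diagonal covariances only
    lambda1 = 1.0 - np.trace(cov) / total_var
    return lambda1 + np.sqrt(n / (n - 1) * (off_diag ** 2).sum()) / total_var

# Simulated 12-item scale driven by one latent factor (values illustrative only)
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))
items = latent + rng.normal(scale=1.0, size=(500, 12))
print(round(guttman_lambda2(items), 2))
```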
{"title":"Psychometric Properties of a Norwegian Version of the Social Emotional Assets and Resilience Scales–Child–Short Form","authors":"B. Strømgren, K. Couto","doi":"10.1177/15345084211055473","DOIUrl":"https://doi.org/10.1177/15345084211055473","url":null,"abstract":"Norwegian schools are obliged to develop students’ social competences. Programs used are School-Wide Positive Behavioral Interventions and Supports (PBIS) or classroom-based ones that aim to teach students social and emotional learning (SEL) skills in a broad sense. Some rating scales have been used to assess the effect of SEL programs on SEL skills. We explored the Norwegian version of the 12-item Social Emotional Assets and Resilience Scales–Child–Short Form (SEARS-C-SF). An exploratory factor analysis (EFA) was performed, proposing a one-factor solution that was verified by a confirmatory factor analysis (CFA). The scale reliability of .84 (λ2), means and standard deviations, as well as tier levels were compared with the original short form. Finally, concurrent, discriminant, and convergent validities for the SEARS-C-SF were compared with different Strengths and Difficulties Questionnaire (SDQ) subscales.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"47 1","pages":"179 - 184"},"PeriodicalIF":1.3,"publicationDate":"2021-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41735587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Comparison of Priors When Using Bayesian Regression to Estimate Oral Reading Fluency Slopes
Pub Date: 2021-08-30 | DOI: 10.1177/15345084211040219 | Assessment for Effective Intervention, 47(1), 234–244
Benjamin G. Solomon, O. Forsberg, Monelle Thomas, Brittney Penna, Katherine M. Weisheit
Bayesian regression has emerged as a viable alternative for the estimation of curriculum-based measurement (CBM) growth slopes. Preliminary findings suggest such methods may yield improved efficiency relative to other linear estimators and can be embedded into data management programs for high-frequency use. However, additional research is needed, as Bayesian estimators require multiple specifications of the prior distributions. The current study evaluated the accuracy of several combinations of prior values, including three distributions of the residuals, two values of the expected growth rate, and three possible values for the precision of slope, when using Bayesian simple linear regression to estimate fluency growth slopes for reading CBM. We also included traditional ordinary least squares (OLS) regression as a baseline contrast. Findings suggest that the prior specification for the residual distribution had, on average, a trivial effect on the accuracy of the slope. However, specifications for growth rate and precision of slope were influential, and virtually all variants of Bayesian regression evaluated were superior to OLS. Converging evidence from both simulated and observed data now suggests that Bayesian methods outperform OLS for estimating CBM growth slopes and should be strongly considered in research and practice.
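For readers unfamiliar with the setup, the sketch below illustrates the general idea under simplifying assumptions the study did not necessarily use: a conjugate normal prior on intercept and slope with a known residual variance. The weekly scores, prior means, and precision values are hypothetical:

```python
import numpy as np

# Hypothetical weekly oral reading fluency scores (words correct per minute)
weeks = np.arange(10.0)
wcpm = np.array([52, 55, 53, 58, 60, 59, 63, 64, 66, 68], dtype=float)
X = np.column_stack([np.ones_like(weeks), weeks])  # intercept + week

# Prior: modest expected growth (~1.5 wcpm/week) with a chosen slope precision
prior_mean = np.array([50.0, 1.5])
prior_cov = np.diag([100.0 ** 2, 0.5 ** 2])
sigma2 = 8.0 ** 2                                  # assumed residual variance

# Conjugate posterior for a Gaussian likelihood with known residual variance
prior_prec = np.linalg.inv(prior_cov)
post_cov = np.linalg.inv(X.T @ X / sigma2 + prior_prec)
post_mean = post_cov @ (X.T @ wcpm / sigma2 + prior_prec @ prior_mean)

ols = np.linalg.lstsq(X, wcpm, rcond=None)[0]
print(f"OLS slope {ols[1]:.2f} vs posterior slope {post_mean[1]:.2f}")
```

The prior pulls the slope estimate toward the expected growth rate, which is where the choice of expected growth and slope precision exerts the influence the abstract describes.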
{"title":"A Comparison of Priors When Using Bayesian Regression to Estimate Oral Reading Fluency Slopes","authors":"Benjamin G. Solomon, O. Forsberg, Monelle Thomas, Brittney Penna, Katherine M. Weisheit","doi":"10.1177/15345084211040219","DOIUrl":"https://doi.org/10.1177/15345084211040219","url":null,"abstract":"Bayesian regression has emerged as a viable alternative for the estimation of curriculum-based measurement (CBM) growth slopes. Preliminary findings suggest such methods may yield improved efficiency relative to other linear estimators and can be embedded into data management programs for high-frequency use. However, additional research is needed, as Bayesian estimators require multiple specifications of the prior distributions. The current study evaluates the accuracy of several combinations of prior values, including three distributions of the residuals, two values of the expected growth rate, and three possible values for the precision of slope when using Bayesian simple linear regression to estimate fluency growth slopes for reading CBM. We also included traditional ordinary least squares (OLS) as a baseline contrast. Findings suggest that the prior specification for the residual distribution had, on average, a trivial effect on the accuracy of the slope. However, specifications for growth rate and precision of slope were influential, and virtually all variants of Bayesian regression evaluated were superior to OLS. Converging evidence from both simulated and observed data now suggests Bayesian methods outperform OLS for estimating CBM growth slopes and should be strongly considered in research and practice.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"47 1","pages":"234 - 244"},"PeriodicalIF":1.3,"publicationDate":"2021-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46204797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Precision of Single-Skill Mathematics CBM: Group Versus Individual Administration
Pub Date: 2021-07-29 | DOI: 10.1177/15345084211035055 | Assessment for Effective Intervention, 47(1), 170–178
Jillian M. Dawes, Benjamin G. Solomon, Daniel F. McCleary, Cutler Ruby, Brian C. Poncy
Research examining the precision of single-skill mathematics (SSM) curriculum-based measurements (CBMs) for progress monitoring is limited. Given the variance in administration conditions observed across current practice and research, we examined potential differences in student responding and precision of slope when SSM-CBMs were administered individually versus in group (classroom) conditions. No differences in student performance or measure precision were observed between conditions, indicating flexibility in the practical and research use of SSM-CBMs across administration conditions. In addition, the findings contribute to the literature on the stability of SSM-CBM slopes of progress when used for instructional decision-making. Implications for the administration and interpretation of SSM-CBMs in practice are discussed.
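One common way to quantify the "precision of slope" discussed here is the standard error of the OLS growth estimate from a short progress-monitoring series. A minimal sketch, with a hypothetical digits-correct series standing in for real SSM-CBM data:

```python
import numpy as np

def slope_and_se(weeks: np.ndarray, scores: np.ndarray):
    """OLS growth slope and its standard error for one student's CBM series."""
    X = np.column_stack([np.ones_like(weeks), weeks])
    beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
    resid = scores - X @ beta
    s2 = resid @ resid / (len(scores) - 2)   # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)        # covariance of the estimates
    return beta[1], float(np.sqrt(cov[1, 1]))

weeks = np.arange(8.0)
digits_correct = np.array([12, 14, 13, 17, 18, 17, 20, 22], dtype=float)
slope, se = slope_and_se(weeks, digits_correct)
print(f"slope = {slope:.2f} digits/week, SE = {se:.2f}")
```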
{"title":"Precision of Single-Skill Mathematics CBM: Group Versus Individual Administration","authors":"Jillian M. Dawes, Benjamin G. Solomon, Daniel F. McCleary, Cutler Ruby, Brian C. Poncy","doi":"10.1177/15345084211035055","DOIUrl":"https://doi.org/10.1177/15345084211035055","url":null,"abstract":"The current availability of research examining the precision of single-skill mathematics (SSM) curriculum-based measurements (CBMs) for progress monitoring is limited. Given the observed variance in administration conditions across current practice and research use, we examined potential differences between student responding and precision of slope when SSM-CBMs were administered individually and in group (classroom) conditions. No differences in student performance or measure precision were observed between conditions, indicating flexibility in the practical and research use of SSM-CBMs across administration conditions. In addition, findings contributed to the literature examining the stability of SSM-CBMs slopes of progress when used for instructional decision-making. Implications for the administration and interpretation of SSM-CBMs in practice are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"47 1","pages":"170 - 178"},"PeriodicalIF":1.3,"publicationDate":"2021-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/15345084211035055","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42666525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Scales for Assessing Emotional Disturbance–Third Edition: Reliability and Validity of the Screener
Pub Date: 2021-07-14 | DOI: 10.1177/15345084211030840 | Assessment for Effective Intervention, 47(1), 137–146
Jacqueline Huscroft-D’Angelo, Jessica Wery, Jodie D. Martin-Gutel, Corey D. Pierce, Kara Loftin
The Scales for Assessing Emotional Disturbance Screener–Third Edition (SAED-3) is a standardized, norm-referenced measure designed to identify school-age students at risk for emotional and behavioral problems. Four studies are reported to address the psychometric status of the SAED-3 Screener. Study 1 examined the internal consistency of the Screener using a sample of 1,430 students. Study 2 investigated the interrater reliability of the Screener results across 123 pairs of teachers who had worked with the student for at least 2 months. Study 3 assessed the extent to which results from the Screener are consistent over time by examining test–retest reliability. Study 4 examined convergent validity by comparing the Screener with the Strengths and Difficulties Questionnaire (SDQ). Across all studies, samples were drawn from populations of students included in the nationally representative normative sample. The averaged coefficient alpha for the Screener was .88, the interrater reliability coefficient for the composite was .83, and the test–retest reliability of the composite was .83. Correlations with the SDQ subscales ranged from .74 to .99, and the correlation of the Screener with the SDQ composite was .99. Limitations and implications for use of the Screener are discussed.
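The internal-consistency statistic reported here is coefficient alpha (Cronbach's alpha). A minimal sketch of the computation on a simulated item matrix (the item count and data are hypothetical, not the SAED-3 data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents, n_items) score matrix."""
    cov = np.cov(items, rowvar=False)
    n = cov.shape[0]
    return n / (n - 1) * (1.0 - np.trace(cov) / cov.sum())

# Simulated responses from 1,430 raters on 10 hypothetical items
rng = np.random.default_rng(0)
latent = rng.normal(size=(1430, 1))
items = latent + rng.normal(scale=1.0, size=(1430, 10))
print(round(cronbach_alpha(items), 2))
```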
{"title":"The Scales for Assessing Emotional Disturbance–Third Edition: Reliability and Validity of the Screener","authors":"Jacqueline Huscroft-D’Angelo, Jessica Wery, Jodie D. Martin-Gutel, Corey D. Pierce, Kara Loftin","doi":"10.1177/15345084211030840","DOIUrl":"https://doi.org/10.1177/15345084211030840","url":null,"abstract":"The Scales for Assessing Emotional Disturbance Screener–Third Edition (SAED-3) is a standardized, norm-referenced measure designed to identify school-age students at risk for emotional and behavioral problems. Four studies are reported to address the psychometric status of the SAED-3 Screener. Study 1 examined the internal consistency of the Screener using a sample of 1,430 students. Study 2 investigated the interrater reliability of the Screener results across 123 pairs of teachers who had worked with the student for at least 2 months. Study 3 assessed the extent to which the results from the Screener are consistent over time by examining test–retest reliability. Study 4 examined convergent validity by comparing the Screener to the Strength and Difficulties Questionnaire (SDQ). Across all studies, samples were drawn from populations of students included in the nationally representative normative sample. The averaged coefficient alpha for the Screener was .88. Interrater reliability coefficient for the composite was .83. Test–retest reliability of the composite was .83. Correlations with the SDQ subscales ranged from .74 to .99, and the correlation of the Screener to the SDQ composite was .99. Limitations and implications for use of the Screener are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"47 1","pages":"137 - 146"},"PeriodicalIF":1.3,"publicationDate":"2021-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/15345084211030840","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43579773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring the Classification Accuracy of the Early Communication Indicator (ECI) With Dual-Language Learners From Latinx Backgrounds
Pub Date: 2021-06-30 | DOI: 10.1177/15345084211027138 | Assessment for Effective Intervention, 47(1), 209–219
Marika R. King, Anne L. Larson, J. Buzhardt
Few, if any, reliable and valid screening tools exist to identify language delay in young Spanish–English speaking dual-language learners (DLLs). The Early Communication Indicator (ECI) is a brief, naturalistic measure of expressive communication development designed to inform intervention decision-making and progress monitoring for infants and toddlers at risk for language delays. We assessed the accuracy of the ECI as a language-screening tool for DLLs from Latinx backgrounds by completing a classification accuracy analysis on 39 participants who completed the ECI and a widely used standardized reference, the Preschool Language Scales, Fifth Edition–Spanish (PLS-5 Spanish). Sensitivity of the ECI was high, but specificity was low, resulting in low classification accuracy overall. Given the limitations of using standalone assessments as a reference for DLLs, a subset of participants (n = 22) completed additional parent-report measures related to identification of language delay. When the ECI was combined with parent-report data, its sensitivity remained high and its specificity improved. Findings show preliminary support for the ECI as a language-screening tool, especially when combined with other information sources, and highlight the need for validated language assessments for DLLs from Latinx backgrounds.
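Classification accuracy here reduces to sensitivity (the proportion of truly delayed children the screener flags) and specificity (the proportion of typically developing children it correctly passes). A minimal sketch with hypothetical screener and reference flags:

```python
import numpy as np

def sensitivity_specificity(flagged: np.ndarray, delayed: np.ndarray):
    """Screening accuracy against a reference standard (boolean arrays)."""
    tp = np.sum(flagged & delayed)     # flagged and truly delayed
    fn = np.sum(~flagged & delayed)    # missed delays
    tn = np.sum(~flagged & ~delayed)   # correctly passed
    fp = np.sum(flagged & ~delayed)    # false alarms
    return tp / (tp + fn), tn / (tn + fp)

flagged = np.array([1, 1, 0, 1, 1, 0, 0, 1, 0, 0], dtype=bool)  # screener says at risk
delayed = np.array([1, 1, 0, 0, 1, 0, 0, 0, 0, 1], dtype=bool)  # reference standard
sens, spec = sensitivity_specificity(flagged, delayed)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```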
{"title":"Exploring the Classification Accuracy of the Early Communication Indicator (ECI) With Dual-Language Learners From Latinx Backgrounds","authors":"Marika R. King, Anne L. Larson, J. Buzhardt","doi":"10.1177/15345084211027138","DOIUrl":"https://doi.org/10.1177/15345084211027138","url":null,"abstract":"Few, if any, reliable and valid screening tools exist to identify language delay in young Spanish–English speaking dual-language learners (DLLs). The early communication indicator (ECI) is a brief, naturalistic measure of expressive communication development designed to inform intervention decision-making and progress monitoring for infants and toddlers at-risk for language delays. We assessed the accuracy of the ECI as a language-screening tool for DLLs from Latinx backgrounds by completing classification accuracy analysis on 39 participants who completed the ECI and a widely used standardized reference, the Preschool Language Scales, Fifth Edition–Spanish, (PLS-5 Spanish). Sensitivity of the ECI was high, but the specificity was low, resulting in low classification accuracy overall. Given the limitations of using standalone assessments as a reference for DLLs, a subset of participants (n = 22) completed additional parent-report measures related to identification of language delay. Combining the ECI with parent-report data, the specificity of the ECI remained high, and the sensitivity improved. Findings show preliminary support for the ECI as a language-screening tool, especially when combined with other information sources, and highlight the need for validated language assessment for DLLs from Latinx backgrounds.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"47 1","pages":"209 - 219"},"PeriodicalIF":1.3,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/15345084211027138","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48274691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Corrigendum to Psychometric Fundamentals of the Social Skills Improvement System: Social–Emotional Learning Edition Rating Forms
Pub Date: 2021-06-01 | DOI: 10.1177/15345084211000563 | Assessment for Effective Intervention, 46(1), 244
{"title":"Corrigendum to Psychometric Fundamentals of the Social Skills Improvement System: Social–Emotional Learning Edition Rating Forms","authors":"","doi":"10.1177/15345084211000563","DOIUrl":"https://doi.org/10.1177/15345084211000563","url":null,"abstract":"","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"46 1","pages":"244 - 244"},"PeriodicalIF":1.3,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/15345084211000563","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48645817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development and Validation of the Secondary Transition Fidelity Assessment
Pub Date: 2021-05-25 | DOI: 10.1177/15345084211014942 | Assessment for Effective Intervention, 47(1), 147–156
Allison R. Lombardi, Graham G. Rifenbark, Marcus I. Poppen, Kyle Reardon, Valerie L. Mazzotti, Mary E. Morningstar, D. Rowe, Sheida K. Raley
In this study, we examined the structural validity of the Secondary Transition Fidelity Assessment (STFA), a measure of secondary schools’ use of programs and practices shown by research to lead to meaningful college and career outcomes for all students, including students at risk for or with disabilities and students from diverse backgrounds. Drawing from evidence-based practices endorsed by the National Technical Assistance Center for Transition and the Council for Exceptional Children’s Division on Career Development and Transition, the instrument development and refinement process was iterative and involved collecting stakeholder feedback and pilot testing. Responses from a national sample of educators (N = 1,515) were subjected to an exploratory factor analysis, resulting in five measurable factors: (a) Adolescent Engagement, (b) Inclusive and Tiered Instruction, (c) School-Family Collaboration, (d) District-Community Collaboration, and (e) Professional Capacity. The five-factor model was subjected to a confirmatory factor analysis, which showed good model fit. Invariance testing on the basis of geographical region strengthened the validity evidence and revealed a high level of variability in the implementation of evidence-based transition services. Findings highlight the need for consistent and regular use of a robust self-assessment fidelity measure of transition service implementation to support all students’ transition to college and career.
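As a rough illustration of the kind of exploratory factor analysis described (five factors with rotation), the sketch below uses scikit-learn on simulated stand-in responses; the item count, loading pattern, and data are hypothetical, not the STFA's:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated stand-in for survey responses: 25 items, 5 underlying factors
rng = np.random.default_rng(1)
loadings = np.zeros((25, 5))
for f in range(5):
    loadings[f * 5:(f + 1) * 5, f] = 0.7   # five items load on each factor
latent = rng.normal(size=(1515, 5))
items = latent @ loadings.T + rng.normal(scale=0.5, size=(1515, 25))

fa = FactorAnalysis(n_components=5, rotation="varimax").fit(items)
print(np.round(fa.components_.T[:5], 2))   # recovered loadings, first 5 items
```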
{"title":"Development and Validation of the Secondary Transition Fidelity Assessment","authors":"Allison R. Lombardi, Graham G. Rifenbark, Marcus I. Poppen, Kyle Reardon, Valerie L. Mazzotti, Mary E. Morningstar, D. Rowe, Sheida K. Raley","doi":"10.1177/15345084211014942","DOIUrl":"https://doi.org/10.1177/15345084211014942","url":null,"abstract":"In this study, we examined the structural validity of the Secondary Transition Fidelity Assessment (STFA), a measure of secondary schools’ use of programs and practices demonstrated by research to lead to meaningful college and career outcomes for all students, including students atrisk for or with disabilities, and students from diverse backgrounds. Drawing from evidence-based practices endorsed by the National Technical Assistance Center for Transition and the Council for Exceptional Children’s Division on Career Development and Transition, the instrument development and refinement process was iterative and involved collecting stakeholder feedback and pilot testing. Responses from a national sample of educators (N = 1,515) were subject to an exploratory factor analysis resulting in five measurable factors: (a) Adolescent Engagement, (b) Inclusive and Tiered Instruction, (c) School-Family Collaboration, (d) District-Community Collaboration, and (e) Professional Capacity. The 5-factor model was subject to a confirmatory factor analysis that resulted in good model fit. Invariance testing on the basis of geographical region strengthened validity evidence and showed a high level of variability with regard to implementing evidence-based transition services. Findings highlight the need for consistent and regular use of a robust, self-assessment fidelity measure of transition service implementation to support all students’ transition to college and career.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"47 1","pages":"147 - 156"},"PeriodicalIF":1.3,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/15345084211014942","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42590181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of Providing Teachers With Tools for Implementing Assessment-Based Differentiated Reading Instruction in Second Grade
Pub Date: 2021-05-20 | DOI: 10.1177/15345084211014926 | Assessment for Effective Intervention, 47(1), 157–169
Martin T. Peters, Karin Hebbecker, Elmar Souvignier
Monitoring learning progress enables teachers to address students’ interindividual differences and to adapt instruction to students’ needs. We investigated whether using learning progress assessment (LPA) or using a combination of LPA and prepared material to help teachers implement assessment-based differentiated instruction resulted in improved reading skills for students. The study was conducted in second-grade classrooms in general primary education, and participants (N = 33 teachers and N = 619 students) were assigned to one of three conditions: a control group (CG); a first intervention group (LPA), which received LPA only; or a second intervention group (LPA-RS), which received a combination of LPA and material for differentiated reading instruction (the “reading sportsman”). At the beginning and the end of one school year, students’ reading fluency and reading comprehension were assessed. Compared with business-as-usual reading instruction (the CG), providing teachers with LPA or both LPA and prepared material did not lead to higher gains in reading competence. Furthermore, no significant differences between the LPA and LPA-RS conditions were found. Corresponding analyses for lower- and higher-achieving students also revealed no differences between the treatment groups. Results are discussed regarding the implementation of LPA and reading instruction in general education.
{"title":"Effects of Providing Teachers With Tools for Implementing Assessment-Based Differentiated Reading Instruction in Second Grade","authors":"Martin T. Peters, Karin Hebbecker, Elmar Souvignier","doi":"10.1177/15345084211014926","DOIUrl":"https://doi.org/10.1177/15345084211014926","url":null,"abstract":"Monitoring learning progress enables teachers to address students’ interindividual differences and to adapt instruction to students’ needs. We investigated whether using learning progress assessment (LPA) or using a combination of LPA and prepared material to help teachers implement assessment-based differentiated instruction resulted in improved reading skills for students. The study was conducted in second-grade classrooms in general primary education, and participants (N = 33 teachers and N = 619 students) were assigned to one of three conditions: a control group (CG); a first intervention group (LPA), which received LPA only; or a second intervention group (LPA-RS), which received a combination of LPA and material for differentiated reading instruction (the “reading sportsman”). At the beginning and the end of one school year, students’ reading fluency and reading comprehension were assessed. Compared with business-as-usual reading instruction (the CG), providing teachers with LPA or both LPA and prepared material did not lead to higher gains in reading competence. Furthermore, no significant differences between the LPA and LPA-RS conditions were found. Corresponding analyses for lower- and higher-achieving students also revealed no differences between the treatment groups. Results are discussed regarding the implementation of LPA and reading instruction in general education.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"47 1","pages":"157 - 169"},"PeriodicalIF":1.3,"publicationDate":"2021-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/15345084211014926","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47190712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Why Does Construct Validity Matter in Measuring Implementation Fidelity? A Methodological Case Study
Pub Date: 2021-03-15 | DOI: 10.1177/1534508421998772 | Assessment for Effective Intervention, 47(1), 67–78
W. van Dijk, A. Huggins-Manley, Nicholas A. Gage, Holly B. Lane, Michael D. Coyne
In reading intervention research, implementation fidelity is assumed to be positively related to student outcomes, but the methods used to measure fidelity are often treated as an afterthought. Fidelity has been conceptualized and measured in many different ways, suggesting a lack of construct validity. One aspect of construct validity is the fidelity index of a measure. This methodological case study examined how different decisions about fidelity indices influence the relative rank ordering of individuals on the construct of interest and shape our perception of the relation between the construct and intervention outcomes. Data for this study came from a large state-funded project to implement multi-tiered systems of support for early reading instruction. Analyses were conducted to determine whether different fidelity indices are stable in the relative rank ordering of participants and whether fidelity indices of dosage and adherence data influence researcher decisions on model building within a multilevel modeling framework. Results indicated that the fidelity indices yielded different relations to outcomes, with the most commonly used indices for both dosage and adherence performing worst. The choice of index should therefore receive considerable thought during the design phase of an intervention study.
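To make the rank-ordering point concrete, the sketch below computes three common ways to index dosage from the same hypothetical session log; the two teachers swap ranks depending on the index chosen. The log values are invented for illustration:

```python
import numpy as np

# Hypothetical session logs (minutes delivered; 0 = session missed), one row per teacher
minutes = np.array([[30, 25, 0, 30, 20, 30, 0, 28],
                    [15, 15, 15, 15, 15, 15, 15, 15]], dtype=float)
planned = 30.0

total_minutes = minutes.sum(axis=1)                  # raw dosage
sessions_held = (minutes > 0).mean(axis=1)           # proportion of sessions delivered
mean_proportion = (minutes / planned).mean(axis=1)   # mean proportion of planned time

# Teacher 1 leads on total minutes and mean proportion; teacher 2 leads on sessions held
print(total_minutes, sessions_held.round(2), mean_proportion.round(2))
```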
{"title":"Why Does Construct Validity Matter in Measuring Implementation Fidelity? A Methodological Case Study","authors":"W. van Dijk, A. Huggins-Manley, Nicholas A. Gage, Holly B. Lane, Michael D. Coyne","doi":"10.1177/1534508421998772","DOIUrl":"https://doi.org/10.1177/1534508421998772","url":null,"abstract":"In reading intervention research, implementation fidelity is assumed to be positively related to student outcomes, but the methods used to measure fidelity are often treated as an afterthought. Fidelity has been conceptualized and measured in many different ways, suggesting a lack of construct validity. One aspect of construct validity is the fidelity index of a measure. This methodological case study examined how different decisions in fidelity indices influence relative rank ordering of individuals on the construct of interest and influence our perception of the relation between the construct and intervention outcomes. Data for this study came from a large state-funded project to implement multi-tiered systems of support for early reading instruction. Analyses were conducted to determine whether the different fidelity indices are stable in relative rank ordering participants and if fidelity indices of dosage and adherence data influence researcher decisions on model building within a multilevel modeling framework. Results indicated that the fidelity indices resulted in different relations to outcomes with the most commonly used fidelity indices for both dosage and adherence being the worst performing. The choice of index to use should receive considerable thought during the design phase of an intervention study.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"47 1","pages":"67 - 78"},"PeriodicalIF":1.3,"publicationDate":"2021-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508421998772","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46447335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating the Psychometric Properties of the Social-Emotional Learning Scale
Pub Date: 2021-01-06 | DOI: 10.1177/1534508420984522 | Assessment for Effective Intervention, 47(1), 127–136
Christopher L. Thomas, Staci M. Zolkoski, S. Sass
Educators and educational support staff are becoming increasingly aware of the importance of systematic efforts to support students’ social and emotional growth. Logically, the success of social-emotional learning programs depends upon the ability of educators to assess students’ ability to process and utilize social-emotional information and to use data to guide programmatic revisions. Therefore, the purpose of the current examination was to provide evidence of the structural validity of the Social-Emotional Learning Scale (SELS), a freely available measure of social-emotional learning, within Grades 6 to 12. Students (N = 289, 48% female, 43.35% male, 61% Caucasian) completed the SELS and the Strengths and Difficulties Questionnaire. Confirmatory factor analyses of the SELS failed to support the multidimensional factor structure identified in prior investigations. The results of an exploratory factor analysis suggest that a reduced 16-item version of the SELS captures a unidimensional social-emotional construct. Furthermore, our results provide evidence of the internal consistency and concurrent validity of the reduced-length version of the instrument. Our discussion highlights the implications of the findings for social and emotional learning efforts in education and for promoting evidence-based practice.
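As a rough complement to the factor-analytic evidence, one quick heuristic for unidimensionality is the ratio of the first to second eigenvalue of the item correlation matrix; a large ratio signals a single dominant factor. A minimal sketch on simulated data (not the SELS responses, and not the authors' method):

```python
import numpy as np

def first_to_second_eigenvalue(items: np.ndarray) -> float:
    """Ratio of the two largest eigenvalues of the item correlation matrix."""
    corr = np.corrcoef(items, rowvar=False)
    eig = np.sort(np.linalg.eigvalsh(corr))[::-1]   # eigenvalues, descending
    return float(eig[0] / eig[1])

# Simulated 16-item unidimensional scale with 289 respondents (illustrative only)
rng = np.random.default_rng(0)
latent = rng.normal(size=(289, 1))
items = latent + rng.normal(scale=1.0, size=(289, 16))
print(round(first_to_second_eigenvalue(items), 1))
```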
{"title":"Investigating the Psychometric Properties of the Social-Emotional Learning Scale","authors":"Christopher L. Thomas, Staci M. Zolkoski, S. Sass","doi":"10.1177/1534508420984522","DOIUrl":"https://doi.org/10.1177/1534508420984522","url":null,"abstract":"Educators and educational support staff are becoming increasingly aware of the importance of systematic efforts to support students’ social and emotional growth. Logically, the success of social-emotional learning programs depends upon the ability of educators to assess student’s ability to process and utilize social-emotional information and use data to guide programmatic revisions. Therefore, the purpose of the current examination was to provide evidence of the structural validity of the Social-Emotional Learning Scale (SELS), a freely available measure of social-emotional learning, within Grades 6 to 12. Students (N = 289, 48% female, 43.35% male, 61% Caucasian) completed the SELS and the Strengths and Difficulties Questionnaire. Confirmatory factor analyses of the SELS failed to support a multidimensional factor structure identified in prior investigations. The results of an exploratory factor analysis suggest a reduced 16-item version of the SELS captures a unidimensional social-emotional construct. Furthermore, our results provide evidence of the internal consistency and concurrent validity of the reduced-length version of the instrument. Our discussion highlights the implications of the findings to social and emotional learning educational efforts and promoting evidence-based practice.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"47 1","pages":"127 - 136"},"PeriodicalIF":1.3,"publicationDate":"2021-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508420984522","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46107581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}