Reliability of Ratings of an English Language Arts Curriculum With the Curriculum Evaluation Guidelines
Pub Date: 2024-08-19 | DOI: 10.1177/15345084241271926
Matthew K. Burns, Heba Z. Abdelnaby, Jonie B. Welland, Katherine A. Graves, Kari Kurto
The current study examined the reliability of The Reading League Curriculum-Evaluation Guidelines (CEGs), which were developed to help school-based teams rate the presence of red flags when considering adopting specific literacy curricula. Coders (n = 30) independently used the CEGs to evaluate a free online English language arts curriculum. The results indicated strong internal consistency (α = .96) and high interrater reliability (H_M = .91, 95% CI = [.89, .93], p < .01). Overall, the CEGs hold potential as a psychometrically sound tool for evaluating reading curricula. Limitations and implications for practice and research are discussed.
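For readers who want to reproduce this kind of reliability analysis, the following is a minimal sketch of how Cronbach's alpha could be computed from a coders-by-items matrix of ratings. The 0–2 rating scale, the item count, and the simulated data are assumptions for illustration, not values taken from the study.

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a raters-by-items matrix of scores.

    ratings: shape (n_raters, n_items); each row is one coder's
    ratings of the instrument's items.
    """
    ratings = np.asarray(ratings, dtype=float)
    n_items = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)      # variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 30 coders rating 20 items on a 0-2 scale.
rng = np.random.default_rng(0)
ratings = rng.integers(0, 3, size=(30, 20))
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```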
{"title":"Reliability of Ratings of an English Language Arts Curriculum With the Curriculum Evaluation Guidelines","authors":"Matthew K. Burns, Heba Z. Abdelnaby, Jonie B. Welland, Katherine A. Graves, Kari Kurto","doi":"10.1177/15345084241271926","DOIUrl":"https://doi.org/10.1177/15345084241271926","url":null,"abstract":"The current study examined the reliability of The Reading League Curriculum-Evaluation Guidelines (CEGs), which were developed to help school-based teams rate the presence of red flags when considering adopting specific literacy curricula. Coders ( n = 30) independently used the CEGs to evaluate a free online English language arts curriculum. The results indicated strong internal consistency ( a = 0.96) and high interrater reliability ( H<jats:sub>M</jats:sub> = .91, 95% CI = .89 to .93, p < .01). Overall, the CEGs hold the potential as a psychometrically sound tool for evaluating reading curricula. Limitations and implications for practice and research are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142217948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Universal Screening for Student Mental Health: Selection of Norming Group
Pub Date: 2024-07-26 | DOI: 10.1177/15345084241265632
Meagan Z. Plant, Kelly N. Clark
The prevalence of student mental health concerns has increased the need for universal mental health screening to promote access to services. Some screeners determine risk status by comparing student scores to norming samples based on age (i.e., combined-gender) or on age and gender (i.e., separate-gender). This study examined scores on the Behavior Assessment System for Children–Third Edition, Behavioral and Emotional Screening System (BASC-3 BESS) using combined-gender and separate-gender norms for high school students (N = 594). There were no statistically significant differences in adolescents’ self-reported BASC-3 BESS raw scores or risk status classification across genders. These findings suggest that school teams are likely to identify students’ mental health status similarly, regardless of whether they use BESS separate-gender or combined-gender norms, although some students’ risk status is expected to vary. These findings have the potential to inform best practice recommendations for school-wide screenings of mental health and identification of students at risk. Additional implications, limitations, and future directions are discussed.
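As a rough illustration of the mechanics being compared, the sketch below converts hypothetical raw screener scores to T-scores under combined-gender versus separate-gender norms and counts how often risk status flips. All norm parameters and the T ≥ 61 cut are invented for illustration; actual BESS norms and cut scores come from the published manual.

```python
import numpy as np

def t_score(raw, mean, sd):
    """Convert raw scores to T-scores (M = 50, SD = 10) for a given norm group."""
    return 50 + 10 * (np.asarray(raw, dtype=float) - mean) / sd

# Hypothetical norm parameters (NOT the published BESS norms).
combined = {"mean": 12.0, "sd": 7.0}
separate = {"girls": {"mean": 13.0, "sd": 7.5}, "boys": {"mean": 11.0, "sd": 6.5}}

raw = np.array([10, 18, 25, 30])
gender = np.array(["girls", "boys", "girls", "boys"])

t_comb = t_score(raw, **combined)
t_sep = np.array([t_score(r, **separate[g]) for r, g in zip(raw, gender)])

# Flag "elevated risk" at T >= 61 (an assumed cut for this sketch).
flips = (t_comb >= 61) != (t_sep >= 61)
print(f"proportion whose risk status flips: {flips.mean():.2f}")
```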
{"title":"Universal Screening for Student Mental Health: Selection of Norming Group","authors":"Meagan Z. Plant, Kelly N. Clark","doi":"10.1177/15345084241265632","DOIUrl":"https://doi.org/10.1177/15345084241265632","url":null,"abstract":"The prevalence of student mental health concerns has increased the need for universal mental health screening to promote access to services. Some screeners determine risk status by comparing student scores to norming samples based on age (i.e., combined-gender) or on age and gender (i.e., separate-gender). This study examined scores on the Behavior Assessment System for Children–Third Edition, Behavioral and Emotional Screening System (BASC-3 BESS) using combined-gender and separate-gender norms for high school students ( N = 594). There were no statistically significant differences in adolescents’ self-reported BASC-3 BESS raw scores or risk status classification across genders. These findings suggest that school teams are likely to identify students’ mental health status similarly, regardless of whether they use BESS separate-gender or combined-gender norms, although some students’ risk status is expected to vary. These findings have the potential to inform best practice recommendations for school-wide screenings of mental health and identification of students at risk. Additional implications, limitations, and future directions are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141773037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What Is Important to Measure in Sentence-Level Language Comprehension?
Pub Date: 2024-07-26 | DOI: 10.1177/15345084241265620
Cherish M. Sarmiento, Adrea J. Truckenmiller
Educators and researchers have been interested in supporting sentence-level language comprehension for struggling readers, but this area has been challenging to research. To investigate the properties of sentences that might be useful targets for future research in instruction and assessment, we coded several features of the items in a computer-adaptive scale of sentence comprehension, the Syntactic Knowledge Task, which is designed for students in Grades 3–10. We then explored how the features of the sentences were related to each item’s difficulty value to determine which aspects of sentence-level language made sentences more and less challenging for students across a range of development. We found that genre, words that represent a logical connection, number of idea units, long words, and words on the Academic Word List were significantly associated with item difficulty. Implications for understanding students’ sentence-level language development are discussed.
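The analysis pairs each item's difficulty estimate with its coded features; a minimal sketch of that kind of item-level regression is below. The feature values and difficulty estimates are invented, and the study's actual modeling choices may differ.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical item-level data: one row per sentence item, pairing its
# difficulty estimate with the coded features named in the abstract.
items = pd.DataFrame({
    "difficulty":    [-1.5, -1.0, -0.6, -0.2, 0.0, 0.3, 0.7, 1.0, 1.4, 1.8],
    "genre":         ["narrative", "expository"] * 5,
    "logical_words": [0, 0, 1, 1, 1, 2, 2, 3, 3, 4],  # logical connectives
    "idea_units":    [1, 1, 2, 2, 2, 3, 3, 3, 4, 4],
    "long_words":    [0, 1, 1, 2, 2, 2, 3, 3, 4, 5],
    "awl_words":     [0, 0, 0, 1, 1, 2, 2, 3, 3, 4],  # Academic Word List tokens
})

# Regress item difficulty on the coded sentence features.
model = smf.ols(
    "difficulty ~ C(genre) + logical_words + idea_units + long_words + awl_words",
    data=items,
).fit()
print(model.params)
```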
{"title":"What Is Important to Measure in Sentence-Level Language Comprehension?","authors":"Cherish M. Sarmiento, Adrea J. Truckenmiller","doi":"10.1177/15345084241265620","DOIUrl":"https://doi.org/10.1177/15345084241265620","url":null,"abstract":"Educators and researchers have been interested in supporting sentence-level language comprehension for struggling readers, but it has been challenging to research. To investigate the properties of sentences that might be useful targets for future research in instruction and assessment, we coded several features of the items in a computer-adaptive scale of sentence comprehension. The Syntactic Knowledge Task is designed for students in Grades 3–10. We then explored how the features of the sentences were related to the item’s difficulty value to determine which aspects of sentence-level language made sentences more and less challenging for students across a range of development. We found that genre, words that represent a logical connection, number of idea units, long words, and words on the Academic Word List were significantly associated with item difficulty. Implications for understanding students’ sentence-level language development are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141773036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Technical Adequacy of the Data-Based Instruction Knowledge and Skills Assessment in Writing
Pub Date: 2024-05-14 | DOI: 10.1177/15345084241252369
Seohyeon Choi, Kristen McMaster, Erica S. Lembke, Manjary Guha
Teachers’ knowledge and skills about data-based instruction (DBI) can influence their self-efficacy and their implementation of DBI with fidelity, ultimately playing a crucial role in improving student outcomes. The purpose of this brief report is to provide evidence for the technical adequacy of a measure of DBI knowledge and skills in writing by examining its internal consistency reliability, considering different factor structures, and assessing item statistics using classical test theory and item response theory. We used responses from 154 elementary school teachers, primarily special educators, working with children with intensive early writing needs. Results from confirmatory factor analysis did not strongly favor either a one-factor solution, representing a single dimension of DBI knowledge and skills, or a two-factor solution, comprising knowledge and skills subscales. Internal consistency reliability coefficients were within an acceptable range, especially under the one-factor solution. Item difficulty and discrimination estimates varied across items, suggesting the need to further investigate certain items. We discuss the potential of using the DBI Knowledge and Skills Assessment, specifically in the context of measuring teacher-level DBI outcomes in writing.
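The classical test theory statistics reported here, item difficulty and discrimination, are straightforward to compute from scored responses. Below is a minimal sketch using simulated 0/1 data; the number of items and the response patterns are hypothetical.

```python
import numpy as np

# Hypothetical scored responses: rows = teachers, columns = items
# (1 = correct, 0 = incorrect). Real data would come from the
# DBI Knowledge and Skills Assessment.
rng = np.random.default_rng(1)
responses = (rng.random((154, 15)) < 0.6).astype(int)

# Item difficulty (p-value): proportion of respondents answering correctly.
difficulty = responses.mean(axis=0)

# Item discrimination: corrected item-total correlation, i.e., each item
# correlated with the total score excluding that item.
total = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])

for j, (p, d) in enumerate(zip(difficulty, discrimination)):
    print(f"item {j + 1:2d}: difficulty = {p:.2f}, discrimination = {d:.2f}")
```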
{"title":"Technical Adequacy of the Data-Based Instruction Knowledge and Skills Assessment in Writing","authors":"Seohyeon Choi, Kristen McMaster, Erica S. Lembke, Manjary Guha","doi":"10.1177/15345084241252369","DOIUrl":"https://doi.org/10.1177/15345084241252369","url":null,"abstract":"Teachers’ knowledge and skills about data-based instruction (DBI) can influence their self-efficacy and their implementation of DBI with fidelity, ultimately playing a crucial role in improving student outcomes. The purpose of this brief report is to provide evidence for the technical adequacy of a measure of DBI knowledge and skills in writing by examining its internal consistency reliability, considering different factor structures, and assessing item statistics using classical test theory and item response theory. We used responses from 154 elementary school teachers, primarily special educators, working with children with intensive early writing needs. Results from confirmatory factor analysis did not strongly favor either a one-factor solution, representing a single dimension of DBI knowledge and skills, or a two-factor solution, comprising knowledge and skills subscales. Internal consistency reliability coefficients were within an acceptable range, especially with the one-factor solution assumed. Item difficulty and discrimination estimates varied across items, suggesting the need to further investigate certain items. We discuss the potential of using the DBI Knowledge and Skills Assessment, specifically in the context of measuring teacher-level DBI outcomes in writing.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140978399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Validation of the Youth Internalizing Problem Screener in Singapore
Pub Date: 2024-04-28 | DOI: 10.1177/15345084241247062
Minglee Yong
School-wide screening of internalizing symptoms is an important strategy for early identification and prevention of more serious and impairing emotional and behavioral health problems in adolescents. However, threshold cut-off scores determined for screening tools may not be suitable for all populations. Using a sample of 237 Singaporean secondary school students, this study validated the Youth Internalizing Problems Screener (YIPS) for local use. Results of confirmatory factor analyses supported a one-factor solution for the construct. A threshold cut-off score of 27 showed good classification accuracy based on receiver operating characteristic (ROC) analyses. Correlational and path analyses provided evidence of convergent and predictive validity for using the YIPS to indicate at-risk status. YIPS status was uniquely associated with girls’ sense of school well-being over and above the nature of their interpersonal relationships and their sense of inadequacy. Overall, the YIPS demonstrated comparable sensitivity and specificity rates even though a different cut-off score was used for this study sample. The use of the YIPS as a screening tool in a multitier system of support and directions for future development are discussed.
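A sketch of the kind of ROC analysis used to evaluate a cut score appears below: it computes the curve from hypothetical screener scores and a hypothetical binary criterion, then picks a threshold by Youden's J. The score distributions and the resulting cut are illustrative only; the study's cut of 27 came from its own data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: screener total scores and a binary criterion of
# elevated internalizing problems from a validated reference measure.
rng = np.random.default_rng(2)
criterion = rng.integers(0, 2, size=237)
scores = np.where(criterion == 1,
                  rng.normal(30, 5, 237),   # at-risk students score higher
                  rng.normal(22, 5, 237))

fpr, tpr, thresholds = roc_curve(criterion, scores)
print(f"AUC = {roc_auc_score(criterion, scores):.2f}")

# Youden's J (sensitivity + specificity - 1) is one common way to pick a cut.
j = tpr - fpr
best = np.argmax(j)
print(f"cut near {thresholds[best]:.0f}: sensitivity = {tpr[best]:.2f}, "
      f"specificity = {1 - fpr[best]:.2f}")
```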
{"title":"Validation of the Youth Internalizing Problem Screener in Singapore","authors":"Minglee Yong","doi":"10.1177/15345084241247062","DOIUrl":"https://doi.org/10.1177/15345084241247062","url":null,"abstract":"The use of a screening tool for school-wide screening of internalizing symptoms is an important strategy for early identification and prevention of more serious and impairing emotional and behavioral health problems in adolescents. However, threshold cut-off scores determined for screening tools may not be suitable for all populations. Using a sample of 237 Singaporean secondary school students, this study validated the Youth Internalizing Problems Screener (YIPS) for local use. Results of confirmatory factor analyses supported a one-factor solution for the construct. A threshold cut-off score of 27 was found to show good classification accuracy based on receiver operating characteristics (ROC) analyses. Correlational and path analyses provided evidence of convergent and predictive validity for using YIPS to indicate at-risk status. The YIPS status was uniquely associated with girls’ sense of school well-being over and above the nature of their interpersonal relationships and their sense of inadequacy. Overall, YIPS demonstrated comparable sensitivity and specificity rates even though a different cut-off score was used for this study sample. The use of YIPS as a screening tool in a multitier system of support and directions for future development were discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140810642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Empirical Information to Prioritize Early Literacy Assessment and Instruction in Preschool and Kindergarten
Pub Date: 2024-04-26 | DOI: 10.1177/15345084241247059
Alisha Wackerle-Hollman, Robin Hojnoski, Kristen Missall, Mohammed A. A. Abuela, Kristin Running
Early literacy skill development predicts later reading success, and development of skills in specific domains during the preschool years has been established as both a prerequisite for and precursor to reading. Early literacy assessments typically include measures of separate skills across domains, and results can assist with determining where instruction may be most needed. When multiple areas of need are identified, understanding which skills to prioritize can be a challenge. Therefore, empirically identifying the relative contribution of each skill measured in preschool to subsequent reading success can promote more efficient systems of assessment. This study, conducted in the United States, examined the predictive validity of early literacy skills measured in preschool compared to skills measured in kindergarten, with a specific practical focus on identifying the most efficient predictive model for understanding reading readiness. Participants were 119 preschoolers (mean age = 66 months) who mostly spoke English as their primary language (79%). Results indicated early literacy and language skills in preschool are highly predictive of early reading in kindergarten, accounting for 59% of the variance in a reading composite score. The most parsimonious model indicated that first sounds, letter sounds, early comprehension, and expressive vocabulary measures explained 52% of the variance in children’s kindergarten reading performance.
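The comparison at the heart of this study, a full predictor set versus a parsimonious one, reduces to comparing variance explained across nested regression models. The sketch below simulates that comparison; the predictors, coefficients, and resulting R² values are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical preschool predictors and a kindergarten reading composite;
# the first four columns stand in for first sounds, letter sounds,
# early comprehension, and expressive vocabulary.
rng = np.random.default_rng(3)
n = 119
X_full = rng.normal(size=(n, 8))   # all preschool measures
X_parsimonious = X_full[:, :4]     # the four retained predictors
reading_composite = (X_full[:, :4] @ np.array([0.4, 0.3, 0.3, 0.2])
                     + rng.normal(scale=0.7, size=n))

for label, X in [("full", X_full), ("parsimonious", X_parsimonious)]:
    r2 = LinearRegression().fit(X, reading_composite).score(X, reading_composite)
    print(f"{label} model: R^2 = {r2:.2f}")
```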
{"title":"Using Empirical Information to Prioritize Early Literacy Assessment and Instruction in Preschool and Kindergarten","authors":"Alisha Wackerle-Hollman, Robin Hojnoski, Kristen Missall, Mohammed A. A. Abuela, Kristin Running","doi":"10.1177/15345084241247059","DOIUrl":"https://doi.org/10.1177/15345084241247059","url":null,"abstract":"Early literacy skill development predicts later reading success, and development of skills in specific domains during the preschool years has been established as both a prerequisite and precursory for reading. Early literacy assessments typically include measures of separate skills across domains, and results can assist with determining where instructions may be most needed. When multiple areas of need are identified, understanding which skills to prioritize can be a challenge. Therefore, empirically identifying the relative contribution of each skill measured in preschool to subsequent reading success can promote more efficient systems of assessment. This study, conducted in the United States, examined the predictive validity of early literacy skills measured in preschool compared to skills measured in kindergarten, with a specific practical focus on identifying the most efficient predictive model for understanding reading readiness. Participants were 119 preschoolers (mean age = 66 months) who mostly spoke English as their primary language (79%). Results indicated early literacy and language skills in preschool are highly predictive of early reading in kindergarten, accounting for 59% of the variance in a reading composite score. The most parsimonious model indicated that first sounds, letter sounds, early comprehension, and expressive vocabulary measures adequately explained 52% of the variance in children’s kindergarten reading performance.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140805164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing an Instructional Level During Reading Fluency Interventions: A Meta-Analysis of the Effects on Reading
Pub Date: 2024-04-18 | DOI: 10.1177/15345084241247064
Matthew K. Burns
The current study meta-analyzed 27 effects from 21 studies to determine the effect that assessment of text difficulty had on reading fluency interventions, which resulted in an overall weighted effect size (ES = 0.43, 95% CI = [0.25, 0.62], p < .001). Using reading passages that represented an instructional level based on accuracy criteria led to a large weighted effect (ES = 1.03, 95% CI = [0.65, 1.40], p < .01), which was reliably larger (p < .05) than that for reading fluency interventions that used reading passages with an instructional level based on rate criteria (weighted ES = 0.29, 95% CI = [0.07, 0.50], p < .01). Using reading passages based on leveling systems or those written at the students’ current grade level resulted in small weighted effects. The approach to determining difficulty for reading passages used in reading fluency interventions accounted for 11% of the variance in the effect (p < .05) beyond student group (no risk, at-risk, disability) and type of fluency intervention. The largest weighted effect was found for students with reading disabilities (ES = 1.14, 95% CI = [0.64, 1.65], p < .01).
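Weighted effect sizes like those reported here are typically computed by inverse-variance weighting. The sketch below shows a fixed-effect version with made-up per-study effects and standard errors; the meta-analysis itself may have used a random-effects model, and its data are not reproduced here.

```python
import numpy as np
from scipy import stats

def weighted_mean_effect(es: np.ndarray, se: np.ndarray):
    """Fixed-effect inverse-variance weighted mean effect size with 95% CI."""
    w = 1.0 / se**2                       # weight = 1 / variance
    mean = np.sum(w * es) / np.sum(w)
    se_mean = np.sqrt(1.0 / np.sum(w))    # SE of the weighted mean
    ci = (mean - 1.96 * se_mean, mean + 1.96 * se_mean)
    z = mean / se_mean
    p = 2 * (1 - stats.norm.cdf(abs(z)))  # two-tailed z test
    return mean, ci, p

# Hypothetical per-study effect sizes and standard errors.
es = np.array([0.25, 0.60, 1.10, 0.35, 0.45])
se = np.array([0.15, 0.20, 0.30, 0.12, 0.18])
mean, (lo, hi), p = weighted_mean_effect(es, se)
print(f"weighted ES = {mean:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], p = {p:.3f}")
```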
{"title":"Assessing an Instructional Level During Reading Fluency Interventions: A Meta-Analysis of the Effects on Reading","authors":"Matthew K. Burns","doi":"10.1177/15345084241247064","DOIUrl":"https://doi.org/10.1177/15345084241247064","url":null,"abstract":"The current study meta-analyzed 27 effects from 21 studies to determine the effect assessment of text difficulty had on reading fluency interventions, which resulted in an overall weighted effect size ( ES) = 0.43 (95% CI = [0.25, 0.62], p < .001). Using reading passages that represented an instructional level based on accuracy criteria led to a large weighted effect of ES = 1.03, 95% CI = [0.65, 1.40], p < .01), which was reliably larger ( p < .05) than that for reading fluency interventions that used reading passages with an instructional level based on rate criteria (weighted ES = 0.29, 95% CI = [0.07, 0.50], p < .01). Using reading passages based on leveling systems or those written at the students’ current grade level resulted in small weighted effects. The approach to determining difficulty for reading passages used in reading fluency interventions accounted for 11% of the variance in the effect ( p < .05) beyond student group (no risk, at-risk, disability) and type of fluency intervention. The largest weighted effect was found for students with reading disabilities ( ES = 1.14, 95% CI = [0.64, 1.65], p < .01).","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140627368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Re-examining the Relation Between Social Validity and Treatment Integrity in Ci3T Models
Pub Date: 2024-04-02 | DOI: 10.1177/15345084241239302
Katie Scarlett Lane Pelton, Kathleen Lynne Lane, Wendy Peia Oakes, Mark Matthew Buckman, Nathan Allen Lane, Grant E. Allen, D. Betsy McCoach, David James Royer, Eric Alan Common
Educators across the United States have designed and implemented Comprehensive, Integrated, Three-tiered (Ci3T) models to meet K-12 students’ academic, behavioral, and social and emotional well-being needs. As part of implementation efforts, educators collect and use social validity and treatment integrity data to capture faculty and staff views of the plan’s goals, procedures, and outcomes and the degree to which the plan is implemented as designed (e.g., procedures for teaching, reinforcing, and monitoring). In this study, we re-examined the relation between social validity and treatment integrity utilizing hierarchical linear modeling with extant data from a research partnership across 27 schools in five midwestern districts. Findings suggested an educator’s fall and spring social validity scores on the Primary Intervention Rating Scale (PIRS) predicted their treatment integrity scores on the Ci3T Treatment Integrity: Teacher Self-Report (Ci3T TI: TSR) at the same time point. Schoolwide average fall PIRS scores also statistically significantly predicted spring Ci3T TI: TSR scores. Results suggested schoolwide context is important for sustained implementation of Tier 1 procedures during the first year. Findings demonstrate the complex nature of implementing a schoolwide plan, which involves each individual’s behavior while also relying on others to facilitate implementation. We discuss limitations and future directions.
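Hierarchical linear models of this sort, educators nested within schools, can be sketched as a random-intercept mixed model. The data below are invented and far smaller than the study's sample; they only illustrate the model structure.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical educator-level data nested in schools: fall social validity
# (PIRS) predicting spring treatment integrity (Ci3T TI: TSR proportion).
data = pd.DataFrame({
    "school":     ["A"] * 5 + ["B"] * 5 + ["C"] * 5,
    "pirs_fall":  [3.2, 3.8, 4.1, 2.9, 3.5,
                   4.4, 4.0, 3.6, 4.2, 3.9,
                   2.8, 3.1, 3.4, 3.0, 3.3],
    "tsr_spring": [0.62, 0.75, 0.81, 0.55, 0.70,
                   0.88, 0.79, 0.72, 0.85, 0.77,
                   0.51, 0.58, 0.66, 0.57, 0.63],
})

# Random-intercept model: educators (rows) nested within schools (groups).
# A tiny sample like this is for structure only; real analyses would
# include many more schools and educators.
model = smf.mixedlm("tsr_spring ~ pirs_fall", data, groups=data["school"]).fit()
print(model.summary())
```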
{"title":"Re-examining the Relation Between Social Validity and Treatment Integrity in Ci3T Models","authors":"Katie Scarlett Lane Pelton, Kathleen Lynne Lane, Wendy Peia Oakes, Mark Matthew Buckman, Nathan Allen Lane, Grant E. Allen, D. Betsy McCoach, David James Royer, Eric Alan Common","doi":"10.1177/15345084241239302","DOIUrl":"https://doi.org/10.1177/15345084241239302","url":null,"abstract":"Educators across the United States have designed and implemented Comprehensive, Integrated, Three-tiered (Ci3T) models to meet K-12 students’ academic, behavioral, and social and emotional well-being needs. As part of implementation efforts, educators collect and use social validity and treatment integrity data to capture faculty and staff views of the plan’s goals, procedures, and outcomes and the degree to which the plan is implemented as designed (e.g., procedures for teaching, reinforcing, and monitoring). In this study, we re-examined the relation between social validity and treatment integrity utilizing hierarchical linear modeling with extant data from a research partnership across 27 schools in five midwestern districts. Findings suggested an educator’s fall and spring social validity score on the Primary Intervention Rating Scale (PIRS) predicted their treatment integrity scores on the Ci3T Treatment Integrity: Teacher Self-Report (CI3T TI: TSR) in the same timepoint. Schoolwide average fall PIRS scores also statistically significantly predicted spring Ci3T TI: TSR scores. Results suggested schoolwide context is important for sustained implementation of Tier 1 procedures during the first year. Findings demonstrate the complex nature of implementing a schoolwide plan, involving each individual’s behavior while also relying on others to facilitate implementation. We discuss limitations and future directions.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140572708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transition and Future of Assessment for Effective Intervention
Pub Date: 2024-03-22 | DOI: 10.1177/15345084241240403
Nathan A. Stevenson, Aarti P. Bellara
By tradition, editors of Assessment for Effective Intervention (AEI) typically serve 3-year terms. As of January 1, 2024, AEI officially transitioned from outgoing editor Dr. Leanne Ketterlin Geller to incoming co-editors Drs. Aarti Bellara and Nathan Stevenson. The following article describes the recent history and current state of AEI as a peer-reviewed scientific journal. The new editorial team describes some of the challenges ahead and their vision for the future of AEI.
{"title":"Transition and Future of Assessment for Effective Intervention","authors":"Nathan A. Stevenson, Aarti P. Bellara","doi":"10.1177/15345084241240403","DOIUrl":"https://doi.org/10.1177/15345084241240403","url":null,"abstract":"By tradition, editors of Assessment for Effective Intervention (AEI) typically serve 3-year terms. As of January 1, 2024, AEI officially transitioned from outgoing editor Dr. Leanne Ketterlin Geller to incoming co-editors Drs. Aarti Bellara and Nathan Stevenson. The following article describes recent history and current state of AEI as a peer-review scientific journal. The new editorial team describes some of the challenges ahead and their vision for the future of AEI.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140202570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-Assessment Survey: Evaluation of a Revised Measure Assessing Positive Behavioral Interventions and Supports
Pub Date: 2024-03-04 | DOI: 10.1177/15345084241235226
Angus Kittelman, Sara Izzard, Kent McIntosh, Kelsey R. Morris, Timothy J. Lewis
The purpose of this study was to evaluate the psychometric properties of the Self-Assessment Survey (SAS) 4.0, an updated measure assessing implementation fidelity of positive behavioral interventions and supports (PBIS). A total of 627 school personnel from 33 schools in six U.S. states completed the SAS 4.0 during the 2021–2022 school year. We evaluated data demonstrating the measure’s reliability (internal consistency, interrater reliability between PBIS team and non-team members), internal structure, and convergent validity for assessing implementation of Tier 1, 2, and 3 systems. We found strong internal consistency (overall and across subscales) and evidence supporting its internal structure as a four-factor measure. In addition, we found the SAS 4.0 (overall score and subscales) to be statistically significantly correlated with another widely used and empirically evaluated PBIS fidelity measure, the Tiered Fidelity Inventory (TFI). We found a statistically significant correlation between the SAS 4.0 and the SAS 3.0 for the Schoolwide Systems subscale but not the other subscales. We discuss limitations given the current sample and describe implications for how PBIS teams can use the measure for school improvement and decision making.
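Convergent validity claims like the SAS–TFI correlation reduce to a simple bivariate test. The sketch below computes a Pearson correlation on simulated school-level scores; the values and the strength of the relation are hypothetical, not the study's results.

```python
import numpy as np
from scipy import stats

# Hypothetical school-level fidelity scores on the two measures.
rng = np.random.default_rng(4)
sas = rng.normal(80, 10, size=33)             # SAS 4.0 overall score
tfi = 0.8 * sas + rng.normal(0, 8, size=33)   # TFI, correlated by construction

r, p = stats.pearsonr(sas, tfi)
print(f"convergent validity: r = {r:.2f}, p = {p:.4f}")
```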
{"title":"Self-Assessment Survey: Evaluation of a Revised Measure Assessing Positive Behavioral Interventions and Supports","authors":"Angus Kittelman, Sara Izzard, Kent McIntosh, Kelsey R. Morris, Timothy J. Lewis","doi":"10.1177/15345084241235226","DOIUrl":"https://doi.org/10.1177/15345084241235226","url":null,"abstract":"The purpose of this study was to evaluate the psychometric properties of the Self-Assessment Survey (SAS) 4.0, an updated measure assessing implementation fidelity of positive behavioral interventions and supports (PBIS). A total of 627 school personnel from 33 schools in six U.S. states completed the SAS 4.0 during the 2021–2022 school year. We evaluated data demonstrating the measure’s reliability (internal consistency, interrater reliability between PBIS team and non-team members), internal structure, and convergent validity for assessing implementation of Tier 1, 2, and 3 systems. We found strong internal consistency (overall and across subscales) and evidence regarding the internal structure as a four-factor measure. In addition, we found the SAS 4.0 (overall score and subscales) to be statistically significantly correlated with another widely used and empirically evaluated PBIS fidelity measure, the Tiered Fidelity Inventory (TFI). We found a statistically significant correlation between the SAS 4.0 and the SAS 3.0 for the Schoolwide Systems subscale but not other subscales. We discuss limitations given the current sample and describe implications for how PBIS teams can use the measure for school improvement and decision making.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140032664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}