Evaluation and a Proposed Revision of the CAMM Among Underrepresented Elementary School Children
Pub Date: 2020-06-01 | DOI: 10.1177/1534508419845465
Peter G. Mezo, Hannah C. Herc, Kelsey J. Pritchard, W. A. Bullock
The Child and Adolescent Mindfulness Measure (CAMM) is a frequently used measure of mindfulness in school settings. This study evaluates the psychometric properties and internal consistency of the CAMM in a predominantly African American, low socioeconomic status (SES) school sample drawn from students in kindergarten through fourth grade. In addition, a revised version of the CAMM (the CAMM-R) was developed and evaluated in the same sample. Results are generally supportive of the internal consistency and item-level characteristics of both the CAMM and the CAMM-R. These results are discussed in terms of implications for understanding the reliability and validity of the CAMM and CAMM-R among underrepresented students, as well as students within a younger sample.
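As a worked illustration of the internal-consistency index reported in studies like this one, the following Python sketch computes coefficient alpha for a simulated respondent-by-item score matrix; the data and the 10-item design are placeholders, not the CAMM data.

```python
# Coefficient alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scored responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of each respondent's total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))                          # shared trait
responses = true_score + rng.normal(scale=0.8, size=(200, 10))  # 10 correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```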
{"title":"Evaluation and a Proposed Revision of the CAMM Among Underrepresented Elementary School Children","authors":"Peter G. Mezo, Hannah C. Herc, Kelsey J. Pritchard, W. A. Bullock","doi":"10.1177/1534508419845465","DOIUrl":"https://doi.org/10.1177/1534508419845465","url":null,"abstract":"The Child and Adolescent Mindfulness Measure (CAMM) is a frequently used measure of mindfulness in school settings. This study evaluates the psychometric properties and internal consistency of the CAMM in a predominantly African American, low socioeconomic status (SES) school sample drawn from students in kindergarten through fourth grade. In addition, a revised version of the CAMM (the CAMM-R) was developed and evaluated in the same sample. Results are generally supportive of the internal consistency and item-level characteristics of both the CAMM and the CAMM-R. These results are discussed in terms of implications for understanding the reliability and validity of the CAMM and CAMM-R among underrepresented students, as well as students within a younger sample.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508419845465","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48171010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Call for Nominations: Editor(s), Assessment for Effective Intervention
Pub Date: 2020-06-01 | DOI: 10.1177/1534508420916957
P. Kipping
{"title":"Call for Nominations: Editor(s), Assessment for Effective Intervention","authors":"P. Kipping","doi":"10.1177/1534508420916957","DOIUrl":"https://doi.org/10.1177/1534508420916957","url":null,"abstract":"","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508420916957","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44579082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Structural Validity and Reliability of Social, Academic, and Emotional Behavior Risk Screener–Student Rating Scale Scores: A Replication Study
Pub Date: 2020-03-04 | DOI: 10.1177/1534508420909527
Stephen P. Kilgus, Katie Eklund, Nathaniel P. von der Embse, Madison M. Weist, Alexandra J. Barber, Megan Kaul, Sophia Dodge
The purpose of this study was to evaluate the structural validity, internal consistency, and measurement invariance of scores from the Social, Academic, and Emotional Behavior Risk Screener–Student Rating Scale (mySAEBRS), a student self-report universal screening tool. Participants included 24,094 K–12 students who completed the mySAEBRS. Confirmatory factor analyses (CFAs) supported the fit of a bifactor model, wherein each item corresponds to both a general factor (i.e., Total Behavior) and one of three narrow factors (i.e., Social Behavior, Academic Behavior, and Emotional Behavior). This model's fit was superior to that of alternative factor structures (i.e., unidimensional, correlated-factor, and higher order). A review of pattern coefficients suggested items were relatively split, with some items loading higher on the general factor and others loading higher on their narrow factor. A series of multigroup CFAs supported the configural and metric invariance of the bifactor model, while yielding less consistent support for scalar/threshold invariance. Omega reliability coefficients indicated each mySAEBRS scale was associated with acceptable internal consistency (>.70). However, when accounting for other factors, only the Total Behavior, Social Behavior, and Emotional Behavior scales demonstrated acceptable internal consistency (i.e., >.50). Implications for practice and directions for future research are discussed.
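The omega coefficients the abstract mentions can be recovered directly from a standardized bifactor solution. The sketch below assumes a toy model with one general factor and a single narrow factor over six items; all loadings are invented for illustration, not the mySAEBRS estimates.

```python
# Omega total and omega hierarchical from standardized bifactor loadings,
# assuming orthogonal factors (the usual bifactor constraint).
import numpy as np

general = np.array([0.60, 0.55, 0.62, 0.58, 0.50, 0.65])  # general-factor loadings
narrow = np.array([0.30, 0.35, 0.28, 0.40, 0.45, 0.25])   # narrow-factor loadings
unique = 1 - general**2 - narrow**2                        # unique (error) variances

denom = general.sum()**2 + narrow.sum()**2 + unique.sum()
omega_total = (general.sum()**2 + narrow.sum()**2) / denom
omega_hier = general.sum()**2 / denom  # variance attributable to the general factor alone

print(f"omega total = {omega_total:.2f}, omega hierarchical = {omega_hier:.2f}")
```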
{"title":"Structural Validity and Reliability of Social, Academic, and Emotional Behavior Risk Screener–Student Rating Scale Scores: A Replication Study","authors":"Stephen P. Kilgus, Katie Eklund, Nathaniel P. von der Embse, Madison M. Weist, Alexandra J. Barber, Megan Kaul, Sophia Dodge","doi":"10.1177/1534508420909527","DOIUrl":"https://doi.org/10.1177/1534508420909527","url":null,"abstract":"The purpose of this study was to evaluate the structural validity, internal consistency, and measurement invariance of scores from the Social, Academic, and Emotional Behavior Risk Screener–Student Rating Scale (mySAEBRS), a student self-report universal screening tool. Participants included 24,094 K–12 students who completed the mySAEBRS. Confirmatory factor analyses (CFAs) supported the fit of a bifactor model, wherein each item corresponding to both a general factor (i.e., Total Behavior) and one of three narrow factors (i.e., Social Behavior, Academic Behavior, and Emotional Behavior). Such model fit was superior to that of alternative factor structures (i.e., unidimensional, correlated-factor, and higher order). A review of pattern coefficients suggested items were relatively split, with some items loading higher on the general factor and others loading higher on their narrow factor. A series of multigroup CFAs supported the configural and metric invariance of the bifactor model, while yielding less consistent support for scalar/threshold invariance. Omega reliability coefficients indicated each mySAEBRS scale was associated with acceptable internal consistency (>.70). However, when accounting for other factors, only the Total Behavior, Social Behavior, and Emotional Behavior scales demonstrated acceptable internal consistency (i.e., >.50). Implications for practice and directions for future research are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508420909527","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43584137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Does Training Predict Second-Grade Teachers’ Use of Student Data for Decision-Making in Reading and Mathematics?
Pub Date: 2020-02-03 | DOI: 10.1177/1534508420902523
Marissa J. Filderman, Jessica R. Toste, North Cooc
Although national legislation and policy call for the use of student assessment data to support instruction, evidence suggests that teachers lack the knowledge and skills required to effectively use data. Previous studies have demonstrated the potential of training for increasing immediate teacher outcomes (i.e., knowledge, skills, and beliefs), yet research is still needed that investigates whether these immediate learning outcomes correspond to improved practices in reading and math instruction. Using the Early Childhood Longitudinal Study: Kindergarten (2011), the present study investigated whether data-focused training predicted teacher use of data for four prevalent decision-making outcomes: monitor progress on specific skills, identify skill deficits, monitor overall progress of students performing below benchmark, and determine placement in instructional tiers. Results indicate that professional development to use data to identify struggling learners and coursework focused on the use of assessment to select interventions and supports significantly predicted teachers’ frequent use of data across key decision-making dimensions in reading instruction. Results for math instruction differ in that more frequent data use was not consistent across outcomes, more training sessions were needed, and professional development to use data to guide instruction significantly predicted use of data to monitor students who performed below benchmark.
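A hedged sketch of the kind of model that links training to reported data use: a logistic regression of a binary "frequent data use" outcome on training indicators. Variable names and data are hypothetical stand-ins, not the ECLS-K variables or the study's estimates.

```python
# Simulate teachers, then fit a logit of frequent data use on two training indicators.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
pd_training = rng.integers(0, 2, n)  # attended data-focused professional development
coursework = rng.integers(0, 2, n)   # completed assessment-focused coursework
logit_p = -0.5 + 0.8 * pd_training + 0.6 * coursework
frequent_use = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([pd_training, coursework]))
result = sm.Logit(frequent_use, X).fit(disp=0)
print(result.summary())
```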
{"title":"Does Training Predict Second-Grade Teachers’ Use of Student Data for Decision-Making in Reading and Mathematics?","authors":"Marissa J. Filderman, Jessica R. Toste, North Cooc","doi":"10.1177/1534508420902523","DOIUrl":"https://doi.org/10.1177/1534508420902523","url":null,"abstract":"Although national legislation and policy call for the use of student assessment data to support instruction, evidence suggests that teachers lack the knowledge and skills required to effectively use data. Previous studies have demonstrated the potential of training for increasing immediate teacher outcomes (i.e., knowledge, skills, and beliefs), yet research is still needed that investigates whether these immediate learning outcomes correspond to improved practices in reading and math instruction. Using the Early Childhood Longitudinal Survey: Kindergarten (2011), the present study sought to investigate whether data-focused training predicted teacher use of data for four prevalent decision-making outcomes: monitor progress on specific skills, identify skill deficits, monitor overall progress of students performing below benchmark, and determine placement in instructional tiers. Results indicate that professional development to use data to identify struggling learners and coursework focused on the use of assessment to select interventions and supports significantly predicted teachers’ frequent use of data across key decision-making dimensions in reading instruction. Results for math instruction differ in that more frequent data use was not consistent across outcomes, more training sessions were needed, and professional development to use data to guide instruction significantly predicted use of data to monitor students who performed below benchmark.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508420902523","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46800201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Development and Technical Adequacy of Instructionally Relevant Vocabulary Measures for Young Students
Pub Date: 2020-01-20 | DOI: 10.1177/1534508419897997
David C. Parker, Lisa H. Stewart, S. Thomson, Ruth A. Kaminski
Vocabulary skills are important for overall reading competence, but vocabulary assessment approaches that inform instructional decision-making and are sensitive to improvement are limited. This article describes a process for developing vocabulary measures designed to facilitate data-driven decision-making for kindergarten and first-grade students who are at risk in vocabulary. A pilot study suggested the measures could be administered and scored with fidelity, and also produced promising data for indices of reliability, criterion-related validity, and sensitivity to growth, particularly for a rating-based scoring metric. Implications and considerations for developing instructionally relevant vocabulary measures are discussed.
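Two of the pilot indices named above lend themselves to a compact worked example: criterion-related validity as a correlation with an external measure, and sensitivity to growth as a fall-to-winter gain. Everything below is simulated for illustration; none of it reflects the pilot data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
fall = rng.normal(50, 10, 120)                  # fall vocabulary scores
winter = fall + rng.normal(5, 4, 120)           # winter scores with an average gain of 5
criterion = 0.7 * fall + rng.normal(0, 8, 120)  # hypothetical external criterion measure

r_validity, _ = stats.pearsonr(fall, criterion)      # criterion-related validity
t_growth, p_growth = stats.ttest_rel(winter, fall)   # paired test of mean growth
print(f"criterion r = {r_validity:.2f}; growth t = {t_growth:.1f} (p = {p_growth:.3g})")
```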
{"title":"Development and Technical Adequacy of Instructionally Relevant Vocabulary Measures for Young Students","authors":"David C. Parker, Lisa H. Stewart, S. Thomson, Ruth A. Kaminski","doi":"10.1177/1534508419897997","DOIUrl":"https://doi.org/10.1177/1534508419897997","url":null,"abstract":"Vocabulary skills are important for overall reading competence, but vocabulary assessment approaches that inform instructional decision-making and are sensitive to improvement are limited. This article describes a process for developing vocabulary measures designed to facilitate data-driven decision-making for kindergarten and first-grade students who are at risk in vocabulary. A pilot study suggested the measures could be administered and scored with fidelity, and also produced promising data for indices of reliability, criterion-related validity, and sensitivity to growth, particularly for a rating-based scoring metric. Implications and considerations for developing instructionally relevant vocabulary measures are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508419897997","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48771184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Associations Between Teacher Ratings and Direct Assessment of Elementary Students’ Speech and Language Skills
Pub Date: 2020-01-20 | DOI: 10.1177/1534508419900546
Jason C. Chow, Jennifer R. Frey, Lauren Hunter Naples
We investigated the associations between teacher-rated and direct assessments of early elementary students’ speech and language skills to explore whether using teachers as primary screeners yielded assessment data that reliably identified young students with language difficulties who may need a more comprehensive evaluation. We assessed first- and second-grade students (N = 365) on syntax, morphology, and vocabulary, screened for global speech and language development, and analyzed a teacher-completed norm-referenced communication rating scale. Teacher-rated language significantly predicted students’ latent language skills, and teachers’ ratings of students’ communication skills were not influenced by students’ gender, race/ethnicity, English language learner status, or special education status. We conclude with a discussion of implications for school-based research, assessment, and practice.
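One way to probe the rating-bias question this abstract answers is to regress teacher ratings on direct-assessment scores plus a demographic indicator and check whether the indicator adds predictive power. The sketch below is a minimal, simulated version of that check; names and coefficients are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 365
direct = rng.normal(0, 1, n)  # composite of syntax, morphology, and vocabulary scores
ell = rng.integers(0, 2, n)   # English language learner indicator
rating = 0.6 * direct + rng.normal(0, 0.8, n)  # ratings depend only on measured skill

X = sm.add_constant(np.column_stack([direct, ell]))
fit = sm.OLS(rating, X).fit()
print(fit.params)   # [intercept, direct, ell]
print(fit.pvalues)  # a nonsignificant ell term is consistent with unbiased ratings
```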
{"title":"Associations Between Teacher Ratings and Direct Assessment of Elementary Students’ Speech and Language Skills","authors":"Jason C. Chow, Jennifer R. Frey, Lauren Hunter Naples","doi":"10.1177/1534508419900546","DOIUrl":"https://doi.org/10.1177/1534508419900546","url":null,"abstract":"We investigated the associations between teacher-rated and direct assessments of early elementary students’ speech and language skills to explore whether using teachers as primary screeners yielded assessment data that reliably identified young students with language difficulties who many need a more comprehensive evaluation. We assessed first- and second-grade students (N = 365) on syntax, morphology, and vocabulary, screened for global speech and language development, and analyzed a teacher-completed norm-referenced communication rating scale. Teacher-rated language significantly predicted students’ latent language skills, and teachers’ ratings of students’ communication skills were not influenced by students’ gender, race/ethnicity, English language learner status, or special education status. We conclude with a discussion of implications for school-based research, assessment, and practice.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508419900546","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43670065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Brief Experimental Analysis of Math Interventions: A Synthesis of Evidence
Pub Date: 2019-12-31 | DOI: 10.1177/1534508419883937
Nicole M. McKevett, Robin S. Codding
Brief experimental analysis (BEA) is a quick method used to identify the function of student learning difficulties and match effective interventions to students’ needs. Extensive work has explored the use of this methodology to determine effective reading interventions; however, fewer published studies have examined the use of BEAs in math. The purpose of the current review was to identify all studies that have used BEA methodology in math. Fifteen studies, comprising 63 participants, used BEA methodology to identify the most effective math intervention for students. Results of the synthesis indicate that the majority of BEAs compared skill and performance interventions on computational fluency; however, the methodology varied across the included studies. Strengths and limitations of the research, in addition to implications for research and practice, are discussed.
{"title":"Brief Experimental Analysis of Math Interventions: A Synthesis of Evidence","authors":"Nicole M. McKevett, Robin S. Codding","doi":"10.1177/1534508419883937","DOIUrl":"https://doi.org/10.1177/1534508419883937","url":null,"abstract":"Brief experimental analysis (BEA) is a quick method used to identify the function of student learning difficulties and match effective interventions to students’ needs. Extensive work has been done to explore the use of this methodology to determine effective reading interventions; however, a smaller number of published studies have examined the use of BEAs in math. The purpose of the current review was to identify all studies that have used BEA methodology in math. Fifteen studies that included 63 participants and used BEA methodology to identify the most effective math intervention for students were located. Results of the synthesis indicate that the majority of BEAs compared skill and performance interventions on computational fluency; however, the methodology across the included studies varied. Strengths and limitations of the research, in addition to implications for research and practice, are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2019-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508419883937","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46450147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Construction and Examination of Math Subskill Mastery Measures
Pub Date: 2019-12-31 | DOI: 10.1177/1534508419883947
A. Vanderheyden, C. Broussard
This study details the construction of parameters for generating subskill mastery math measures to be used for screening, intervention planning, progress monitoring, and proximal program evaluation. Parameters for generating assessment measures were built and tested to verify initial equivalence of generated measures, using potential digits correct as a proxy for task difficulty across generated measures. Generated measures met initial equivalence criteria and were subjected to further reliability analysis. Measures were generated and administered 1 week apart in fall and in winter to students in Grades K, 1, 3, 5, and 7. Thirty-four screening measures were examined for delayed alternate-form reliability, risk decision agreement, and interobserver agreement. Delayed alternate-form reliability values generally exceeded r = .80, and the measures could be reliably scored and yielded consistent risk decisions. Future research directions are discussed.
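The three reliability indices reported above can each be computed with standard tools; the following sketch uses simulated scores and an arbitrary risk cut point, not the study's measures or cut scores.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(4)
form_a = rng.normal(30, 8, 150)          # digits correct on form A, week 1
form_b = form_a + rng.normal(0, 3, 150)  # parallel form B, 1 week later

r, _ = stats.pearsonr(form_a, form_b)    # delayed alternate-form reliability
cut = 25                                 # hypothetical risk cut score
risk_a, risk_b = form_a < cut, form_b < cut
agreement = (risk_a == risk_b).mean()      # raw risk-decision agreement
kappa = cohen_kappa_score(risk_a, risk_b)  # chance-corrected agreement
print(f"r = {r:.2f}, agreement = {agreement:.2%}, kappa = {kappa:.2f}")
```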
{"title":"Construction and Examination of Math Subskill Mastery Measures","authors":"A. Vanderheyden, C. Broussard","doi":"10.1177/1534508419883947","DOIUrl":"https://doi.org/10.1177/1534508419883947","url":null,"abstract":"This study details the construction of parameters for generating subskill mastery math measures to be used for screening, intervention planning, progress monitoring, and proximal program evaluation. Parameters for generating assessment measures were built and tested to verify initial equivalence of generated measures using potential digits correct as a proxy for task difficulty across generated measures. Generated measures met initial equivalence criteria and were subjected to further reliability analysis. Measures were generated and administered 1 week apart at fall and winter to students in Grades K, 1, 3, 5, and 7. Thirty-four screening measures were examined for delayed alternate form reliability, risk decision agreement, and interobserver agreement. Delayed alternate form reliability values generally exceeded r = .80, could be reliably scored, and yielded consistent risk decisions. Future research directions were discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2019-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508419883947","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45256012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Inclusive Instruction for Students Receiving Special Education Services for Emotional Disturbance: A Survey Development Study
Pub Date: 2019-12-23 | DOI: 10.1177/1534508419895085
J. McKenna, Xiaoxia Newton, E. Bergman
Although the majority of students receiving special education services for emotional disturbance (ED) receive a significant amount of instruction in general education classrooms, evidence-based practices for educating students with ED in these settings have yet to be identified. As a result, school-based practitioners must primarily rely on professional recommendations and values when planning and delivering inclusive instruction for this student population. This study investigated the internal consistency and factor structure of a survey measure designed to obtain information on practitioner knowledge, use, and perceived effectiveness of recommended classroom-based practices for the inclusive instruction of students with ED. Results indicate adequate internal consistency. An exploratory factor analysis (EFA) revealed a four-factor structure: Behavior Support, Classroom Management, Differentiation, and Instructional Practices. Study limitations include a low response rate for the electronic survey and reliance on responses from practitioners from one geographic area. Future investigations are necessary to refine the survey instrument and to obtain data from teachers from other geographic areas.
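For readers who want to see what the four-factor extraction looks like in code, here is a minimal EFA sketch using the factor_analyzer package on a simulated survey matrix with a planted four-factor structure; the item counts and loadings are invented, not the survey's.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(5)
latent = rng.normal(size=(300, 4))  # four latent practice dimensions
loadings = np.zeros((4, 20))
for k in range(4):                  # five items per factor, simple structure
    loadings[k, k * 5:(k + 1) * 5] = rng.uniform(0.6, 0.9, 5)
items = latent @ loadings + rng.normal(scale=0.7, size=(300, 20))
df = pd.DataFrame(items, columns=[f"item_{i}" for i in range(20)])

fa = FactorAnalyzer(n_factors=4, rotation="oblimin")  # oblique rotation
fa.fit(df)
print(np.round(fa.loadings_, 2))  # pattern matrix; rows are items, columns factors
```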
{"title":"Inclusive Instruction for Students Receiving Special Education Services for Emotional Disturbance: A Survey Development Study","authors":"J. McKenna, Xiaoxia Newton, E. Bergman","doi":"10.1177/1534508419895085","DOIUrl":"https://doi.org/10.1177/1534508419895085","url":null,"abstract":"Although the majority of students receiving special education services for emotional disturbance (ED) receive a significant amount of instruction in general education classrooms, evidence-based practices for educating students with ED in these settings have yet to be identified. As a result, school-based practitioners must primarily rely on professional recommendations and values when planning and delivering inclusive instruction for this student population. This study investigated the internal consistency and factor structure of a survey measure designed to obtain information on practitioner knowledge, use, and perceived effectiveness of recommended classroom-based practices for the inclusive instruction of students with ED. Results indicate adequate internal consistency. An exploratory factor analysis (EFA) revealed a four-factor structure: Behavior Support, Classroom Management, Differentiation, and Instructional Practices. Study limitations include a low response rate for the electronic survey and reliance on responses from practitioners from one geographic area. Future investigations are necessary to refine the survey instrument and to obtain data from teachers from other geographic areas.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2019-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508419895085","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46361073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

The Factor Structure of Child Behavior Checklist Scores With Elementary School Students Referred to Counseling Within Low-Income Communities
Pub Date: 2019-12-23 | DOI: 10.1177/1534508419895094
Saundra M. Tabet, Mary K. Perleoni, Dalena Dillman Taylor, Viki P. Kelchner, Glenn W. Lambie
The Child Behavior Checklist (CBCL) is one of the most frequently used assessments of social, emotional, and behavioral functioning; however, previous research has noted inconsistency in the factor structure and items of the Child Behavior Checklist for Ages 6 to 18 Years (CBCL/6-18) when tested with diverse samples of client populations. Thus, the purpose of our investigation was to examine the factor structure of CBCL/6-18 scores (N = 459) with diverse American children enrolled in five Title I elementary schools in the Southeastern United States who were referred for school-based mental health counseling. We performed confirmatory factor analysis (CFA) and principal component analysis (PCA) on CBCL/6-18 scores to examine the factor structure and internal consistency reliability of the data. Results demonstrated an inadequate fit for the model, and further data analyses resulted in a three-factor, 32-item model (41.40% of the variance explained). The findings support a new conceptual framework of the CBCL/6-18 that provides a more parsimonious model when working with diverse populations, specifically children from low-income families.
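As a sketch of the PCA step described above, the following extracts three components from a standardized item matrix and reports the share of variance they explain; the 459 x 32 matrix here is random placeholder data, so the printed figure will not match the study's 41.40%.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
X = rng.normal(size=(459, 32))             # placeholder for 32 retained CBCL items
X_std = StandardScaler().fit_transform(X)  # standardize items before PCA

pca = PCA(n_components=3)
pca.fit(X_std)
print(f"variance explained by 3 components: {pca.explained_variance_ratio_.sum():.2%}")
```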
{"title":"The Factor Structure of Child Behavior Checklist Scores With Elementary School Students Referred to Counseling Within Low-Income Communities","authors":"Saundra M. Tabet, Mary K. Perleoni, Dalena Dillman Taylor, Viki P. Kelchner, Glenn W. Lambie","doi":"10.1177/1534508419895094","DOIUrl":"https://doi.org/10.1177/1534508419895094","url":null,"abstract":"The Child Behavior Checklist (CBCL) is one of the most frequently used assessments of social, emotional, and behavioral functioning; however, previous research has noted inconsistency in the factor structure and items included on the Child Behavior Checklist for Ages 6 to 18 Years (CBCL/6-18) when tested with diverse samples of client populations. Thus, the purpose of our investigation was to examine the factor structure of CBCL/6-18 scores (N = 459) with diverse American children referred to receive school-based mental health counseling enrolled in five Title I elementary schools in the Southeastern United States. We performed confirmatory factor analysis (CFA) and principal component analysis (PCA) on CBCL/6-18 scores to examine the factor structure and internal consistency reliability of the data. Results demonstrated an inadequate fit for model and further data analyses resulted in a three-factor, 32-item model (41.40% of the variance explained). Implications of the findings support a new conceptual framework of the CBCL/6-18 to provide a more parsimonious model when working with diverse populations, specifically children from low-income families.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2019-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508419895094","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44822872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}