Pub Date: 2022-05-19 | DOI: 10.1177/15345084221095440
Title: Assessing the Factor Structure and Measurement Invariance of the BASC-3 Behavioral and Emotional Screening System Student Form Across Race/Ethnicity and Gender
Authors: Evan J. Basting, Shereen C. Naser, Elizabeth A. Goncy
The BASC-3 Behavioral and Emotional Screening System Student Form (BESS SF) is the latest iteration of a widely used instrument for identifying students at behavioral and emotional risk. Measurement invariance across race/ethnicity and gender has not yet been established for the latest BESS SF. Using a sample of 737 U.S. urban fourth- to eighth-grade students, we tested competing models of the BESS SF to determine the best-fitting factor structure, and we tested for measurement equivalence by race/ethnicity (i.e., White, Black, Latinx) and gender (i.e., boys, girls). Consistent with prior findings, a bifactor structure of the BESS SF fit the data best, and results supported measurement equivalence across race/ethnicity and gender. These findings provide further support for using the BESS SF to conduct universal behavioral and emotional screening among diverse students. More research is needed in schools serving students with greater racial/ethnic and socioeconomic diversity.
{"title":"Assessing the Factor Structure and Measurement Invariance of the BASC-3 Behavioral and Emotional Screening System Student Form Across Race/Ethnicity and Gender","authors":"Evan J. Basting, Shereen C. Naser, Elizabeth A. Goncy","doi":"10.1177/15345084221095440","DOIUrl":"https://doi.org/10.1177/15345084221095440","url":null,"abstract":"The BASC-3 Behavioral and Emotional Screening System Student Form (BESS SF) is the latest iteration of a widely used instrument for identifying students at behavioral and emotional risk. Measurement invariance across race/ethnicity and gender for the latest BESS SF has not yet been established. Using a sample of 737 U.S. urban fourth- to eighth-grade students, we tested competing models of the BESS SF to determine the best-fitting factor structure. We also tested for measurement equivalence by race/ethnicity (i.e., White, Black, Latinx) and gender (i.e., boys, girls). Consistent with prior findings, we identified that a bifactor structure of the BESS SF best fit the data and supported measurement equivalence across race/ethnicity and gender. These findings provide further support for using the BESS SF to conduct universal behavioral and emotional screening among diverse students. More research is needed in schools serving students with greater racial/ethnic and socioeconomic diversity.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"48 1","pages":"43 - 51"},"PeriodicalIF":1.3,"publicationDate":"2022-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45281404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-05-19 | DOI: 10.1177/15345084221091172
Title: A Comparative Investigation of the Rasch/Guttman Scenario Approach: Measuring Learning Motivation Toward Mathematics in Elementary School
Authors: Parmaksiz Leonid, T. Kanonire
The Rasch/Guttman scenario (RGS) measurement approach is a promising test-development methodology. The purpose of this study was to compare an RGS measure of primary school students' motivation against a more traditional self-report scale. The Scenario Scale of Extrinsic Motivation toward Math (SSEM-M) and its traditional counterpart were developed and administered to a sample of 1,299 primary school students. Both measures demonstrated solid psychometric properties and sound evidence of validity. The comparative part of the research revealed notable differences in scores and factor structure: scenario-style items appear to provide slightly better measurement of motivation than traditionally composed items. Further research considering response-style and social-desirability effects may be of interest.
{"title":"A Comparative Investigation of the Rasch/Guttman Scenario Approach: Measuring Learning Motivation Toward Mathematics in Elementary School","authors":"Parmaksiz Leonid, T. Kanonire","doi":"10.1177/15345084221091172","DOIUrl":"https://doi.org/10.1177/15345084221091172","url":null,"abstract":"The Rasch/Guttman scenario (RGS) measurement approach is a promising test development methodology. The purpose of this study is to compare the RGS measure of primary school students’ motivation against more traditional self-report scales. The Scenario Scale of Extrinsic Motivation toward Math (SSEM-M) and its traditional counterpart were developed. The sample consisted of 1,299 primary school students. Both measures demonstrated solid psychometric properties and sound evidence of validity. The comparative part of the research revealed notable differences in scores and factor structure. Scenario item composition appears to provide a slightly better motivation measurement than traditional composition. Further research considering response style and social desirability effects may be of interest.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"48 1","pages":"34 - 42"},"PeriodicalIF":1.3,"publicationDate":"2022-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42970309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-05-03 | DOI: 10.1177/15345084221091173
Title: Skill Performance Assessment for Kindergarten Reading Screening Measures: Pilot Study
Authors: Breda V. O'Keeffe, Kaitlin Bundock, Kristin Kladis, Kat Nelson
Kindergarten reading screening measures typically identify many students as at risk who later meet criteria on important outcome measures (i.e., false positives). To address this issue, we evaluated a gated screening process that included accelerated progress monitoring followed by a simple goal/reward procedure (skill vs. performance assessment, SPA) to distinguish between skill and performance difficulties on Phoneme Segmentation Fluency (PSF) and Nonsense Word Fluency (NWF) in a multiple-baseline-across-students design. Nine kindergarten students scored below benchmark on PSF and/or NWF at the middle-of-year benchmark assessment. Across students and skills (n = 13 panels), nine skills met or exceeded benchmark during baseline (suggesting that additional exposure to the assessments was adequate), two exceeded benchmark during goal/reward procedures (suggesting that adding a motivation component was adequate), and two required extended exposure to goal/reward or skill-based review to exceed the benchmark. Across the 13 panels, 12 skills ended at or above the end-of-year benchmark on PSF and/or NWF, suggesting lower risk than predicted by middle-of-year screening. Because responding increased during baseline, experimental control was limited; however, these results suggest that simple progress monitoring may help reduce false positives after screening. Future research on this hypothesis is needed.
{"title":"Skill Performance Assessment for Kindergarten Reading Screening Measures: Pilot Study","authors":"Breda V. O’Keeffe, Kaitlin Bundock, Kristin Kladis, Kat Nelson","doi":"10.1177/15345084221091173","DOIUrl":"https://doi.org/10.1177/15345084221091173","url":null,"abstract":"Kindergarten reading screening measures typically identify many students as at risk who later meet criteria on important outcome measures (i.e., false positives). To address this issue, we evaluated a gated screening process that included accelerated progress monitoring, followed by a simple goal/reward procedure (skill vs. performance assessment, SPA) to distinguish between skill and performance difficulties on Phoneme Segmentation Fluency (PSF) and Nonsense Word Fluency (NWF) in a multiple baseline across students design. Nine kindergarten students scored below benchmark on PSF and/or NWF at the middle of year benchmark assessment. Across students and skills (n = 13 panels of the study), nine met/exceeded benchmark during baseline (suggesting additional exposure to the assessments was adequate), two exceeded benchmark during goal/reward procedures (suggesting adding a motivation component was adequate), and two required extended exposure to goal/reward or skill-based review to exceed the benchmark. Across panels of the baseline, 12 of 13 skills were at/above the end-of-year benchmark on PSF and/or NWF, suggesting lower risk than predicted by middle-of-year screening. Due to increasing baseline responding, experimental control was limited; however, these results suggest that simple progress monitoring may help reduce false positives after screening. Future research on this hypothesis is needed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"48 1","pages":"67 - 79"},"PeriodicalIF":1.3,"publicationDate":"2022-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46706013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-03-04 | DOI: 10.1177/15345084221081091
Title: Classification Accuracy of Early Literacy Assessments: Linking Preschool and Kindergarten Performance
Authors: Katherine A. Koller, Robin L. Hojnoski, Ethan R. Van Norman
A strong foundation in early literacy supports children's academic pursuits and affects personal, social, and economic outcomes. Examining the adequacy of early literacy assessments as predictors of future performance on important outcomes is therefore critical for identifying students at risk of reading problems. This study used Pearson product-moment correlations to explore the predictive validity of preschoolers' literacy skills, measured in the spring with the Individual Growth and Development Indicators 2.0 (IGDIs 2.0), for performance in the fall and winter of kindergarten as assessed by the Dynamic Indicators of Basic Early Literacy Skills Next Edition (DIBELS Next). In addition, the classification accuracy of student performance on the IGDIs 2.0 measures relative to the publisher-identified benchmark scores on the DIBELS Next assessment was examined by calculating sensitivity, specificity, positive and negative predictive power, overall correct classification, and kappa. Participants were 537 children from ethnically diverse backgrounds enrolled in an urban school district in the U.S. Northeast. Results indicated small to moderate relationships between the individual IGDIs 2.0 tasks and the DIBELS Next measures. Classification of student performance relative to the publisher-identified benchmark score on the DIBELS Next composite in the fall and winter of kindergarten revealed inadequate sensitivity; however, locally derived cut scores improved both sensitivity and specificity.
{"title":"Classification Accuracy of Early Literacy Assessments: Linking Preschool and Kindergarten Performance","authors":"Katherine A Koller, Robin L. Hojnoski, Ethan R. Van Norman","doi":"10.1177/15345084221081091","DOIUrl":"https://doi.org/10.1177/15345084221081091","url":null,"abstract":"A strong foundation in early literacy supports children’s academic pursuits and impacts personal, social, and economic outcomes. Therefore, examining the adequacy of early literacy assessments as predictors of future performance on important outcomes is critical for identifying students at risk of reading problems. This study explored the predictive validity of preschoolers’ literacy skills measured in the spring with the Individual Growth and Development Indicators 2.0 (IGDIs 2.0) to performance in the fall and winter of kindergarten as assessed by the Dynamic Indicators of Basic Early Literacy Skills Next Edition (DIBELS Next) using Pearson product-moment correlations. In addition, the classification accuracy of student performance on the IGDIs 2.0 measures to the publisher-identified benchmark scores on DIBELS Next assessment in kindergarten was examined by calculating the sensitivity, specificity, positive and negative predictive power, overall correct classification, and kappa. Participants were 537 children from ethnically diverse backgrounds enrolled in an urban school district in the U.S. Northeast region. Results indicated small to moderate relationships between the individual IGDIs 2.0 tasks and DIBELS Next measures. Classification accuracy of student performance on the IGDIs 2.0 measures to the publisher-identified benchmark score on DIBELS Next composite in the fall and winter of kindergarten revealed inadequate levels of sensitivity; however, locally derived cut-scores improved sensitivity and specificity.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"48 1","pages":"13 - 22"},"PeriodicalIF":1.3,"publicationDate":"2022-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47069761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-03-02 | DOI: 10.1177/15345084211061338
Title: Initial Development and Validation of the Measures of Stressors and Supports for Teachers (MOST)
Authors: Lia E. Sandilos, J. DiPerna
The creation of psychometrically sound assessments of teacher well-being is critical given the alarmingly high rates of burnout reported among U.S. educators. The present study sought to address this need by developing the Measures of Stressors and Supports for Teachers (MOST), a teacher-report questionnaire designed to assess ecological and psychological factors that affect teachers' professional well-being. To assess structural validity, the MOST was administered to a sample of K–12 educators (N = 218). Item analyses grounded in classical test theory and an exploratory factor analysis were conducted to examine the items and the factor structure of the MOST. The factor analysis yielded a 40-item, nine-factor structure (Parents, Colleagues, School Leadership and Belonging, Classroom Students, Students With Disabilities, Time Pressure, Professional Development, Safety, and Emotional State). Implications for further validation and use of the MOST are discussed.
{"title":"Initial Development and Validation of the Measures of Stressors and Supports for Teachers (MOST)","authors":"Lia E. Sandilos, J. DiPerna","doi":"10.1177/15345084211061338","DOIUrl":"https://doi.org/10.1177/15345084211061338","url":null,"abstract":"The creation of psychometrically sound assessments of teacher well-being is critical given the alarmingly high rates of teacher burnout reported among U.S. educators. The present study sought to address this need by developing the Measures of Stressors and Supports for Teachers (MOST), a teacher-report questionnaire designed to assess ecological and psychological factors that affect teachers’ professional well-being. To assess structural validity, the MOST was administered to a sample of K–12 educators (N = 218). Methods outlined in Classical Test Theory and exploratory factor analysis were conducted to examine items and assess the factor structure of the MOST. Factor analytic findings yielded a 40-item, nine-factor structure (Parents, Colleagues, School Leadership and Belonging, Classroom Students, Students With Disabilities, Time Pressure, Professional Development, Safety, and Emotional State). Implications for further validation and use of the MOST are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"47 1","pages":"187 - 197"},"PeriodicalIF":1.3,"publicationDate":"2022-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41916721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-03-01 | DOI: 10.1177/15345084211000564
Title: Erratum to "Monster, P.I.: Validation Evidence for an Assessment of Adolescent Language That Assesses Vocabulary Knowledge, Morphological Knowledge, and Syntactical Awareness"
Assessment for Effective Intervention, 47(1), 124.
Pub Date: 2022-02-03 | DOI: 10.1177/15345084211073601
Title: A Preliminary Investigation Into the Role of Planning in Early Writing Development
Authors: Amna A. Agha, Adrea J. Truckenmiller, J. Fine, Megan Perreault
The development of written expression involves transcription, text generation, and executive functions (including planning) interacting within working memory. However, executive functions are not formally measured in school-based written expression tasks, even though students' advance planning, a key manifestation of executive functions, can be examined. We explored the influence of advance planning on Grade 2 written expression using curriculum-based measurement in written expression (CBM-WE) probes with a convenience sample of 126 students in six U.S. classrooms. Controlling for transcription, which is typically the primary focus of instruction in the early elementary grades, we found that an advance-planning score explained significant additional variance in writing quantity and accuracy. These results suggest that planning may be a useful score to add to CBM-WE. Implications for assessment and for further research on the early development of planning and executive functions related to written expression are explored.
{"title":"A Preliminary Investigation Into the Role of Planning in Early Writing Development","authors":"Amna A. Agha, Adrea J. Truckenmiller, J. Fine, Megan Perreault","doi":"10.1177/15345084211073601","DOIUrl":"https://doi.org/10.1177/15345084211073601","url":null,"abstract":"The development of written expression includes transcription, text generation, and executive functions (including planning) interacting within working memory. However, executive functions are not formally measured in school-based written expression tasks, although there is an opportunity for examining students’ advance planning—a key manifestation of executive functions. We explore the influence of advance planning on Grade 2 written expression using curriculum-based measurement in written expression (CBM-WE) probes with a convenience sample of 126 students in six U.S. classrooms. Controlling for transcription, which is typically the primary focus of instruction in early elementary grades, we found that a score on advance planning explained additional significant variance in writing quantity and accuracy. Results support that planning may be an additional score to add to the use of CBM-WE. Implications for assessment and further research on the early development of planning and executive functions related to written expression are explored.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"48 1","pages":"3 - 12"},"PeriodicalIF":1.3,"publicationDate":"2022-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44572801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-01-31 | DOI: 10.1177/15345084211073604
Title: Differentiating Preschoolers With(Out) Social-Emotional and Behavioral Problems: Do We Have a Useful Tool?
Authors: S. Major, M. Seabra-Santos, Roy P. Martin
The early identification of social-emotional and behavioral problems in preschool children has become an important goal in research and clinical practice. A growing number of studies have been published in this field; however, most focus on behavior problems or on social skills, and few address both. The present study tested the validity of the Portuguese version of the Preschool and Kindergarten Behavior Scales–Second Edition (PKBS-2) in differentiating two groups of preschoolers on their social skills and behavior problems: 41 children at risk for disruptive behavior (BP group) and 41 children selected from the PKBS-2 normative sample (comparison group). Each child was rated on the PKBS-2 by parents and teachers. Parents rated children in the BP group as having fewer social skills and more behavior problems than children in the comparison group (p < .01 for the majority of PKBS-2 scores), and a similar pattern emerged in teachers' ratings. A discriminant function analysis identified the Social Cooperation and Externalizing Problem Behavior subscales as the most accurate in differentiating the two groups. These results support the Portuguese PKBS-2 as a valid assessment tool for practice and research with preschoolers.
{"title":"Differentiating Preschoolers With(Out) Social-Emotional and Behavioral Problems: Do We Have a Useful Tool?","authors":"S. Major, M. Seabra-Santos, Roy P. Martin","doi":"10.1177/15345084211073604","DOIUrl":"https://doi.org/10.1177/15345084211073604","url":null,"abstract":"The early identification of social-emotional and behavioral problems of preschool children has become an important goal in research and clinical practice. A growing number of studies have been published in this field; however, most focus on behavior problems, or on social skills, but few on both. The present study aims to test the validity of the Portuguese version of the Preschool and Kindergarten Behavior Scales–Second Edition (PKBS-2) in differentiating two groups of preschoolers regarding their social skills and behavior problems: 41 children at risk for disruptive behavior (BP group) and 41 selected from the PKBS-2 normative sample (comparison group). Each child was rated with the PKBS-2 by parents and teachers. Results showed that children in the BP group were rated by their parents as having fewer social skills and more behavior problems than the comparison group (p < .01, for the majority of the PKBS-2 scores). A similar pattern was found for teachers’ ratings. The discriminant functional analysis highlighted the Social Cooperation and the Externalizing Problem Behavior subscales as most accurate in differentiating the two groups. The usefulness of the PKBS-2 Portuguese version as a valid assessment tool available for practice and research with preschoolers was supported.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"47 1","pages":"198 - 208"},"PeriodicalIF":1.3,"publicationDate":"2022-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43409622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-31 | DOI: 10.1177/15345084211065977
Title: Development and Initial Validation of the Early Elementary Writing Rubric to Inform Instruction for Kindergarten and First-Grade Students
Authors: M. McKenna, R. Dedrick, H. Goldstein
This article describes the development of the Early Elementary Writing Rubric (EEWR), an analytic assessment designed to measure kindergarten and first-grade writing and inform educators' instruction. Crocker and Algina's (1986) approach to instrument development and validation guided the creation and refinement of the measure. Study 1 describes the development of the 10-item measure (response scale ranging from 0 = Beginning of Kindergarten to 5 = End of First Grade); educators participated in focus groups, expert panel review, cognitive interviews, and pretesting as part of the instrument development process. Study 2 evaluates measurement quality in terms of score reliability and validity. Writing samples produced by 634 students in kindergarten and first-grade classrooms were collected during pilot testing, and an exploratory factor analysis was conducted to evaluate the psychometric properties of the EEWR. A one-factor model fit the data for all writing genres, and all scoring elements were retained, with loadings ranging from 0.49 to 0.92. Internal consistency reliability was high, ranging from .89 to .91. Interrater reliability between the researcher and participants varied from poor to good, with means ranging from 52% to 72%. First-grade students received higher scores than kindergartners on all 10 scoring elements. The EEWR holds promise as an acceptable, useful, and psychometrically sound measure of early writing. Further iterative development is needed to investigate its ability to identify students' present levels of performance accurately and to determine its sensitivity to developmental and instructional gains.
{"title":"Development and Initial Validation of the Early Elementary Writing Rubric to Inform Instruction for Kindergarten and First-Grade Students","authors":"M. McKenna, R. Dedrick, H. Goldstein","doi":"10.1177/15345084211065977","DOIUrl":"https://doi.org/10.1177/15345084211065977","url":null,"abstract":"This article describes the development of the Early Elementary Writing Rubric (EEWR), an analytic assessment designed to measure kindergarten and first-grade writing and inform educators’ instruction. Crocker and Algina’s (1986) approach to instrument development and validation was used as a guide to create and refine the writing measure. Study 1 describes the development of the 10-item measure (response scale ranges from 0 = Beginning of Kindergarten to 5 = End of First Grade). Educators participated in focus groups, expert panel review, cognitive interviews, and pretesting as part of the instrument development process. Study 2 evaluates measurement quality in terms of score reliability and validity. Data from writing samples produced by 634 students in kindergarten and first-grade classrooms were collected during pilot testing. An exploratory factor analysis was conducted to evaluate the psychometric properties of the EEWR. A one-factor model fit the data for all writing genres and all scoring elements were retained with loadings ranging from 0.49 to 0.92. Internal consistency reliability was high and ranged from .89 to .91. Interrater reliability between the researcher and participants varied from poor to good and means ranged from 52% to 72%. First-grade students received higher scores than kindergartners on all 10 scoring elements. The EEWR holds promise as an acceptable, useful, and psychometrically sound measure of early writing. Further iterative development is needed to fully investigate its ability to accurately identify the present level of student performance and to determine sensitivity to developmental and instruction gains.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"47 1","pages":"220 - 233"},"PeriodicalIF":1.3,"publicationDate":"2021-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46713278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-28 | DOI: 10.1177/15345084211063533
Title: The Dyslexia Marker Test for Children: Development and Validation of a New Test
Authors: T. Nergård-Nilssen, O. Friborg
This article describes the development and psychometric properties of a new Dyslexia Marker Test for Children (Dysmate-C), designed to identify Norwegian students who need special instructional attention. The computerized test includes measures of letter knowledge, phoneme awareness, rapid automatized naming, working memory, decoding, and spelling skills. Data were collected from a sample of more than 1,100 students. Item response theory (IRT) was used for the psychometric evaluation, and principal component analysis was used to check unidimensionality. IRT was further used to select and remove items, which substantially shortened the test battery without sacrificing reliability or discriminating ability. Cronbach's alphas ranged between .84 and .95. Validity was established by examining how well the Dysmate-C identified students already diagnosed with dyslexia: logistic regression and receiver operating characteristic (ROC) curve analyses indicated good to excellent accuracy in separating children with dyslexia from typically developing children (area under the curve [AUC] = .92). The Dysmate-C meets standards for reliability and validity. Regression-based norms, voice-over instructions, easy scoring procedures, accurate timing, and automatic computation of scores make the test a useful tool, both as part of a screening procedure and as part of a diagnostic assessment. Limitations and practical implications are discussed.
{"title":"The Dyslexia Marker Test for Children: Development and Validation of a New Test","authors":"T. Nergård-Nilssen, O. Friborg","doi":"10.1177/15345084211063533","DOIUrl":"https://doi.org/10.1177/15345084211063533","url":null,"abstract":"This article describes the development and psychometric properties of a new Dyslexia Marker Test for Children (Dysmate-C). The test was designed to identify Norwegian students who need special instructional attention. The computerized test includes measures of letter knowledge, phoneme awareness, rapid automatized naming, working memory, decoding, and spelling skills. Data were collected from a sample of more than 1,100 students. Item response theory (IRT) was used for the psychometric evaluation, and principal component analysis for checking uni-dimensionality. IRT was further used to select and remove items, which significantly shortened the test battery without sacrificing reliability or discriminating ability. Cronbach’s alphas ranged between .84 and .95. Validity was established by examining how well the Dysmate-C identified students already diagnosed with dyslexia. Logistic regression and receiver operating characteristic (ROC) curve analyses indicated good to excellent accuracy in separating children with dyslexia from typical children (area under curve [AUC] = .92). The Dysmate-C meets the standards for reliability and validity. The use of regression-based norms, voice-over instructions, easy scoring procedures, accurate timing, and automatic computation of scores make the test a useful tool. It may be used as part of a screening procedure, and as part of a diagnostic assessment. Limitations and practical implications are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":"48 1","pages":"23 - 33"},"PeriodicalIF":1.3,"publicationDate":"2021-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44587318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}