Effects of Providing Teachers With Tools for Implementing Assessment-Based Differentiated Reading Instruction in Second Grade
Pub Date: 2021-05-20 | DOI: 10.1177/15345084211014926
Martin T. Peters, Karin Hebbecker, Elmar Souvignier
Monitoring learning progress enables teachers to address students’ interindividual differences and to adapt instruction to students’ needs. We investigated whether using learning progress assessment (LPA) or using a combination of LPA and prepared material to help teachers implement assessment-based differentiated instruction resulted in improved reading skills for students. The study was conducted in second-grade classrooms in general primary education, and participants (N = 33 teachers and N = 619 students) were assigned to one of three conditions: a control group (CG); a first intervention group (LPA), which received LPA only; or a second intervention group (LPA-RS), which received a combination of LPA and material for differentiated reading instruction (the “reading sportsman”). At the beginning and the end of one school year, students’ reading fluency and reading comprehension were assessed. Compared with business-as-usual reading instruction (the CG), providing teachers with LPA or both LPA and prepared material did not lead to higher gains in reading competence. Furthermore, no significant differences between the LPA and LPA-RS conditions were found. Corresponding analyses for lower- and higher-achieving students also revealed no differences between the treatment groups. Results are discussed regarding the implementation of LPA and reading instruction in general education.
{"title":"Effects of Providing Teachers With Tools for Implementing Assessment-Based Differentiated Reading Instruction in Second Grade","authors":"Martin T. Peters, Karin Hebbecker, Elmar Souvignier","doi":"10.1177/15345084211014926","DOIUrl":"https://doi.org/10.1177/15345084211014926","url":null,"abstract":"Monitoring learning progress enables teachers to address students’ interindividual differences and to adapt instruction to students’ needs. We investigated whether using learning progress assessment (LPA) or using a combination of LPA and prepared material to help teachers implement assessment-based differentiated instruction resulted in improved reading skills for students. The study was conducted in second-grade classrooms in general primary education, and participants (N = 33 teachers and N = 619 students) were assigned to one of three conditions: a control group (CG); a first intervention group (LPA), which received LPA only; or a second intervention group (LPA-RS), which received a combination of LPA and material for differentiated reading instruction (the “reading sportsman”). At the beginning and the end of one school year, students’ reading fluency and reading comprehension were assessed. Compared with business-as-usual reading instruction (the CG), providing teachers with LPA or both LPA and prepared material did not lead to higher gains in reading competence. Furthermore, no significant differences between the LPA and LPA-RS conditions were found. Corresponding analyses for lower- and higher-achieving students also revealed no differences between the treatment groups. Results are discussed regarding the implementation of LPA and reading instruction in general education.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2021-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/15345084211014926","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47190712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Why Does Construct Validity Matter in Measuring Implementation Fidelity? A Methodological Case Study
Pub Date: 2021-03-15 | DOI: 10.1177/1534508421998772
W. van Dijk, A. Huggins-Manley, Nicholas A. Gage, Holly B. Lane, Michael D. Coyne
In reading intervention research, implementation fidelity is assumed to be positively related to student outcomes, but the methods used to measure fidelity are often treated as an afterthought. Fidelity has been conceptualized and measured in many different ways, suggesting a lack of construct validity. One aspect of construct validity is the fidelity index of a measure. This methodological case study examined how different decisions in fidelity indices influence the relative rank ordering of individuals on the construct of interest and influence our perception of the relation between the construct and intervention outcomes. Data for this study came from a large state-funded project to implement multi-tiered systems of support for early reading instruction. Analyses were conducted to determine whether the different fidelity indices are stable in the relative rank ordering of participants and whether fidelity indices of dosage and adherence data influence researcher decisions on model building within a multilevel modeling framework. Results indicated that the fidelity indices resulted in different relations to outcomes, with the most commonly used indices for both dosage and adherence performing worst. The choice of index should receive considerable thought during the design phase of an intervention study.
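To make the rank-ordering issue concrete, here is a minimal Python sketch, using entirely hypothetical adherence data (the 20 implementers, 12 sessions, and 8 steps are invented, not the study's), of how two common adherence indices computed from the same observations can order implementers differently:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical record: 20 implementers x 12 sessions x 8 intervention steps,
# where 1 = step observed as implemented and 0 = not implemented.
base_rate = rng.uniform(0.5, 0.95, size=(20, 1, 1))
adherence = rng.binomial(1, base_rate, size=(20, 12, 8))

# Index A: overall percentage of steps implemented across all sessions.
overall_pct = adherence.mean(axis=(1, 2))

# Index B: proportion of sessions in which ALL steps were implemented
# (an "all-or-nothing" session criterion).
all_or_nothing = (adherence.min(axis=2) == 1).mean(axis=1)

# If the two indices ranked implementers identically, rho would be 1.0.
rho, p = spearmanr(overall_pct, all_or_nothing)
print(f"Rank-order agreement between indices: rho = {rho:.2f} (p = {p:.3f})")
```

When rho falls well below 1, models relating fidelity to outcomes can reach different conclusions depending solely on which index was chosen.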
{"title":"Why Does Construct Validity Matter in Measuring Implementation Fidelity? A Methodological Case Study","authors":"W. van Dijk, A. Huggins-Manley, Nicholas A. Gage, Holly B. Lane, Michael D. Coyne","doi":"10.1177/1534508421998772","DOIUrl":"https://doi.org/10.1177/1534508421998772","url":null,"abstract":"In reading intervention research, implementation fidelity is assumed to be positively related to student outcomes, but the methods used to measure fidelity are often treated as an afterthought. Fidelity has been conceptualized and measured in many different ways, suggesting a lack of construct validity. One aspect of construct validity is the fidelity index of a measure. This methodological case study examined how different decisions in fidelity indices influence relative rank ordering of individuals on the construct of interest and influence our perception of the relation between the construct and intervention outcomes. Data for this study came from a large state-funded project to implement multi-tiered systems of support for early reading instruction. Analyses were conducted to determine whether the different fidelity indices are stable in relative rank ordering participants and if fidelity indices of dosage and adherence data influence researcher decisions on model building within a multilevel modeling framework. Results indicated that the fidelity indices resulted in different relations to outcomes with the most commonly used fidelity indices for both dosage and adherence being the worst performing. The choice of index to use should receive considerable thought during the design phase of an intervention study.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2021-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508421998772","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46447335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating the Psychometric Properties of the Social-Emotional Learning Scale
Pub Date: 2021-01-06 | DOI: 10.1177/1534508420984522
Christopher L. Thomas, Staci M. Zolkoski, S. Sass
Educators and educational support staff are becoming increasingly aware of the importance of systematic efforts to support students’ social and emotional growth. Logically, the success of social-emotional learning programs depends upon the ability of educators to assess students’ ability to process and utilize social-emotional information and to use data to guide programmatic revisions. Therefore, the purpose of the current examination was to provide evidence of the structural validity of the Social-Emotional Learning Scale (SELS), a freely available measure of social-emotional learning, within Grades 6 to 12. Students (N = 289, 48% female, 43.35% male, 61% Caucasian) completed the SELS and the Strengths and Difficulties Questionnaire. Confirmatory factor analyses of the SELS failed to support the multidimensional factor structure identified in prior investigations. The results of an exploratory factor analysis suggest a reduced 16-item version of the SELS captures a unidimensional social-emotional construct. Furthermore, our results provide evidence of the internal consistency and concurrent validity of the reduced-length version of the instrument. Our discussion highlights the implications of the findings for social and emotional learning efforts in education and for promoting evidence-based practice.
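As a companion to the internal-consistency evidence mentioned above, the following sketch computes Cronbach's alpha for a simulated score matrix; the 289 x 16 dimensions mirror the abstract, but the responses are randomly generated stand-ins, not SELS data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulated 1-5 Likert responses driven by a single latent trait,
# consistent with a unidimensional 16-item scale.
rng = np.random.default_rng(1)
trait = rng.normal(size=(289, 1))
noise = rng.normal(scale=0.8, size=(289, 16))
responses = np.clip(np.round(3 + trait + noise), 1, 5)

print(f"alpha = {cronbach_alpha(responses):.2f}")
```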
{"title":"Investigating the Psychometric Properties of the Social-Emotional Learning Scale","authors":"Christopher L. Thomas, Staci M. Zolkoski, S. Sass","doi":"10.1177/1534508420984522","DOIUrl":"https://doi.org/10.1177/1534508420984522","url":null,"abstract":"Educators and educational support staff are becoming increasingly aware of the importance of systematic efforts to support students’ social and emotional growth. Logically, the success of social-emotional learning programs depends upon the ability of educators to assess student’s ability to process and utilize social-emotional information and use data to guide programmatic revisions. Therefore, the purpose of the current examination was to provide evidence of the structural validity of the Social-Emotional Learning Scale (SELS), a freely available measure of social-emotional learning, within Grades 6 to 12. Students (N = 289, 48% female, 43.35% male, 61% Caucasian) completed the SELS and the Strengths and Difficulties Questionnaire. Confirmatory factor analyses of the SELS failed to support a multidimensional factor structure identified in prior investigations. The results of an exploratory factor analysis suggest a reduced 16-item version of the SELS captures a unidimensional social-emotional construct. Furthermore, our results provide evidence of the internal consistency and concurrent validity of the reduced-length version of the instrument. Our discussion highlights the implications of the findings to social and emotional learning educational efforts and promoting evidence-based practice.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2021-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508420984522","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46107581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Meta-Analysis of Validity and Review of Alternate Form Reliability and Slope for Curriculum-Based Measurement in Science and Social Studies
Pub Date: 2020-12-14 | DOI: 10.1177/1534508420978457
Sarah J. Conoyer, W. Therrien, Kristen K. White
Meta-analysis was used to examine curriculum-based measurement in the content areas of social studies and science. Nineteen studies published between 1998 and 2020 were reviewed to determine the overall mean correlation for criterion validity and to examine alternate-form reliability and slope coefficients. An overall mean correlation of .59 was found for criterion validity; however, there was significant heterogeneity across studies, suggesting curriculum-based measure (CBM) format or content area may affect findings. Alternate-form reliability coefficients reported across CBM formats ranged from low to high, between .21 and .89. Studies investigating slopes mostly used vocabulary-matching formats and reported a range of .12 to .65 correct items per week, with a mean of .34. Our findings suggest that additional research on the validity, reliability, and slope of these measures is warranted.
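The pooled criterion-validity estimate can be illustrated with the standard Fisher-z aggregation; the per-study correlations and sample sizes below are invented for illustration, and this fixed-effect sketch ignores the between-study heterogeneity the review reports (a random-effects model would be the better match in practice):

```python
import numpy as np

# Hypothetical per-study criterion-validity correlations and sample sizes.
r = np.array([0.45, 0.62, 0.71, 0.38, 0.66, 0.55])
n = np.array([48, 120, 35, 90, 60, 75])

# Fisher z transform, inverse-variance weights (var(z) ~ 1 / (n - 3)),
# then back-transform the weighted mean to the correlation metric.
z = np.arctanh(r)
weights = n - 3
z_bar = np.average(z, weights=weights)
r_bar = np.tanh(z_bar)

print(f"Pooled correlation: r = {r_bar:.2f}")
```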
{"title":"Meta-Analysis of Validity and Review of Alternate Form Reliability and Slope for Curriculum-Based Measurement in Science and Social Studies","authors":"Sarah J. Conoyer, W. Therrien, Kristen K. White","doi":"10.1177/1534508420978457","DOIUrl":"https://doi.org/10.1177/1534508420978457","url":null,"abstract":"Meta-analysis was used to examine curriculum-based measurement in the content areas of social studies and science. Nineteen studies between the years of 1998 and 2020 were reviewed to determine overall mean correlation for criterion validity and examine alternate-form reliability and slope coefficients. An overall mean correlation of .59 was found for criterion validity; however, there was significant heterogeneity across studies, suggesting curriculum-based measure (CBM) format or content area may affect findings. Low to high alternative form reliability correlation coefficients were reported across CBM formats between .21 and .89. Studies investigating slopes included mostly vocabulary-matching formats and reported a range from .12 to .65 correct items per week with a mean of .34. Our findings suggest that additional research in the development of these measures in validity, reliability, and slope is warranted.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508420978457","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47043950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Differential Item and Test Functioning of the SRSS-IE12 Across Race, Ethnicity, Gender, and Elementary Level
Pub Date: 2020-12-04 | DOI: 10.1177/1534508420976830
Brian Barger, Emily C. Graybill, Andrew T. Roach, K. Lane
This study used item response theory (IRT) methods to investigate group differences in responses to the 12-item Student Risk Screening Scale–Internalizing and Externalizing (SRSS-IE12) in a sample of 3,837 U.S. elementary school students. Using factor analysis and graded response models, we examined the factor structure and the item and test functioning of the SRSS-IE12. The SRSS-IE12 internalizing and externalizing factors reflected the hypothesized two-factor model. IRT analyses indicated that SRSS-IE12 items and tests measure internalizing and externalizing traits similarly across students from different race, ethnicity, gender, and elementary level (K–Grade 2 vs. Grades 3–5) groups. Moreover, the mostly negligible differential item functioning (DIF) and differential test functioning (DTF) observed suggest these scales render equitable trait ratings. Collectively, the results provide further support for the SRSS-IE12 for universal screening in racially diverse elementary schools.
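For readers unfamiliar with the graded response model used in this study, the sketch below computes GRM category probabilities from the usual logistic boundary functions; the item parameters are hypothetical, not SRSS-IE12 estimates:

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category probabilities for one graded-response-model item.

    theta : latent trait values, shape (n,)
    a     : item discrimination (scalar)
    b     : ordered thresholds, shape (k - 1,) for a k-category item
    """
    theta = np.asarray(theta, dtype=float)[:, None]
    # P*(j): cumulative probability of responding in category j or above.
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))            # (n, k-1)
    upper = np.hstack([np.ones((theta.shape[0], 1)), p_star])  # P*(0) = 1
    lower = np.hstack([p_star, np.zeros((theta.shape[0], 1))]) # P*(k) = 0
    return upper - lower                                       # (n, k)

# A hypothetical 4-category item (e.g., a 0-3 behavior rating).
probs = grm_category_probs(theta=[-1.0, 0.0, 1.5],
                           a=1.7, b=np.array([-0.8, 0.2, 1.1]))
print(probs.round(3))  # each row sums to 1
```

In this framework, a basic DIF check compares such item parameters estimated separately by group (e.g., by gender or race/ethnicity) and tests whether the resulting category curves diverge.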
{"title":"Differential Item and Test Functioning of the SRSS-IE12 Across Race, Ethnicity, Gender, and Elementary Level","authors":"Brian Barger, Emily C. Graybill, Andrew T. Roach, K. Lane","doi":"10.1177/1534508420976830","DOIUrl":"https://doi.org/10.1177/1534508420976830","url":null,"abstract":"This study used item response theory (IRT) methods to investigate group differences in responses to the 12-item Student Risk Screening Scale–Internalizing and Externalizing (SRSS-IE12) in a sample of 3,837 U.S. elementary school students. Using factor analysis and graded response models from IRT methods, we examined the factor structure and the item and test functioning of the SRSS-IE12. The SRSS-IE12 internalizing and externalizing factors reflected the hypothesized two-factor model. IRT analyses indicated that SRSS-IE12 items and tests measure internalizing and externalizing traits similarly across students from different race, ethnicity, gender, and elementary level (K–Grade 2 vs. Grades 3–5) groups. Moreover, the mostly negligible differential item functioning (DIF) and differential test functioning (DTF) observed suggest these scales render equitable trait ratings. Collectively, the results provide further support for the SRSS-IE12 for universal screening in racially diverse elementary schools.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508420976830","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44364242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Outcome Measurement of School-Based SEL Intervention Follow-Up Studies
Pub Date: 2020-12-01 | DOI: 10.1177/1534508419862619
Sarah K. Ura, Sara Castro-Olivo, A. d’Abreu
Recent meta-analyses confirm that social–emotional learning (SEL) interventions are effective in increasing academic, social, and emotional outcomes via direct skills instruction. With skill development serving as a primary mechanism of change in SEL interventions, we argue for the accurate measurement of skills as an important component of SEL research. Using the Collaborative for Academic, Social, and Emotional Learning (CASEL) model, we evaluate 111 studies included in a recent meta-analysis to determine the match between constructs targeted in interventions and SEL skill competencies, as well as the measurement of skills and the instruments used to evaluate programs. Findings indicate a general trend toward the measurement of broad outcomes rather than the skills taught in programs, and limited measurement across the CASEL five-competency model. The utility of measuring outcomes specific to the competencies taught in interventions across SEL domains is discussed.
{"title":"Outcome Measurement of School-Based SEL Intervention Follow-Up Studies","authors":"Sarah K. Ura, Sara Castro-Olivo, A. d’Abreu","doi":"10.1177/1534508419862619","DOIUrl":"https://doi.org/10.1177/1534508419862619","url":null,"abstract":"Recent meta-analyses confirm that social–emotional learning (SEL) interventions are effective in increasing academic, social, and emotional outcomes via direct skills instruction. With skill development serving as a primary mechanism of change in SEL interventions, we argue for the accurate measurement of skills as an important component of SEL research. Using the Collaborative for Academic, Social, and Emotional Learning (CASEL) model, we evaluate 111 studies included in a recent meta-analysis to determine the match between constructs targeted in interventions and SEL skill competency, as well as the measurement of skills and instruments used to evaluate programs. Findings indicate a general trend in the measurement of broad outcomes, rather than skills taught in programs, and limited measurement across CASEL five-competency model. Utility of measuring outcomes specific to competencies taught in intervention across SEL domains are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508419862619","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46305877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examining the Technical Adequacy of the Social, Academic, and Emotional Behavior Risk Screener
Pub Date: 2020-12-01 | DOI: 10.1177/1534508419857225
S. Whitley, Yojanna Cuenca-Carlino
Many schools attempt to identify and serve students at risk for poor mental health outcomes within a multi-tiered system of support (MTSS). Universal screening within an MTSS requires technically adequate tools. The Social, Academic, and Emotional Behavior Risk Screener (SAEBRS) has been put forth as a technically adequate screener. Researchers have examined the factor structure, diagnostic accuracy, criterion validity, and internal consistency of SAEBRS data. However, previous research has not examined its temporal stability or replicated the criterion validity results with a racially/ethnically diverse urban elementary school sample. This study examined the test–retest reliability, convergent validity, and predictive validity of teacher-completed SAEBRS ratings for a racially/ethnically diverse group of students enrolled in first through fifth grade in an urban elementary school. Reliability analyses resulted in significant test–retest reliability coefficients across four weeks for all SAEBRS scales. Furthermore, nonsignificant paired-samples t tests were observed, with the exception of the third-grade Emotional subscale. Validity analyses yielded significant concurrent and predictive Pearson correlation coefficients between SAEBRS ratings, oral reading fluency, and office discipline referrals. Limitations and implications of the results are discussed.
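The two reliability analyses named here, a test–retest correlation and a paired-samples t test, can be reproduced on any two-occasion data set; the scores below are simulated stand-ins for SAEBRS ratings, not the study's data:

```python
import numpy as np
from scipy.stats import pearsonr, ttest_rel

rng = np.random.default_rng(2)

# Simulated total scores for the same 120 students rated four weeks apart.
time1 = rng.normal(loc=40, scale=8, size=120)
time2 = 0.85 * (time1 - 40) + 40 + rng.normal(scale=4, size=120)

# Test-retest reliability: correlation between the two rating occasions.
r, p_r = pearsonr(time1, time2)

# Paired-samples t test: does the mean level shift between occasions?
t, p_t = ttest_rel(time1, time2)

print(f"test-retest r = {r:.2f} (p = {p_r:.3g})")
print(f"paired t = {t:.2f} (p = {p_t:.3g})")
```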
{"title":"Examining the Technical Adequacy of the Social, Academic, and Emotional Behavior Risk Screener","authors":"S. Whitley, Yojanna Cuenca-Carlino","doi":"10.1177/1534508419857225","DOIUrl":"https://doi.org/10.1177/1534508419857225","url":null,"abstract":"Many schools attempt to identify and service students at risk for poor mental health outcomes within a multi-tiered system of support (MTSS). Universal screening within a MTSS requires technically adequate tools. The Social, Academic, and Emotional Behavior Risk Screener (SAEBRS) has been put forth as a technically adequate screener. Researchers have examined the factor structure, diagnostic accuracy, criterion validity, and internal consistency of SAEBRS data. However, previous research has not examined its temporal stability or replicated the criterion validity results with a racially/ethnically diverse urban elementary school sample. This study examined the test–retest reliability, convergent validity, and predictive validity of teacher-completed SAEBRS ratings with racially/ethnically diverse group students enrolled in first through fifth grade in an urban elementary school. Reliability analyses resulted in significant test–retest reliability coefficients across four weeks for all SAEBRS scales. Furthermore, nonsignificant paired samples t tests were observed with the exception of the third-grade Emotional subscale. Validity analyses yielded significant concurrent and predictive Pearson correlation coefficients between SAEBRS ratings, oral reading fluency, and office discipline referrals. Limitations and implications of the results are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508419857225","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43432366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring the Influences of Assessment Method, Intervention Steps, Intervention Sessions, and Observation Timing on Treatment Fidelity Estimates
Pub Date: 2020-12-01 | DOI: 10.1177/1534508419857228
Melissa A. Collier‐Meek, L. Sanetti, Lindsay M. Fallon, Sandra M. Chafouleas
Treatment fidelity data are critical to evaluate intervention effectiveness, yet there are only general guidelines regarding treatment fidelity measurement. Initial investigations have found treatment fidelity data collected via direct observation to be more reliable than data collected via permanent product or self-report. However, the comparison of assessment methods is complicated by the intervention steps accounted for, the timing of observations, and the intervention sessions accounted for, all of which may affect treatment fidelity estimates. In this study, we compared direct observation and permanent product data to evaluate the effects of these varied assessment and data collection decisions on treatment fidelity estimates in three classrooms engaged in a group contingency intervention. Findings revealed that treatment fidelity estimates differ not only across assessment methods but also depending on the intervention steps assessed, the intervention sessions accounted for, and the timing of observations. Implications for treatment fidelity assessment research, for reporting in intervention research broadly, and for implementation assessment in practice are described.
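A small numerical sketch of the central observation: the same observation record yields different fidelity estimates depending on which steps and sessions are counted. All numbers below are fabricated for illustration, and the "core steps" and "early sessions" subsets are hypothetical analyst choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical observation record: 10 sessions x 6 intervention steps,
# 1 = step implemented. Implementation drifts downward over sessions.
session_rates = np.linspace(0.95, 0.70, 10)[:, None]
fidelity = (rng.uniform(size=(10, 6)) < session_rates).astype(int)

# The same record yields different "treatment fidelity" numbers depending
# on which steps and which sessions the assessor chooses to count.
all_data = fidelity.mean()
core_steps_only = fidelity[:, :3].mean()      # only the first 3 "core" steps
early_sessions_only = fidelity[:3, :].mean()  # only the first 3 sessions

print(f"all steps and sessions: {all_data:.2f}")
print(f"core steps only:        {core_steps_only:.2f}")
print(f"early sessions only:    {early_sessions_only:.2f}")
```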
{"title":"Exploring the Influences of Assessment Method, Intervention Steps, Intervention Sessions, and Observation Timing on Treatment Fidelity Estimates","authors":"Melissa A. Collier‐Meek, L. Sanetti, Lindsay M. Fallon, Sandra M. Chafouleas","doi":"10.1177/1534508419857228","DOIUrl":"https://doi.org/10.1177/1534508419857228","url":null,"abstract":"Treatment fidelity data are critical to evaluate intervention effectiveness, yet there are only general guidelines regarding treatment fidelity measurement. Initial investigations have found treatment fidelity data collected via direct observation to be more reliable than data collected via permanent product or self-report. However, the comparison of assessment methods is complicated by the intervention steps accounted for, observation timing, and intervention sessions accounted for, which may impact treatment fidelity estimates. In this study, we compared direct observation and permanent product data to evaluate these varied assessment and data collection decisions on treatment fidelity data estimates in three classrooms engaged in a group contingency intervention. Findings revealed that treatment fidelity estimates, in addition to being different across assessment method, are, in fact, different depending on the intervention steps assessed, intervention sessions accounted for, and observation timing. Implications for treatment fidelity assessment research, reporting in intervention research broadly, and implementation assessment in practice are described.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508419857228","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43750273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distinct and Overlapping Dimensions of Reading Motivation in Commonly Used Measures in Schools
Pub Date: 2020-12-01 | DOI: 10.1177/1534508418819793
S. Neugebauer, Ken A. Fujimoto
The current investigation addresses critiques about motivation terminology and instrumentation by examining together three commonly used reading motivation assessments in schools. This study explores the distinctiveness and redundancies of the constructs operationalized in these reading motivation assessments with 222 middle school students, using item response theory. Results support distinctions between constructs grounded in self-determination theory, social cognitive theory, and expectancy-value theory, as well as conceptual overlap among constructs associated with competence beliefs and social sources of motivation across different theoretical traditions. Educational benefits of multidimensional and unidimensional interpretations of the reading motivation constructs covered in these instruments are discussed.
{"title":"Distinct and Overlapping Dimensions of Reading Motivation in Commonly Used Measures in Schools","authors":"S. Neugebauer, Ken A. Fujimoto","doi":"10.1177/1534508418819793","DOIUrl":"https://doi.org/10.1177/1534508418819793","url":null,"abstract":"The current investigation addresses critiques about motivation terminology and instrumentation by examining together three commonly used reading motivation assessments in schools. This study explores the distinctiveness and redundancies of the constructs operationalized in these reading motivation assessments with 222 middle school students, using item response theory. Results support distinctions between constructs grounded in self-determination theory, social cognitive theory, and expectancy-value theory as well as conceptual overlap, among constructs associated with competence beliefs and social sources of motivation across different theoretical traditions. Educational benefits of multidimensional and unidimensional interpretations of reading motivation constructs covered in these instruments are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508418819793","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41420811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Is a Picture Worth 1,000 Words? Investigating Fraction Magnitude Knowledge Through Analysis of Student Representations
Pub Date: 2020-12-01 | DOI: 10.1177/1534508418820697
Stephanie Morano, P. Riccomini
The present study examines the features and quality of visual representations (VRs) created by middle school students with learning disabilities and difficulties in mathematics in response to a released fraction item from the National Assessment of Educational Progress (NAEP). Relations between VR quality and scores on other measures of fraction knowledge are also investigated. Results show that students used circular area models most frequently to represent the NAEP item, but used bar models most accurately. Based on results, bar models may be the most efficient and effective area model VRs for use in fractions instruction. Representation quality was associated with problem-solving accuracy, as well as with performance on fraction number line estimation and fraction magnitude comparison. Implications for practice are discussed.
{"title":"Is a Picture Worth 1,000 Words? Investigating Fraction Magnitude Knowledge Through Analysis of Student Representations","authors":"Stephanie Morano, P. Riccomini","doi":"10.1177/1534508418820697","DOIUrl":"https://doi.org/10.1177/1534508418820697","url":null,"abstract":"The present study examines the features and quality of visual representations (VRs) created by middle school students with learning disabilities and difficulties in mathematics in response to a released fraction item from the National Assessment of Educational Progress (NAEP). Relations between VR quality and scores on other measures of fraction knowledge are also investigated. Results show that students used circular area models most frequently to represent the NAEP item, but used bar models most accurately. Based on results, bar models may be the most efficient and effective area model VRs for use in fractions instruction. Representation quality was associated with problem-solving accuracy, as well as with performance on fraction number line estimation and fraction magnitude comparison. Implications for practice are discussed.","PeriodicalId":46264,"journal":{"name":"ASSESSMENT FOR EFFECTIVE INTERVENTION","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1534508418820697","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44378502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}