Assessment of Self-Regulation at School Entry: A Literature Review of Existing Screening Tools and Suitability for the Aotearoa New Zealand Context
Pub Date: 2023-11-30 | DOI: 10.1177/07342829231219291
Ashleigh Barrett-Young, Rachel Martin, A. E. Clifford, Elizabeth Schaughency, Jimmy McLauchlan, Dione Healey
This literature review investigates tools used to assess self-regulation at school entry in order to inform recommendations for use in Aotearoa New Zealand. We were particularly interested in identifying self-regulation screening tools that had been developed from Indigenous frameworks, to enhance the likelihood of culturally empowering assessment. APA PsycInfo and Clarivate Web of Science databases were searched for articles on self-regulation screening at school entry. Screening tools were included if they met the following criteria: available in English or te reo Māori (the two predominant written languages of Aotearoa New Zealand); appropriate for children aged 5–6 years; and focused on self-regulation or containing a component assessing self-regulation. Thirty-nine screening tools that met the criteria were identified. Overall, most tools were developed from a Euro-American perspective, and many were deficit- and/or clinically focused. Issues with translating screening tools to other cultures are discussed, specifically in the context of Aotearoa New Zealand.
A Validity Study of the Digitized Version of the Rapid Automatized Naming Test
Pub Date: 2023-11-28 | DOI: 10.1177/07342829231218582
Sohyun An Kim, Rebecca Gotlieb, Laura V. Rhinehart, Veronica Pedroza, Maryanne Wolf
Rapid automatized naming (RAN) is a powerful predictor of reading fluency, and many digitized dyslexia screeners include RAN as an essential component. However, the validity of digitized RAN has not been established. Using a sample of 174 second-graders, this study tested (1) the comparability between paper and digitized versions of RAN and (2) the validity of the digitized version. We found that the paper and digital versions were highly correlated, and this correlation was consistent across students' reading levels. Further, the digital RAN predicted children's word reading proficiency as well as the paper version did. Moreover, the constructs measured by the paper and digital versions of RAN were comparable. We conclude that the digitized RAN is a valid alternative to the traditional paper version for this age group.
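As an illustration of the comparability and predictive-validity checks described in this abstract, the sketch below computes a paper-digital correlation and compares how well each version predicts word reading. It is a minimal sketch, not the authors' analysis; the DataFrame and the column names (paper_ran, digital_ran, word_reading) are hypothetical.

```python
# Minimal sketch of the comparability/validity checks described above.
# Assumes a pandas DataFrame `df` with hypothetical columns:
#   paper_ran, digital_ran (naming scores) and word_reading (proficiency score).
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr

def compare_ran_versions(df: pd.DataFrame) -> None:
    # 1) Comparability: correlation between paper and digitized RAN scores.
    r, p = pearsonr(df["paper_ran"], df["digital_ran"])
    print(f"Paper-digital correlation: r = {r:.2f} (p = {p:.3f})")

    # 2) Predictive validity: does each version predict word reading similarly?
    for version in ("paper_ran", "digital_ran"):
        model = sm.OLS(df["word_reading"], sm.add_constant(df[version])).fit()
        print(f"{version}: R^2 = {model.rsquared:.2f}")
```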
Psychometric Properties and Factor Structure of the Aggressive Student Culture Scale Administered to the Age 8 Growing Up in NZ Cohort
Pub Date: 2023-11-24 | DOI: 10.1177/07342829231218036
Qiongxi Zhang, Lisa Underwood, Elizabeth R. Peterson, J. Fenaughty, K. Waldie
The Aggressive Student Culture Scale (ASCS) is a survey designed to measure the extent to which New Zealand (NZ) students experience aggressive behaviours within the school environment. The aim of this study is to assess the psychometric properties of the ASCS in the multidisciplinary Growing Up in NZ longitudinal study. We used data from 4938 children from the Growing Up in NZ study to examine the psychometric properties of the ASCS for 8-year-old children. Confirmatory factor analysis was conducted, and measurement invariance was tested across sex, ethnicity, and deprivation levels. The ASCS tool comprises a single latent factor: aggressive student behaviour. The ASCS provides a satisfactory measure of students' experiences of aggression. Full measurement invariance was supported for child's sex, but only configural invariance was confirmed across ethnicity and area-level deprivation. Males reported higher levels of aggressive experiences than females. The one-factor model offers an excellent fit to our data with good internal consistency. Comparisons across sex are valid; however, direct comparisons across ethnicity and deprivation levels should be approached with caution. We recommend replication studies and encourage further research involving participants from different age groups to better understand the factor structure across diverse demographic variables.
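A minimal sketch of a one-factor CFA and a per-group (configural) check of the kind described above, assuming the semopy package and hypothetical item names (item1..item6) and grouping column (sex). Formal metric/scalar invariance testing would additionally constrain loadings and intercepts to equality across groups and is not shown here.

```python
# Sketch of a one-factor CFA and a configural check across groups, using semopy.
# Item names (item1..item6) and the grouping column `sex` are hypothetical.
import pandas as pd
from semopy import Model, calc_stats

MODEL_DESC = """
aggression =~ item1 + item2 + item3 + item4 + item5 + item6
"""

def fit_cfa(data: pd.DataFrame) -> pd.DataFrame:
    model = Model(MODEL_DESC)
    model.fit(data)
    return calc_stats(model)  # one-row table of fit indices (CFI, RMSEA, ...)

def configural_check(data: pd.DataFrame, group_col: str = "sex") -> None:
    # Configural invariance: the same factor structure should fit in each group.
    for group, subset in data.groupby(group_col):
        stats = fit_cfa(subset.drop(columns=group_col))
        print(group, stats[["CFI", "RMSEA"]].round(3).to_dict("records"))
```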
{"title":"Psychometric Properties and Factor Structure of the Aggressive Student Culture Scale Administered to the Age 8 Growing Up in NZ Cohort","authors":"Qiongxi Zhang, Lisa Underwood, Elizabeth R. Peterson, J. Fenaughty, K. Waldie","doi":"10.1177/07342829231218036","DOIUrl":"https://doi.org/10.1177/07342829231218036","url":null,"abstract":"The Aggressive Student Culture Scale (ASCS) is a survey designed to measure the extent to which New Zealand (NZ) students experience aggressive behaviours within the school environment. The aim of this study is to assess the psychometric properties of the ASCS in the multidisciplinary Growing Up in NZ longitudinal study. We used data from 4938 children from the Growing Up in NZ study to examine the psychometric properties of ASCS for 8-year-old children. Confirmatory factor analysis was conducted, and measurement invariance was tested across sex, ethnicity, and deprivation levels. The ASCS tool comprises a single latent factor: aggressive student behaviour. The ASCS provides an adequate and satisfactory measure for student aggression experiences. Full measurement invariance was supported for child’s sex, but only configural invariance was confirmed across ethnicity and area-level deprivation. Males reported higher levels of aggressive experiences than females. The one-factor model structure offers an excellent fit to our data with good internal consistency. Comparisons across sex are valid; however, direct comparisons across ethnicity and deprivation levels should be approached with caution. We recommend replication studies and encourage further research involving participants from different age groups to better understand the factor structure across diverse demographic variables.","PeriodicalId":51446,"journal":{"name":"Journal of Psychoeducational Assessment","volume":"100 ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139240205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measurement Invariance of the Social, Academic, and Emotional Behavior Risk Screener - Teacher Rating Scale
Pub Date: 2023-11-23 | DOI: 10.1177/07342829231217771
Annie Goerdt, Faith Miller, Danielle Dupuis, Meg Olson
School-based universal screening in the social, emotional, and behavioral (SEB) domains allows for the early identification of students in need of SEB support. Importantly, equitable assessment in universal screening for the SEB domains is critical for accurate and ethical data-based decision-making. Measurement invariance is one method for examining potential inequities in assessment tools, permitting evaluation of whether assessments or assessment items perform differently across groups. As such, this study utilized multi-group confirmatory factor analysis to evaluate the extent of measurement invariance for a commonly used universal screening tool for the SEB domains: the Social, Academic, and Emotional Behavior Risk Screener - Teacher Rating Scale (SAEBRS-TRS). The sample consisted of 1949 students in kindergarten through fourth grade in a Midwestern suburban school district. Examination of factor structures indicated that the bifactor model yielded adequate fit, and this model was utilized for measurement invariance testing. Multi-group confirmatory factor analysis results provided preliminary evidence that the SAEBRS-TRS displays invariance across a variety of student characteristics. Specifically, results supported configural and metric/scalar invariance of the bifactor model across the student characteristics of racial or ethnic identity, sex assigned at birth, and eligibility for free or reduced-price lunch. Yet future research is needed to corroborate these findings. Limitations, implications for practice, and directions for future research are discussed.
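Once a configural model and a constrained (metric/scalar) model have been fit, invariance is typically judged by a chi-square difference test and the change in CFI. The sketch below shows that comparison step only, with placeholder fit values; it is not the study's actual model output.

```python
# Sketch of how nested invariance models are compared once each has been fit:
# a chi-square difference test plus the change in CFI (values are placeholders).
from scipy.stats import chi2

def compare_nested(chisq_constrained, df_constrained, chisq_free, df_free,
                   cfi_constrained, cfi_free):
    d_chisq = chisq_constrained - chisq_free
    d_df = df_constrained - df_free
    p = chi2.sf(d_chisq, d_df)            # p-value of the chi-square difference
    d_cfi = cfi_free - cfi_constrained    # a drop > .01 is a common invariance flag
    return {"delta_chi2": d_chisq, "delta_df": d_df, "p": p, "delta_CFI": d_cfi}

# Hypothetical example: configural (free) vs. metric/scalar (constrained) model.
print(compare_nested(612.4, 410, 580.1, 392, 0.951, 0.958))
```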
{"title":"Measurement Invariance of the Social, Academic, and Emotional Behavior Risk Screener - Teacher Rating Scale","authors":"Annie Goerdt, Faith Miller, Danielle Dupuis, Meg Olson","doi":"10.1177/07342829231217771","DOIUrl":"https://doi.org/10.1177/07342829231217771","url":null,"abstract":"School-based universal screening in the social, emotional, and behavioral (SEB) domains allows for the early identification of students in need of SEB support. Importantly, equitable assessment in universal screening for the SEB domains is critical to engage in accurate and ethical data-based decision-making. Measurement invariance is one method for examining potential inequities in assessment tools, permitting the ability to evaluate which assessments or assessment items perform differently across groups. As such, this study utilized multi-group confirmatory factor analysis to evaluate the extent of measurement invariance for a commonly used universal screening tool for the SEB domains: the Social, Academic, and Emotional Behavior Risk Screener - Teacher Rating Scale (SAEBRS-TRS). The sample consisted of 1949 students in kindergarten through fourth grade in a Midwest, suburban school district. Examination of factor structures indicated the bifactor model yielded adequate fit and was utilized for measurement invariance testing. Multi-group confirmatory factor analysis results provided preliminary evidence that the SAEBRS-TRS displays invariance across a variety of student characteristics. Specifically, results supported configural and metric/scalar invariance of the bifactor model across the student characteristics of racial or ethnic identity, sex assigned at birth, and eligibility for free or reduced-price lunch. Yet, future research is needed to corroborate these findings. Limitations, implications for practice, and directions for future research are discussed.","PeriodicalId":51446,"journal":{"name":"Journal of Psychoeducational Assessment","volume":"35 7","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139245915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Preliminary Psychometric Assessment of STEM Attitude Measure for U.S. High School Students With Disabilities
Pub Date: 2023-11-23 | DOI: 10.1177/07342829231218767
Scott H. Yamamoto
This was the first study in which a psychometrically validated STEM measure, the "Student STEM" (S-STEM), was studied for high school students with disabilities (HSSWD). Data were collected from 229 HSSWD in a western state and analyzed using Cronbach's alpha, McDonald's omega, and exploratory factor analysis (EFA). The alpha and omega results showed less internal consistency in HSSWD attitudes toward mathematics than in their attitudes toward the other three S-STEM subscales. The EFA results showed preliminary indications of construct validity for the engineering and technology subscale. Limitations of the study are its modest sample size and some measurement imprecision. Implications of the study are the need for further psychometric analyses of the S-STEM for HSSWD and for educators to focus on increasing STEM opportunities for HSSWD.
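A minimal sketch of the reliability and EFA steps named above: Cronbach's alpha computed from its classical formula, plus an exploratory factor analysis via the factor_analyzer package. The item columns passed in are hypothetical subscale items, not the S-STEM's actual items.

```python
# Sketch of the reliability and EFA steps described above.
# The DataFrame of item responses uses hypothetical column names.
import pandas as pd
from factor_analyzer import FactorAnalyzer

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def run_efa(items: pd.DataFrame, n_factors: int = 4) -> pd.DataFrame:
    # Exploratory factor analysis with an oblique (oblimin) rotation.
    fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")
    fa.fit(items)
    return pd.DataFrame(fa.loadings_, index=items.columns)
```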
{"title":"Preliminary Psychometric Assessment of STEM Attitude Measure for U.S. High School Students With Disabilities","authors":"Scott H. Yamamoto","doi":"10.1177/07342829231218767","DOIUrl":"https://doi.org/10.1177/07342829231218767","url":null,"abstract":"This was the first study in which a psychometrically validated STEM measure, the “Student STEM” (S-STEM), was studied for HSSWD. This study also represented the first time a psychometrically validated STEM measure, the “Student STEM” (S-STEM), was studied for HSSWD. Data were collected from 229 HSSWD in a western state and analyzed using Cronbach’s Alpha, McDonald’s Omega, and exploratory factor analysis (EFA). The Alpha and Omega results showed less internal consistency in HSSWD attitudes toward mathematics than their attitudes toward the other three S-STEM subscales. The EFA results showed preliminary indications of construct validity for the engineering and technology subscale. Limitations of the study are its modest sample size and some measurement imprecision. Implications of the study are further psychometric analyses of the S-STEM for HSSWD and educators focusing on increasing STEM opportunities for HSSWD.","PeriodicalId":51446,"journal":{"name":"Journal of Psychoeducational Assessment","volume":"55 9","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139244366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Centering Student Voice to Inform Teacher Practice and Research: Validation of an Asset-Based Identities Measure
Pub Date: 2023-11-17 | DOI: 10.1177/07342829231216778
Francesca López, DeLeon Gray, Mildred Boveda, Dynah Oviedo, Nilam Ram, Lorenzo López
Collectively, measures created for research use, whether self-report or observational, have contributed to evidence underscoring the importance of ensuring teachers develop knowledge and skills to engage in asset-based pedagogy. Teachers who wish to enhance their practice, however, do not have a way to elicit students' perspectives on their instruction with a validated instrument designed to do so. Given that student identity is a robust predictor of minoritized students' academic and non-academic outcomes, this study reflects the development and validation of the Asset-Based Identities Measure, which centers student voice to formatively inform teacher practice. The iterative design of the study included expert educators, students, and a larger validation sample of N = 860 students. Cognitive interviews and focus groups contributed to the refinement of the pilot measure across three identity domains. Factor structures were examined through confirmatory factor analyses, resulting in a robust measure. Use of the measure is discussed.
Validity of Intelligence Assessment Among the Roma Minority Population
Pub Date: 2023-11-13 | DOI: 10.1177/07342829231213791
Kristína Czekóová, Tomáš Urbánek
An accurate assessment of cognitive abilities in populations that differ from the majority in cultural and linguistic characteristics is one of the main challenges in cognitive testing. Previously developed methods for assessing the validity of cognitive scores in individuals with diverse backgrounds, such as the Culture-Language Interpretative Matrix (C-LIM), have not been empirically substantiated. We tested the applicability of the C-LIM in the European context by comparing selected test scores from the Woodcock-Johnson-IV Test of Cognitive Abilities (WJ-IV) between Roma children aged 7–11 years (n = 399) and their counterparts from the normative population (n = 131). The largest differences were detected in WJ-IV tests requiring abstract reasoning and manipulation of complex signs. Furthermore, the C-LIM did not reliably discriminate between our groups, and its use appears to be inappropriate for making diagnostic decisions about children from populations that do not traditionally rely on processes such as categorical thinking, abstract reasoning, and generalization.
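The group comparison reported here can be illustrated with a short sketch: a Welch's t-test and Cohen's d on a selected WJ-IV score, assuming a hypothetical DataFrame with group labels "roma" and "normative". This is illustrative only and does not reproduce the study's analysis.

```python
# Sketch of a group comparison on a selected test score, with an effect size.
# Column and group names are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

def compare_groups(df: pd.DataFrame, score_col: str, group_col: str = "group"):
    g1 = df.loc[df[group_col] == "roma", score_col].to_numpy()
    g2 = df.loc[df[group_col] == "normative", score_col].to_numpy()
    t, p = ttest_ind(g1, g2, equal_var=False)  # Welch's t-test
    return {"t": t, "p": p, "d": cohens_d(g1, g2)}
```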
{"title":"Validity of Intelligence Assessment Among the Roma Minority Population","authors":"Kristína Czekóová, Tomáš Urbánek","doi":"10.1177/07342829231213791","DOIUrl":"https://doi.org/10.1177/07342829231213791","url":null,"abstract":"An accurate assessment of cognitive abilities in populations that differ from the majority in cultural and linguistic characteristics is one of the main challenges in cognitive testing. Previously developed methods for assessment of the validity of cognitive scores in individuals with diverse backgrounds, such as the Culture-Language Interpretative Matrix (C-LIM), have not been empirically substantiated. We tested the applicability of the C-LIM in the European context, by comparing selected test scores from the Woodcock-Johnson-IV Test of Cognitive Abilities (WJ-IV) between Roma children aged 7–11 years ( n = 399) and their counterparts from the normative population ( n = 131). The largest differences were detected in WJ-IV tests requiring abstract reasoning and manipulation with complex signs. Furthermore, the C-LIM did not reliably discriminate between our groups and its use appears to be inappropriate for making diagnostic decisions about children from populations that do not traditionally rely on processes such as categorical thinking, abstract reasoning, and generalization.","PeriodicalId":51446,"journal":{"name":"Journal of Psychoeducational Assessment","volume":"27 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136347524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Eco-Generativity Scale-Short Form: A Multidimensional Item Response Theory Analysis in University Students
Pub Date: 2023-11-05 | DOI: 10.1177/07342829231212320
Annamaria Di Fabio, Andrea Svicher
The Eco-Generativity Scale (EGS) is a recently developed 28-item scale derived from a 4-factor higher-order model (ecological generativity, social generativity, environmental identity, and agency/pathways). The aim of this study was to develop a short form of the EGS to facilitate its use with university students (N = 779), who will determine the future of our world's ecosystem. Data analyses included removing misfitting items and assessing the psychometric properties of the EGS short form. The Eco-Generativity Scale-Short Form (EGS-SF) showed a good fit for a higher-order model composed of four factors and sixteen items (four items per factor).
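The study removed misfitting items using multidimensional IRT; as a simpler classical stand-in, the sketch below flags weak items within a subscale via corrected item-total correlations. The item names and the .30 cut-off are assumptions for illustration, not part of the authors' procedure.

```python
# Classical proxy for item screening when shortening a subscale:
# corrected item-total correlations (item vs. sum of the remaining items).
import pandas as pd

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    out = {}
    for col in items.columns:
        rest = items.drop(columns=col).sum(axis=1)
        out[col] = items[col].corr(rest)
    return pd.Series(out).sort_values()

# Items with low corrected item-total correlations (e.g., < .30) would be
# candidates for removal when building a short form.
```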
{"title":"The Eco-Generativity Scale-Short Form: A Multidimensional Item Response Theory Analysis in University Students","authors":"Annamaria Di Fabio, Andrea Svicher","doi":"10.1177/07342829231212320","DOIUrl":"https://doi.org/10.1177/07342829231212320","url":null,"abstract":"The Eco-Generativity Scale (EGS) is a recently developed 28-item scale derived from a 4-factor higher-order model (ecological generativity, social generativity, environmental identity, and agency/pathways). The aim of this study was to develop a short-scale version of the EGS to facilitate its use with university students ( N = 779) who will determine the future of our world’s ecosystem. Data analyses included removing misfitting items and assessing the psychometric properties of the EGS short form. The Eco-Generativity Scale-Short Form (EGS-SF) showed a good fit for a higher-order model composed of four factors and sixteen items (four items for each factor).","PeriodicalId":51446,"journal":{"name":"Journal of Psychoeducational Assessment","volume":"96 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135725935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Construction and Validation of a Chinese Translation of the Devereux Early Childhood Assessment, Second Edition (DECA-P2)
Pub Date: 2023-11-03 | DOI: 10.1177/07342829231210032
Angela F. Y. Siu, Chrysa P. C. KEUNG, Alastair H. K. TO
This study analyzed the psychometric properties of a Chinese version of the teacher-reported Devereux Early Childhood Assessment, Second Edition (DECA-P2) using a sample of 246 children aged between 2 and 6 years old. Confirmatory factor analysis was used to examine its construct validity. Reliability was evaluated based on the internal consistency of the scale items, and discriminant and convergent validity were assessed using the Strengths and Difficulties Questionnaire. The findings provide emerging evidence of a four-factor structure (i.e., attachment, initiative, self-regulation, and behavioral concern) and support the use of this Chinese DECA-P2 as a screening instrument to identify social-emotional strengths and behavioral problems among Chinese preschool children. The limitations of this study and its implications concerning the dimensionality of the Chinese DECA-P2 for future research are discussed.
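A minimal sketch of the convergent/discriminant validity step: correlating DECA-P2 scale scores with SDQ scores. The column names and the expected direction of the correlations are assumptions for illustration, not the study's reported values.

```python
# Sketch of convergent/discriminant validity checks: correlations between
# DECA-P2 scale scores and SDQ scores. Column names are hypothetical.
import pandas as pd

DECA_SCALES = ["attachment", "initiative", "self_regulation", "behavioral_concern"]
SDQ_SCALES = ["sdq_prosocial", "sdq_total_difficulties"]

def validity_correlations(df: pd.DataFrame) -> pd.DataFrame:
    # Convergent validity: protective-factor scales would be expected to correlate
    # positively with SDQ prosocial behaviour; discriminant validity: weakly or
    # negatively with SDQ total difficulties.
    return df[DECA_SCALES + SDQ_SCALES].corr().loc[DECA_SCALES, SDQ_SCALES]
```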
{"title":"Construction and Validation of a Chinese Translation of the Devereux Early Childhood Assessment, Second Edition (DECA-P2)","authors":"Angela F. Y. Siu, Chrysa P. C. KEUNG, Alastair H. K. TO","doi":"10.1177/07342829231210032","DOIUrl":"https://doi.org/10.1177/07342829231210032","url":null,"abstract":"This study analyzed the psychometric properties of a Chinese version of the teacher-reported Devereux Early Childhood Assessment, Second Edition (DECA-P2) using a sample of 246 children aged between 2 and 6 years old. Confirmatory factor analysis was used to examine its construct validity. Reliability was evaluated based on the internal consistency of the scale items, as well as discriminant and convergent validities were assessed using the Strengths and Difficulties Questionnaire. The findings provide emerging evidence of a four-factor structure (i.e., attachment, initiative, self-regulation, and behavioral concern) and support the use of this Chinese DECA-P2 as a screening instrument to identify social-emotional strengths and behavioral problems among Chinese preschool children. The limitations of this study and its implications concerning the dimensionality of the Chinese DECA-P2 for future research are discussed.","PeriodicalId":51446,"journal":{"name":"Journal of Psychoeducational Assessment","volume":"41 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135873690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing an Extended Version of the Not-So-Simple View of Writing Model in School-Aged Students With Attention-Deficit/Hyperactivity Disorder
Pub Date: 2023-11-02 | DOI: 10.1177/07342829231211965
Michael Matta
Students with Attention-Deficit/Hyperactivity Disorder (ADHD) are struggling writers. Yet no comprehensive model has been validated to explain their poor writing outcomes. This study aims to test whether an extended version of the Not-So-Simple View of Writing (NSVW) model can describe the effects of key abilities on writing performance in students with ADHD. The sample included students with and without ADHD who completed cognitive and academic measures in the Colorado Twin Project. A multi-group structural equation modeling approach revealed that multiple broad cognitive abilities predicted student writing performance and that basic writing skills predicted more advanced writing skills. Model fit was excellent both for a model with writing as a single latent variable (fully latent) and for a model with writing as interrelated manifest variables (partially latent). Furthermore, students with and without ADHD demonstrated comparable patterns of relationships among the variables in the model. Implications for the assessment of writing difficulties in students with ADHD are discussed.
{"title":"Assessing an Extended Version of the Not-So-Simple View of Writing Model in School-Aged Students With Attention-Deficit/Hyperactivity Disorder","authors":"Michael Matta","doi":"10.1177/07342829231211965","DOIUrl":"https://doi.org/10.1177/07342829231211965","url":null,"abstract":"Students with Attention-Deficit/Hyperactivity Disorder (ADHD) are struggling writers. Yet no comprehensive model has been validated to explain their poor writing outcomes. This study aims to test whether an extended version of the Not-So-Simple View of Writing (NSVW) model can describe the effects of key abilities on writing performance in students with ADHD. The sample included students with and without ADHD who completed cognitive and academic measures in the Colorado Twin Project. A Multi-Group Structural Equation Model approach revealed that multiple broad cognitive abilities predicted student writing performance and basic writing skills predicted more advanced writing skills. Model fit was excellent both for a model with writing as a single latent variable (fully latent) and as interrelated manifest variables (partially latent). Furthermore, students with and without ADHD demonstrated comparable patterns of relationships among the variables in the model. Implications for the assessment of writing difficulties in students with ADHD are discussed.","PeriodicalId":51446,"journal":{"name":"Journal of Psychoeducational Assessment","volume":"24 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135876614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}