Pub Date: 2018-03-01; Epub Date: 2017-03-16; DOI: 10.1037/spq0000199
Guadalupe Guzman, Taryn S Goldberg, H Lee Swanson
The published single-case design (SCD) research (N = 19 articles) on self-monitoring and reading performance was synthesized. The following inclusion criteria were used: (a) the study must have been peer-reviewed, (b) implemented an intervention targeting student self-monitoring of reading skills, (c) included data on at least 1 reading outcome, (d) included visual representation of the data, and (e) used an SCD to assess the topic of interest. A total of 67 participants, 45 males and 22 females, ranging in age from 7:8 to 18:7 (years:months), were included in the current meta-analysis. Ethnicity was reported for 42 students: 23 were Caucasian, 12 were African American, and 7 were Latino/Hispanic. Studies meeting What Works Clearinghouse (WWC) standards were compared with those not meeting standards. The Tau-U effect size (ES) method was the main calculation method used; however, Phi ES estimates are included for comparison purposes. Results indicated that self-monitoring had an overall significant large positive effect on the reading performance of K-12 students, Tau-U = 0.79, 95% confidence interval (CI) [0.64, 0.93], p < .0001. However, self-monitoring for studies that met WWC criteria yielded a larger overall positive ES, Tau-U = 0.93, 95% CI [0.79, 1.07], p < .0001. Although the current meta-analysis is limited to peer-reviewed SCD studies, the findings provide support for self-monitoring as an evidence-based reading intervention for students in Grades K-12. Furthermore, larger ES values were identified when consolidating studies based on WWC guidelines than when consolidating across all studies. (PsycINFO Database Record)
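Tau-U is a nonoverlap index for single-case designs. As an illustration only (the full Tau-U method also corrects for baseline trend, and the function name here is hypothetical), the basic A-versus-B nonoverlap component can be sketched as:

```python
def tau_ab(baseline, treatment):
    """Basic Tau (A vs. B nonoverlap): compare every baseline point
    with every treatment point; ties contribute zero. Ranges -1 to 1."""
    pos = sum(1 for a in baseline for b in treatment if b > a)
    neg = sum(1 for a in baseline for b in treatment if b < a)
    return (pos - neg) / (len(baseline) * len(treatment))
```

Read this way, a value near the reported 0.79 means the improving cross-phase pairs outnumber the deteriorating pairs by roughly 79 percentage points.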
A meta-analysis of self-monitoring on reading performance of K-12 students. School Psychology Quarterly, 33(1), 160-168.
Pub Date: 2018-03-01; Epub Date: 2017-08-31; DOI: 10.1037/spq0000222
Laura L Bailet, Cynthia Zettler-Greeley, Kandia Lewis
Home literacy activities influence children's emergent literacy progress and readiness for reading instruction. To help parents fulfill this role, we developed a new Emergent Literacy Screener (ELS) and conducted 2 studies of its psychometric properties with independent prekindergarten samples. For Study 1 (n = 812, Mage = 54.4 months, 49.4% male, 46.1% White), exploratory factor analyses (EFA) supported a 5-factor structure. EFA and item calibration supported the removal of 10 items from the original 30 test items. The resultant 20-item ELS demonstrated good reliability (Cronbach's alpha = .83) and a significant positive correlation, r = .50, p < .001, with a standardized emergent literacy measure, Get Ready to Read - Revised. For Study 2 (n = 959, Mage = 53.5 months, 52.3% male, 60.3% White), confirmatory factor analyses (CFA) supported a bifactor model, which captured direct effects of 5 specific subfactors and an overarching emergent literacy factor. Using a cut score of 15, the ELS demonstrated moderate sensitivity (.71) and specificity (.61). Negative predictive value was high, whereas positive predictive value was low. Overall, the ELS demonstrated acceptable psychometric characteristics for use by parents of prekindergarten children, providing a promising new tool for universal emergent literacy screening and an opportunity to identify where children are in their emergent literacy development. Implications for further research and practice are discussed. (PsycINFO Database Record)
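Sensitivity, specificity, and the predictive values at a cut score all follow from a 2x2 classification table. A minimal sketch (assuming scores at or below the cut screen positive, which the abstract does not specify; function and variable names are illustrative):

```python
def screening_metrics(scores, at_risk, cut=15):
    """Cross-tabulate screener decisions (score <= cut -> positive)
    against a reference-standard at-risk flag."""
    tp = fp = fn = tn = 0
    for score, risk in zip(scores, at_risk):
        positive = score <= cut
        if positive and risk:
            tp += 1          # true positive
        elif positive:
            fp += 1          # false positive
        elif risk:
            fn += 1          # false negative
        else:
            tn += 1          # true negative
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with the base rate of risk in the sample, which is one reason a screener can show high NPV but low PPV as reported here.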
Psychometric profile of an experimental Emergent Literacy Screener for preschoolers. School Psychology Quarterly, 33(1), 120-136.
Pub Date: 2017-12-01; Epub Date: 2017-04-03; DOI: 10.1037/spq0000205
Christopher James Anthony, James Clyde DiPerna
The Academic Competence Evaluation Scales-Teacher Form (ACES-TF; DiPerna & Elliott, 2000) was developed to measure student academic skills and enablers (interpersonal skills, engagement, motivation, and study skills). Although ACES-TF scores have demonstrated psychometric adequacy, the length of the measure may be prohibitive for certain applications in research and practice. Thus, the purpose of this project was to use item response theory to identify sets of maximally efficient items (SMIs) for each subscale of the ACES-TF that could inform the development of an abbreviated version. Results supported the reliability and precision of SMI scores. As such, the SMIs demonstrate promise to inform the development of an abbreviated version of the ACES-TF. (PsycINFO Database Record)
Identifying sets of maximally efficient items from the Academic Competence Evaluation Scales-Teacher Form. School Psychology Quarterly, 32(4), 552-559.
Pub Date: 2017-12-01; Epub Date: 2017-05-01; DOI: 10.1037/spq0000206
Benjamin G Solomon, Ole J Forsberg
Bayesian techniques have become increasingly present in the social sciences, fueled by advances in computer speed and the development of user-friendly software. In this paper, we propose the use of Bayesian Asymmetric Regression (BAR) to monitor intervention responsiveness when using Curriculum-Based Measurement (CBM) to assess oral reading fluency (ORF). An overview of Bayesian methods and their application to the problem-solving model is first presented, which is further illustrated by a case example. We conclude the paper with a Monte Carlo simulation study demonstrating the validity of BAR, as compared to the current standard of practice for CBM decision-making, ordinary least squares (OLS) regression. Results suggest that BAR is most advantageous with studies using small-to-moderate sample sizes, and when distributional information (such as the probability of intervention success) is of interest. (PsycINFO Database Record)
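The abstract does not detail the BAR estimator itself, but the OLS comparator it benchmarks against is the familiar least-squares slope fit to weekly ORF scores. A self-contained sketch (names and data values are invented for illustration):

```python
def ols_orf_slope(weeks, wcpm):
    """Closed-form OLS growth rate for an ORF progress-monitoring
    series: slope = cov(weeks, wcpm) / var(weeks), in words
    correct per minute gained per week."""
    n = len(weeks)
    mean_w = sum(weeks) / n
    mean_y = sum(wcpm) / n
    cov = sum((w - mean_w) * (y - mean_y) for w, y in zip(weeks, wcpm))
    var = sum((w - mean_w) ** 2 for w in weeks)
    return cov / var
```

A point estimate like this is all OLS yields; the appeal of a Bayesian alternative is a full posterior over the slope, from which quantities such as the probability of adequate growth can be read off directly.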
Bayesian asymmetric regression as a means to estimate and evaluate oral reading fluency slopes. School Psychology Quarterly, 32(4), 539-551.
Pub Date: 2017-09-01; Epub Date: 2017-06-29; DOI: 10.1037/spq0000211
Laura Baams, Craig A Talmage, Stephen T Russell
Because many school districts receive funding based on student attendance, absenteeism results in a high cost for the public education system. This study shows the direct links between bias-based bullying, school absenteeism because of feeling unsafe at school, and loss of funds for school districts in California. Data from the 2011-2013 California Healthy Kids Survey and the California Department of Education were utilized. Results indicate that annually, California school districts lose an estimated $276 million of unallocated funds because of student absences resulting from feeling unsafe at school. Experiences of bias-based bullying were significantly associated with student absenteeism, and the combination of these experiences resulted in a loss of funds to school districts. For example, the absence of students who experienced bullying based on their race or ethnicity resulted in a projected loss of $78 million in unallocated funds. These data indicate that in addition to fostering student safety and well-being, schools have the societal obligation and economic responsibility to prevent bias-based bullying and related absenteeism. (PsycINFO Database Record)
Economic costs of bias-based bullying. School Psychology Quarterly, 32(3), 422-433. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5578874/pdf/nihms882256.pdf
Pub Date: 2017-09-01; Epub Date: 2016-08-11; DOI: 10.1037/spq0000174
Marisa Malone, Dewey Cornell, Kathan Shukla
Educational authorities have questioned whether middle schools provide the best school climate for 7th and 8th grade students, and proposed that other grade configurations such as K-8th grade schools may provide a better learning environment. The purpose of this study was to compare 7th and 8th grade students' perceptions of 4 key features of school climate (disciplinary structure, student support, student engagement, and prevalence of teasing and bullying) in middle schools versus elementary or high schools. Multilevel multivariate modeling in a statewide sample of 39,036 7th and 8th grade students attending 418 schools revealed that students attending middle schools had a more negative perception of school climate than students in schools with other grade configurations. Seventh grade students placed in middle schools reported lower disciplinary structure and a higher prevalence of teasing and bullying in comparison to those in elementary schools. Eighth grade students in middle schools reported poorer disciplinary structure, lower student engagement, and a higher prevalence of teasing and bullying compared to those in high schools. These findings can guide school psychologists in identifying aspects of school climate that may be troublesome for 7th and 8th grade students in schools with different grade configurations. (PsycINFO Database Record)
Association of grade configuration with school climate for 7th and 8th grade students. School Psychology Quarterly, 32(3), 350-366.
Pub Date: 2017-09-01; Epub Date: 2016-08-08; DOI: 10.1037/spq0000175
Ethan R Van Norman, Peter M Nelson, David C Parker
Computer adaptive tests (CATs) hold promise to monitor student progress within multitiered systems of support. However, the relationship between how long and how often data are collected and the technical adequacy of growth estimates from CATs has not been explored. Given CAT administration times, it is important to identify optimal data collection schedules to minimize missed instructional time. We used simulation methodology to investigate how the duration and frequency of data collection influenced the reliability, validity, and precision of growth estimates from a math CAT. A progress monitoring dataset of 746 Grade 4, 664 Grade 5, and 400 Grade 6 students from 40 schools in the upper Midwest was used to generate model parameters. Across grades, 53% of students were female and 53% were White. Grade level was not as influential as the duration and frequency of data collection on the technical adequacy of growth estimates. Low-stakes decisions were possible after 14-18 weeks when data were collected weekly (420-540 min of assessment), 20-24 weeks when collected every other week (300-360 min of assessment), and 20-28 weeks (150-210 min of assessment) when data were collected once a month, depending on student grade level. The validity and precision of growth estimates improved when the duration and frequency of progress monitoring increased. Given the amount of time required to obtain technically adequate growth estimates in the present study, results highlight the importance of weighing the potential costs of missed instructional time relative to other types of assessments, such as curriculum-based measures. Implications for practice, research, as well as future directions are also discussed. (PsycINFO Database Record)
Technical adequacy of growth estimates from a computer adaptive test: Implications for progress monitoring. School Psychology Quarterly, 32(3), 379-391.
Pub Date: 2017-09-01; Epub Date: 2016-07-21; DOI: 10.1037/spq0000168
Michael J Furlong, Aileen Fullchange, Erin Dowdy
Student surveys are often used for school-based mental health screening; hence, it is critical to evaluate the authenticity of information obtained via the self-report format. The objective of this study was to examine the possible effects of mischievous response patterns on school-based screening results. The present study included 1,857 high school students who completed a schoolwide screening for complete mental health. Student responses were reviewed to detect possible mischievous responses and to examine their association with other survey results. Consistent with previous research, mischievous responding was evaluated by items that are legitimate to ask of all students (e.g., How much do you weigh? and How many siblings do you have?). Responses were considered "mischievous" when a student selected multiple extreme, unusual (less than 5% incidence) response options, such as weighing more than 225 pounds and having 10 or more siblings. Only 1.8% of the students responded in extreme ways to 2 or more of 7 mischievous response items. When compared with other students, the mischievous responders were less likely to declare that they answered items honestly, were more likely to finish the survey in less than 10 min, reported lower levels of life satisfaction and school connectedness, and reported higher levels of emotional and behavioral distress. When applying a dual-factor mental health screening framework to the responses, mischievous responders were less likely to be categorized as having complete mental health. Implications for school-based mental health screening are discussed. (PsycINFO Database Record)
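The flagging rule described above (extreme answers on 2 or more of 7 check items) is straightforward to express. A sketch with invented item names and response options:

```python
def is_mischievous(responses, extreme_options, threshold=2):
    """Count items on which the respondent chose a low-incidence
    'extreme' option; flag when the count reaches the threshold."""
    n_extreme = sum(
        1 for item, answer in responses.items()
        if answer in extreme_options.get(item, ())
    )
    return n_extreme >= threshold
```

In practice the extreme-option sets would be chosen empirically (options endorsed by fewer than 5% of respondents, per the abstract), not hard-coded.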
Effects of mischievous responding on universal mental health screening: I love rum raisin ice cream, really I do! School Psychology Quarterly, 32(3), 320-335.
Pub Date: 2017-09-01; Epub Date: 2017-05-25; DOI: 10.1037/spq0000207
Kristy Warmbold-Brann, Matthew K Burns, June L Preast, Crystal N Taylor, Lisa N Aguilar
The current study examined the effect of academic interventions and modifications on behavioral outcomes in a meta-analysis of 32 single-case design studies. Academic interventions included modifying task difficulty, providing instruction in reading, mathematics, or writing, and contingent reinforcement for academic performance. There was an overall small to moderate effect (ϕ = .56) on behavioral outcomes, with a stronger effect on increasing time on task (ϕ = .64) than on decreasing disruptive behavior (ϕ = .42). There was a small effect for using a performance-based contingent reinforcer (ϕ = .48). Interventions completed in an individual setting resulted in moderate to large effects on behavioral outcomes. Results of the current meta-analysis suggest that academic interventions can offer both positive academic and behavioral outcomes. Practical implications and suggestions for future research are included. (PsycINFO Database Record)
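Phi (ϕ) is the correlation coefficient for a 2x2 contingency table; the abstract does not say how the single-case data were dichotomized before computing it, so only the generic formula can be sketched here:

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi for the 2x2 table [[a, b], [c, d]]:
    (ad - bc) / sqrt((a+b)(c+d)(a+c)(b+d))."""
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0
```

Under this reading, ϕ = .56 reflects a moderate association between phase membership (baseline vs. intervention) and whether behavior fell above or below the chosen criterion.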
{"title":"Meta-analysis of the effects of academic interventions and modifications on student behavior outcomes.","authors":"Kristy Warmbold-Brann, Matthew K Burns, June L Preast, Crystal N Taylor, Lisa N Aguilar","doi":"10.1037/spq0000207","DOIUrl":"https://doi.org/10.1037/spq0000207","url":null,"abstract":"<p><p>The current study examined the effect of academic interventions and modifications on behavioral outcomes in a meta-analysis of 32 single-case design studies. Academic interventions included modifying task difficulty, providing instruction in reading, mathematics, or writing, and contingent reinforcement for academic performance. There was an overall small to moderate effect (ϕ = .56) on behavioral outcomes, with a stronger effect on increasing time on task (ϕ = .64) than on decreasing disruptive behavior (ϕ = .42). There was a small effect for using a performance-based contingent reinforcer (ϕ = .48). Interventions completed in an individual setting resulted in moderate to large effects on behavior outcomes. Results of the current meta-analysis suggest that academic interventions can offer both positive academic and behavioral outcomes. Practical implications and suggestions for future research are included. (PsycINFO Database Record</p>","PeriodicalId":88124,"journal":{"name":"School psychology quarterly : the official journal of the Division of School Psychology, American Psychological Association","volume":"32 3","pages":"291-305"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35024918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-09-01; Epub Date: 2017-04-17; DOI: 10.1037/spq0000203
Sarah Wollersheim Shervey, Lia E Sandilos, James C DiPerna, Pui-Wa Lei
The purpose of this study was to examine the social validity of the Social Skills Improvement System-Classwide Intervention Program (SSIS-CIP) for teachers in the primary grades. Participants included 45 first- and second-grade teachers who completed a 16-item social validity questionnaire during each year of the SSIS-CIP efficacy trial. Findings indicated that teachers generally perceived the SSIS-CIP as a socially valid and feasible intervention for the primary grades; however, teachers' ratings regarding ease of implementation and relevance and sequence demonstrated differences across grade levels in the second year of implementation. (PsycINFO Database Record)
{"title":"Social validity of the Social Skills Improvement System-Classwide Intervention Program (SSIS-CIP) in the primary grades.","authors":"Sarah Wollersheim Shervey, Lia E Sandilos, James C DiPerna, Pui-Wa Lei","doi":"10.1037/spq0000203","DOIUrl":"https://doi.org/10.1037/spq0000203","url":null,"abstract":"<p><p>The purpose of this study was to examine the social validity of the Social Skills Improvement System-Classwide Intervention Program (SSIS-CIP) for teachers in the primary grades. Participants included 45 first and second grade teachers who completed a 16-item social validity questionnaire during each year of the SSIS-CIP efficacy trial. Findings indicated that teachers generally perceived the SSIS-CIP as a socially valid and feasible intervention for primary grades; however, teachers' ratings regarding ease of implementation and relevance and sequence demonstrated differences across grade levels in the second year of implementation. (PsycINFO Database Record</p>","PeriodicalId":88124,"journal":{"name":"School psychology quarterly : the official journal of the Division of School Psychology, American Psychological Association","volume":"32 3","pages":"414-421"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34917274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}