In this report we examine crime rates for young adults who experienced Milwaukee's citywide voucher program as high school students and for a comparable group of their peers who attended public schools. Using unique data collected as part of a longitudinal evaluation of the program, we consider criminal activity by youth initially exposed to voucher schools and by those in public schools at the same time. We also compare subsequent criminal activity by students who stayed in the voucher program through 12th grade with that of students who were in public schools over the same period. We show that mere exposure to private schooling through a voucher is associated with lower rates of criminal activity, but the relationship is not robust to different analytic samples or measures of crime. We find a more consistent, statistically significant negative relationship between staying in the voucher program through 12th grade and criminal activity: persistent voucher students commit fewer crimes. These results hold when controlling for a rich set of student demographics, test scores, and parental characteristics. We conclude that brief exposure to private schooling through a voucher program may not have a significant impact on criminal activity, whereas persistently attending a private school through a voucher program can decrease subsequent criminal activity, especially for males.
{"title":"The School Choice Voucher: A 'Get Out of Jail' Card?","authors":"Corey A. DeAngelis, Patrick Wolf","doi":"10.2139/ssrn.2743541","DOIUrl":"https://doi.org/10.2139/ssrn.2743541","url":null,"abstract":"In this report we examine crime rates for young adults who experienced Milwaukee's citywide voucher program as high school students and a comparable group of their peers who had been public school students. Using unique data collected as part of a longitudinal evaluation of the program, we consider criminal activity by youth initially exposed to voucher schools and those in public schools at the same time. We also consider subsequent criminal activity by the students that stayed in the voucher program through 12th grade compared to those who were in public schools for the same period. We show that the mere exposure to private schooling through a voucher is associated with lower rates of criminal activity but the relationship is not robust to different analytic samples or measures of crime. We find a more consistent statistically significant negative relationship between students that stayed in the voucher program through 12th grade and criminal activity (meaning persistent voucher students commit fewer crimes). These results are apparent when controlling for a robust set of student demographics, test scores, and parental characteristics. We conclude that merely being exposed to private schooling for a short time through a voucher program may not have a significant impact on criminal activity, though persistently attending a private school through a voucher program can decrease subsequent criminal activity, especially for males.","PeriodicalId":336198,"journal":{"name":"University of Arkansas Department of Education Reform Research Paper Series","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129028067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anna J. Egalite, Jonathan N. Mills, Patrick J. Wolf
The question of how school choice programs affect the racial stratification of schools is highly salient in the field of education policy. We use a student-level panel data set to analyze the impacts of the Louisiana Scholarship Program (LSP) on racial segregation in public and private schools. This targeted school voucher program provides funding for low-income, mostly minority students in the lowest-graded public schools to enroll in participating private schools. Our analysis indicates that the vast majority (82%) of LSP transfers have reduced racial segregation in the voucher students' former public schools. LSP transfers have marginally increased segregation in the participating private schools, however, where just 45% of transfers have reduced racial segregation. In school districts under federal desegregation orders, voucher transfers result in a large reduction in traditional public schools' racial segregation levels and have no discernible impact on private schools. The results of this analysis provide reliable empirical evidence that parental choice has aided desegregation efforts in Louisiana.
{"title":"The Impact of the Louisiana Scholarship Program on Racial Segregation in Louisiana Schools","authors":"Anna J. Egalite, Jonathan N. Mills, Patrick J. Wolf","doi":"10.2139/ssrn.2738785","DOIUrl":"https://doi.org/10.2139/ssrn.2738785","url":null,"abstract":"The question of how school choice programs affect the racial stratification of schools is highly salient in the field of education policy. We use a student-level panel data set to analyze the impacts of the Louisiana Scholarship Program (LSP) on racial segregation in public and private schools. This targeted school voucher program provides funding for low-income, mostly minority students in the lowest-graded public schools to enroll in participating private schools. Our analysis indicates that the vast majority (82%) of LSP transfers have reduced racial segregation in the voucher students’ former public schools. LSP transfers have marginally increased segregation in the participating private schools, however, where just 45% of transfers reduce racial segregation. In those school districts under federal desegregation orders, voucher transfers result in a large reduction in traditional public schools’ racial segregation levels and have no discernible impact on private schools. The results of this analysis provide reliable empirical evidence that parental choice actually has aided desegregation efforts in Louisiana.","PeriodicalId":336198,"journal":{"name":"University of Arkansas Department of Education Reform Research Paper Series","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125492632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Policy debates in education are often framed by international test scores, such as the Programme for International Student Assessment (PISA). The obvious presumption is that observed differences in test scores within and across countries reflect differences in cognitive skills and general content knowledge, the things that achievement tests are designed to measure. We challenge this presumption by studying how much of the within-country and between-country variation in PISA test scores is associated with student effort rather than true academic content knowledge. Drawing heavily on recent literature, we posit that our measures of student effort are actually proxy measures of relevant non-cognitive skills related to conscientiousness. Completing surveys and tests takes effort, and students may reveal something about their conscientiousness through the amount of effort they show during these tasks. Our previous work, and that of others, validates this claim (e.g., Boe, May, and Boruch, 2002; Borghans and Schils, 2012; Hitt, Trivitt, and Cheng, 2016; Hitt, 2016; Zamarro et al., 2016). Using parametrizations of our measures of survey and test effort, we find that these measures help explain between 32 and 38 percent of the observed variation in test scores across countries, while explaining only a minor share of the observed variation within countries.
{"title":"When Students Don't Care: Reexamining International Differences in Achievement and Non-Cognitive Skills","authors":"Gema Zamarro, Collin Hitt, Ildefonso Méndez","doi":"10.2139/ssrn.2857243","DOIUrl":"https://doi.org/10.2139/ssrn.2857243","url":null,"abstract":"Policy debates in education are often framed by using international test scores, such as the Programme for International Student Assessment (PISA). The obvious presumption is that observed differences in test scores within and across countries reflect differences in cognitive skills and general content knowledge, the things which achievement tests are designed to measure. We challenge this presumption, by studying how much of the within-country and between-country variation in PISA test scores is associated with student effort, rather than true academic content knowledge. Drawing heavily on recent literature, we posit that our measures of student effort are actually proxy measures of relevant non-cognitive skills related to conscientiousness. Completing surveys and tests takes effort and students may actually reveal something about their conscientiousness by the amount of effort they show during these tasks. Our previous work, and that of others validates this claim (e.g. Boe, May and Boruch, 2002; Borghans and Schils, 2012; Hitt, Trivitt and Cheng, 2016; Hitt, 2016; Zamarro et al., 2016). Using parametrizations of measures of survey and test effort we find that these measures help explain between 32 and 38 percent of the observed variation in test scores across countries, while explaining only a minor share of the observed variation within countries.","PeriodicalId":336198,"journal":{"name":"University of Arkansas Department of Education Reform Research Paper Series","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130493352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper I provide a methodological critique of the conventional method for assessing the impact of investment shortfalls and other contributors to unfunded pension liabilities, and offer a methodologically sound replacement with substantive policy implications. The conventional method – simply summing the annual actuarial gain/loss figures over time – provides a neat, additive decomposition of the sources of the rise in the Unfunded Accrued Liability (UAL). In doing so, however, it implicitly assumes that in the counterfactual exercise, amortization would adjust dollar-for-dollar with the interest on additional UAL. That is, even if the total (and average) shortfall from covering interest is substantial, the marginal shortfall is assumed to be zero. This is not how contribution shortfalls arise under funding formulas typically used by public plans in the United States. Using the actual funding formula in the counterfactual – with contribution shortfalls on the margin – leads to much higher estimates of the UAL impact of investment shortfalls than the conventional method. The reason is that there are large interactions over time between investment shortfalls and marginal contribution shortfalls. The conventional counterfactual implicitly assumes away these interactions. The resulting additivity is alluring, but illusory. The conventional method also leads to untenable results for other UAL drivers. Most striking is the implication that the cumulative UAL impact of pension obligation bonds (POBs) is no different from the initial impact of receiving the proceeds, independent of the return (actual or assumed) on those proceeds. The underlying problem with the conventional framework is that it has emerged without careful attention to the counterfactual scenarios it is meant to address. This paper provides explicit and internally consistent counterfactuals to better understand the conventional method and its flaws, as well as the reasons for using the actual amortization formula in the counterfactual instead. Mathematical methods are used to illuminate the theoretical issues that lie behind any simulations. The analytical results are illustrated empirically with an adapted version of the actuarial history of the Connecticut State Teachers' Retirement System (CSTRS), FY00-FY14. The example is instructive because it is a highly underfunded system, notable for its high (and unreduced) assumed rate of return (8.5 percent), as well as its use of $2 billion in POB proceeds to reduce the UAL in FY08, just before the market crash.
{"title":"Assessing the Impact of Investment Shortfalls on Unfunded Pension Liabilities: The Allure of Neat, but Faulty Counterfactuals","authors":"Robert M. Costrell","doi":"10.2139/ssrn.2685383","DOIUrl":"https://doi.org/10.2139/ssrn.2685383","url":null,"abstract":"In this paper I provide a methodological critique of the conventional method for assessing the impact of investment shortfalls and other contributors to unfunded pension liabilities, and offer a methodologically sound replacement with substantive policy implications. The conventional method – simply summing the annual actuarial gain/loss figures over time – provides a neat, additive decomposition of the sources of the rise in the Unfunded Accrued Liability (UAL). In doing so, however, it implicitly assumes that in the counterfactual exercise, amortization would adjust dollar-for-dollar with the interest on additional UAL. That is, even if the total (and average) shortfall from covering interest is substantial, the marginal shortfall is assumed to be zero. This is not how contribution shortfalls arise under funding formulas typically used by public plans in the United States. Using the actual funding formula in the counterfactual – with contribution shortfalls on the margin -- leads to much higher estimates of the UAL impact of investment shortfalls than the conventional method. The reason is that there are large interactions over time between investment shortfalls and marginal contribution shortfalls. The conventional counterfactual implicitly assumes away these interactions. The resulting additivity is alluring, but illusory. The conventional method also leads to untenable results on other UAL-drivers. Most striking is the implication that the cumulative UAL impact of pension obligation bonds (POB’s) is no different from the initial impact of receiving the proceeds, independent of the return (actual or assumed) on those proceeds. The underlying problem with the conventional framework is that it has emerged without careful attention to the counterfactual scenarios it is meant to address. This paper provides explicit and internally consistent counterfactuals to better understand the conventional method and its flaws, as well as the reasons for using instead the actual amortization formula in the counterfactual. Mathematical methods are used to illuminate the theoretical issues that lie behind any simulations. The analytical results are illustrated empirically with an adapted version of the actuarial history of the Connecticut State Teachers’ Retirement System (CSTRS), FY00-FY14. The example is instructive because it is a highly underfunded system, notable for its high (and unreduced) assumed rate of return (8.5 percent), as well as its use of $2 billion in POB proceeds to reduce the UAL in FY08, just before the market crash.","PeriodicalId":336198,"journal":{"name":"University of Arkansas Department of Education Reform Research Paper Series","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131257904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Character traits and noncognitive skills are important for human capital development and long-run life outcomes. Research in economics and psychology now shows this convincingly. But research into the exact determinants of noncognitive skills has been slowed by a common data limitation: most large-scale datasets do not contain adequate measures of noncognitive skills. This is particularly problematic in education policy evaluation. We demonstrate that within any survey dataset, there is important latent information that can be used as a proxy measure of noncognitive skills. Specifically, we examine the amount of conscientious effort that students exhibit on surveys, as measured by their item response rates. We use six nationally-representative, longitudinal surveys of American youth. We find that the percentage of questions skipped during the baseline year when respondents were adolescents is a significant predictor of later-life educational attainment, net of cognitive ability. Insofar as item response rates affect employment and income, they do so through their effect on educational attainment. The pattern of findings gives compelling reasons to view item response rates as a promising behavioral measure of noncognitive skills for use in future research. We posit that response rates are a measure of conscientiousness, though additional research is required to determine what exact noncognitive skills are being captured by item response rates.
{"title":"When You Say Nothing at All: The Predictive Power of Student Effort on Surveys","authors":"Collin Hitt, Julie R. Trivitt, Albert Cheng","doi":"10.2139/ssrn.2684096","DOIUrl":"https://doi.org/10.2139/ssrn.2684096","url":null,"abstract":"Character traits and noncognitive skills are important for human capital development and long-run life outcomes. Research in economics and psychology now shows this convincingly. But research into the exact determinants of noncognitive skills has been slowed by a common data limitation: most large-scale datasets do not contain adequate measures of noncognitive skills. This is particularly problematic in education policy evaluation. We demonstrate that within any survey dataset, there is important latent information that can be used as a proxy measure of noncognitive skills. Specifically, we examine the amount of conscientious effort that students exhibit on surveys, as measured by their item response rates. We use six nationally-representative, longitudinal surveys of American youth. We find that the percentage of questions skipped during the baseline year when respondents were adolescents is a significant predictor of later-life educational attainment, net of cognitive ability. Insofar as item response rates affect employment and income, they do so through their effect on educational attainment. The pattern of findings gives compelling reasons to view item response rates as a promising behavioral measure of noncognitive skills for use in future research. We posit that response rates are a measure of conscientiousness, though additional research is required to determine what exact noncognitive skills are being captured by item response rates.","PeriodicalId":336198,"journal":{"name":"University of Arkansas Department of Education Reform Research Paper Series","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117327683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
According to a 2014 report from the US Department of Education's Office for Civil Rights, black students represent only 15% of students across the nation, yet 35% of students suspended once, 44% of students suspended more than once, and 36% of expelled students are black. These disparate aggregate disciplinary outcomes, while troubling, do not provide as much information as policymakers need. In this study, we exploit three years of student-level discipline data from Arkansas to assess the extent to which black students or other minority students were more likely to receive certain types of punishment, even for the same infraction. In previous studies utilizing the same dataset, we find that, consistent with recent reports on this topic, black students were punished more frequently; furthermore, black students received slightly longer punishments than their white peers in the same school. The current study uses a multinomial logit model to assess the extent to which student demographics predict consequence type, even after controlling for infraction-level information and district characteristics. Black students, males, and low-income students (those eligible for free or reduced-price lunch) were more likely to receive certain types of exclusionary consequences, such as out-of-school suspension, expulsion, and referral to an Alternative Learning Environment, relative to in-school suspension.
{"title":"Discipline Disproportionalities in Schools: The Relationship between Student Characteristics and School Disciplinary Outcomes","authors":"Kaitlin P. Anderson, Gary W. Ritter","doi":"10.2139/ssrn.2693141","DOIUrl":"https://doi.org/10.2139/ssrn.2693141","url":null,"abstract":"According to a 2014 report from the US Department of Education’s Office for Civil Rights, black students represent only 15% of students across the nation, but 35% of students suspended once are black, 44% of students suspended more than once are black, and 36% of expelled students are black. These disparate disciplinary aggregate outcomes, while troubling, do not provide as much information as policymakers need. In this study, we exploit three years of student-level discipline data from Arkansas to assess the extent to which black students or other minority students were more likely to receive certain types of punishments, even for the same infraction. In previous studies utilizing the same dataset, we find that, consistent with the recent reports on this topic, black students were punished more frequently; furthermore, we find that black students received slightly longer punishments than their white peers in the same school. The current study utilizes multinomial logit to assess the extent to which student demographics predict consequence type, even after controlling for infraction-level information and district characteristics. Black students, males, and low-income students (eligible for free- and reduced- lunch) were more likely to receive certain types of exclusionary consequences such as out-of-school suspension, expulsion, and referrals to Alternative Learning Environments relative to in-school-suspension.","PeriodicalId":336198,"journal":{"name":"University of Arkansas Department of Education Reform Research Paper Series","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123516580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Steele, Robert Slater, Gema Zamarro, Trey Miller, Jennifer Li, Susan Burkhauser, M. Bacon
Using data from seven cohorts of language immersion lottery applicants in a large, urban school district, we estimate the causal effects of immersion on students’ test scores in reading, mathematics, and science, and on English learners’ (EL) reclassification. We estimate positive intent-to-treat (ITT) effects on reading performance in fifth and eighth grades, ranging from 13 to 22 percent of a standard deviation, reflecting 7 to 9 months of learning. We find little benefit in terms of mathematics and science performance, but also no detriment. By sixth and seventh grade, lottery winners’ probabilities of remaining classified as EL are three to four percentage points lower than those of their counterparts. This effect is stronger for ELs whose native language matches the partner language.
{"title":"Effects of Dual-Language Immersion on Students’ Academic Performance","authors":"J. Steele, Robert Slater, Gema Zamarro, Trey Miller, Jennifer Li, Susan Burkhauser, M. Bacon","doi":"10.2139/ssrn.2693337","DOIUrl":"https://doi.org/10.2139/ssrn.2693337","url":null,"abstract":"Using data from seven cohorts of language immersion lottery applicants in a large, urban school district, we estimate the causal effects of immersion on students’ test scores in reading, mathematics, and science, and on English learners’ (EL) reclassification. We estimate positive intent-to-treat (ITT) effects on reading performance in fifth and eighth grades, ranging from 13 to 22 percent of a standard deviation, reflecting 7 to 9 months of learning. We find little benefit in terms of mathematics and science performance, but also no detriment. By sixth and seventh grade, lottery winners’ probabilities of remaining classified as EL are three to four percentage points lower than those of their counterparts. This effect is stronger for ELs whose native language matches the partner language.","PeriodicalId":336198,"journal":{"name":"University of Arkansas Department of Education Reform Research Paper Series","volume":"222 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123028159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The No Child Left Behind Act of 2001 (NCLB) brought high-stakes accountability testing into every American public school with the goal of 100 percent proficiency for all students. Making adequate yearly progress (AYP) toward this proficiency goal for the total student population as well as for at-risk subgroups was required in order for schools to avoid possible sanctions, such as school restructuring. In implementing NCLB, states had flexibility to determine the minimum size of these subgroups so as to provide statistical reliability and accountability for as many schools as possible. If a school did not meet the state's minimum subgroup size, the proficiency of the students in that group was not counted toward AYP. The subjectivity of identification, along with the lack of reliability in test score results, makes manipulating the subgroup of students with disabilities both possible and advantageous for schools. Using data from over 1,000 Arkansas schools for the years 2004-05 to 2013-14, school-level fixed effects analyses show that falling below the minimum subgroup cutoff of 40 is associated with a 1.5 percentage point decrease in students with disabilities at the school. For every student a school is above the cutoff, there is an increase of 0.09 percentage points in special education enrollment. Possible implications are discussed.
{"title":"Falling Below the Line: Minimum Subgroup Size and Special Education Enrollment","authors":"Sivan Tuchman","doi":"10.2139/ssrn.2667047","DOIUrl":"https://doi.org/10.2139/ssrn.2667047","url":null,"abstract":"The No Child Left Behind Act of 2001 (NCLB) brought high-stakes accountability testing into every American public school with the goal of 100 percent proficiency for all students. Making annual yearly progress (AYP) toward this proficiency goal for the total student population as well as at-risk subgroups was required in order for schools to avoid possible sanctions, such as school restructuring. In implementing NCLB, states had flexibility to determine the minimum size of these subgroups as to provide statistical reliability and accountability for as many schools as possible. If a school did not meet the state’s minimum subgroup size, the proficiency of the students in the group were not calculated as part of AYP. The subjectivity of identification along with the lack of reliability in test score results makes manipulating the subgroup of students with disabilities possible and advantageous to schools. Using data from over 1,000 Arkansas schools for the years 2004-05 to 2013-14, school-level fixed effects analyses show that falling below the minimum subgroup cutoff of 40 is associated with a 1.5 percentage point decrease in students with disabilities at the school. For every student a school is above the cutoff, there is an increase of 0.09 percentage points in special education enrollment. Possible implications are discussed.","PeriodicalId":336198,"journal":{"name":"University of Arkansas Department of Education Reform Research Paper Series","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115248728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ildefonso Méndez, Gema Zamarro, J. G. Clavel, Collin Hitt
The goal of this paper is to analyze the role that non-cognitive skills, and in particular regional differences in those skills, play in the observed differences in 15-year-old students' academic performance across Spanish regions on PISA 2009. Previous research has shown the relevance of differences in students' personal, family, and school characteristics in accounting for academic differences across Spanish regions, but it has also found that a sizeable part of the observed differences remained unexplained. We find that differences in the distribution of certain non-cognitive skills associated with academic performance, such as focus, perseverance, and resilience, play a prominent role in accounting for differences in student performance in PISA 2009. We observe these skills by developing new measures of student effort on standardized tests. In particular, our estimates suggest that a one standard deviation reduction in the dispersion of non-cognitive skills across Spanish regions would lead to a 25% reduction in the magnitude of the observed differences in student performance across regions. This is a relevant effect: for example, a one standard deviation reduction in the regional dispersion of parents' educational levels or occupational status would lead to at most a 2% reduction in the magnitude of observed differences in performance on PISA across Spanish regions. Put plainly, a substantial portion of the regional variation in test scores appears attributable to effort on the PISA test, and not necessarily just to differences in actual knowledge.
{"title":"Non-Cognitive Abilities and Spanish Regional Differences in Student Performance in PISA 2009","authors":"Ildefonso Méndez, Gema Zamarro, J. G. Clavel, Collin Hitt","doi":"10.2139/ssrn.2652322","DOIUrl":"https://doi.org/10.2139/ssrn.2652322","url":null,"abstract":"The goal of this paper is to analyze the role that non-cognitive skills and, in particular, regional differences in those skills, play on the observed differences in 15-year-old student’s academic performance, across Spanish regions, on PISA 2009. Previous research has shown the relevance of differences in student’s personal, family and school characteristics in accounting for academic differences across Spanish regions but it has also found that a sizeable part of the observed differences remained unexplained. We have found that differences in the distribution of certain non-cognitive skills associated to academic performance like focus, perseverance and resilience play a prominent role in accounting for differences in student performance in PISA 2009. We observe these skills by developing new measures of student effort on standardized tests. In particular, our estimates suggest that a standard deviation reduction in the dispersion of non-cognitive skills across Spanish regions would lead to a 25% reduction in the magnitude of the observed differences in student performance across regions. This is a relevant effect as, for example, a one standard deviation reduction in the regional dispersion of parent’s educational levels or occupational status would only lead to at most a 2% reduction in the magnitude of observed differences in performance on PISA across Spanish regions. Put plainly, a substantial portion of the regional variation in test scores appears attributable to effort on the PISA test, and not necessarily just differences in actual knowledge.","PeriodicalId":336198,"journal":{"name":"University of Arkansas Department of Education Reform Research Paper Series","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114605962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The main objective of this study is to empirically test a number of theory-based models (i.e., fixed effects (FE), random effects (RE), and aggregated residuals (AR)) for measuring the generic knowledge, degree attainment rates, and early labor market outcomes gained by students in different programs and institutions in higher education. There are four main findings. First, the results confirm the need to use models that address student selection into programs and institutions in order to avoid biased estimates. Second, our findings provide suggestive evidence in favor of using FE models. Third, the results illustrate the need for appropriate statistical corrections (e.g., Heckman-type selection models) to address the issue of students dropping out of college. Fourth, our findings confirm our hypothesis that rankings of specific college-program combinations change depending on the educational and labor market outcome measures considered. This finding emphasizes the need to use complementary indicators related to the mission of the specific post-secondary institutions being ranked. The results of this paper illustrate the importance of validating empirical models intended to rank college-program contributions according to a number of educational and early labor market outcomes. Finally, given the sensitivity of the models to different specifications, it is not clear that they should be used to make high-stakes decisions in higher education. They could, however, serve as part of a broader set of indicators to support programs and colleges as part of a formative evaluation.
{"title":"How Can We Accurately Measure Whether Students are Gaining Relevant Outcomes in Higher Education?","authors":"Tatiana Melguizo, Gema Zamarro, Tatiana Velasco, Fábio Sanchez","doi":"10.2139/ssrn.2652376","DOIUrl":"https://doi.org/10.2139/ssrn.2652376","url":null,"abstract":"The main objective of this study is to empirically test a number of theory-based models (i.e. fixed effects (FE), random effects (RE), and aggregated residuals (AR)) to measure both, the generic knowledge as well as the degree attainment rates and early labor outcomes, gained by students in different programs and institutions in higher education. There are four main findings: First, the results of the paper confirm the need of using models that address the issue of student selection into programs and institutions in order to avoid biased estimates. Second, our findings provide suggestive evidence in favor of using FE models. Third, the results also illustrate the need to use appropriate statistical corrections (e.g., Heckman type selection models) to also address the issue related to students dropping out of college. Finally, our findings confirm our hypotheses that rankings of specific college-program combinations change depending on different educational and labor outcome measures considered. This finding emphasizes the need to use complementary indicators related to the mission of the specific post-secondary institutions that are being ranked. The results of this paper illustrate the importance of validating empirical models intended to rank college-program contributions according to a number of educational and early labor market outcomes. Finally, given the sensitivity of the models to different model specifications, it is not clear that they should be used to make any high-stakes decisions in higher education. They could, however, serve as part of a broader set of indicators to support programs and colleges as part of a formative evaluation.","PeriodicalId":336198,"journal":{"name":"University of Arkansas Department of Education Reform Research Paper Series","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116358185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}