Pub Date: 2022-07-29 | DOI: 10.1080/2330443X.2022.2105769
Edward Kim
Abstract This study introduces the signal weighted teacher value-added model (SW VAM), a value-added model that weights student-level observations based on each student’s capacity to signal their assigned teacher’s quality. Specifically, the model leverages the repeated appearance of a given student to estimate student reliability and sensitivity parameters, whereas traditional VAMs represent a special case where all students exhibit identical parameters. Simulation study results indicate that SW VAMs outperform traditional VAMs at recovering true teacher quality when the assumption of student parameter invariance is met but have mixed performance under alternative assumptions of the true data generating process depending on data availability and the choice of priors. Evidence using an empirical dataset suggests that SW VAM and traditional VAM results may disagree meaningfully in practice. These findings suggest that SW VAMs have promising potential to recover true teacher value-added in practical applications and, as a version of value-added models that attends to student differences, can be used to test the validity of traditional VAM assumptions in empirical contexts.
"Signal Weighted Teacher Value-Added Models." Statistics and Public Policy 9(1): 149–162.
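The core idea of signal weighting can be illustrated with a minimal sketch (not the paper's full Bayesian model): estimate a teacher's value-added as a weighted mean of student residual score gains, with weights driven by each student's reliability. The function name, the gains, and the reliabilities below are all hypothetical; a traditional VAM corresponds to the equal-weight special case.

```python
# Hypothetical sketch of the signal-weighting idea: weight each student's
# residualized score gain by an (assumed known) reliability parameter.
# Equal weights recover the traditional VAM special case.

def weighted_value_added(gains, reliabilities):
    """Reliability-weighted mean of student residual gains."""
    total_w = sum(reliabilities)
    return sum(w * g for g, w in zip(gains, reliabilities)) / total_w

gains = [0.2, 0.5, -0.1, 0.4]  # hypothetical residualized score gains

print(weighted_value_added(gains, [1.0, 1.0, 1.0, 1.0]))  # equal weights: traditional VAM
print(weighted_value_added(gains, [2.0, 1.0, 1.0, 1.0]))  # upweight a more reliable student
```

The second call shifts the estimate toward the upweighted student's gain, which is exactly the mechanism by which SW VAM and traditional VAM results can disagree.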
Pub Date: 2022-04-19 | DOI: 10.1080/2330443X.2022.2120137
A. Dorfman, R. Valliant
Abstract Forensic firearms identification, the determination by a trained firearms examiner as to whether or not bullets or cartridges came from a common weapon, has long been a mainstay in the criminal courts. The reliability of forensic firearms identification has been challenged in the general scientific community, and, in response, several studies have been carried out aimed at showing that firearms examination is accurate, that is, has low error rates. Less studied has been the question of consistency: whether two examinations of the same bullets or cartridge cases, carried out by one examiner on separate occasions (intrarater reliability, or repeatability) or by two examiners (interrater reliability, or reproducibility), reach the same conclusion. One important study, described in a 2020 Report by the Ames Laboratory-USDOE to the Federal Bureau of Investigation, went beyond considerations of accuracy to investigate the repeatability and reproducibility of firearms examination. The Report's conclusions were paradoxical: the observed agreement of examiners with themselves or with other examiners appears mediocre, yet the study concluded that repeatability and reproducibility are satisfactory, on the grounds that the observed agreement exceeds a quantity called the expected agreement. We find that employing expected agreement as it was intended suggests not satisfactory repeatability and reproducibility, but the opposite.
"A Re-Analysis of Repeatability and Reproducibility in the Ames-USDOE-FBI Study." Statistics and Public Policy 9(1): 175–184.
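The observed-versus-expected-agreement comparison at issue parallels chance-corrected agreement statistics such as Cohen's kappa, where expected agreement is what raters would achieve by guessing according to their marginal rates. The numbers below are invented for illustration and are not figures from the Report:

```python
# Illustrative (hypothetical numbers): chance-corrected agreement in the
# style of Cohen's kappa. A value of 1.0 is perfect agreement; 0.0 is
# agreement no better than chance.

def cohen_kappa(p_observed, p_expected):
    """(observed - expected) / (1 - expected)."""
    return (p_observed - p_expected) / (1.0 - p_expected)

# Observed agreement can exceed expected agreement and still be mediocre:
print(cohen_kappa(0.60, 0.40))  # -> ~0.33, well below conventional "good" thresholds
```

This is the crux of the re-analysis: merely exceeding expected agreement is a very low bar, since the chance-corrected statistic can remain far from 1.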
Pub Date: 2022-03-09 | DOI: 10.1080/2330443X.2022.2050326
N. Hwang, S. Chatterjee, Y. Di, Sharmodeep Bhattacharyya
Abstract We assess the treatment effect of the juvenile stay-at-home order (JSAHO) on reducing the rate of SARS-CoV-2 infection spread in Saline County ("Saline"), Arkansas, by examining the difference between Saline's and control Arkansas counties' changes in daily and mean log infection rates between the pretreatment (March 28–April 5, 2020) and treatment (April 6–May 6, 2020) periods. A synthetic control county is constructed based on the parallel-trends assumption, least-squares fitting on pretreatment and socio-demographic covariates, and elastic-net-based methods, from which the counterfactual outcome is predicted and the treatment effect is estimated using the difference-in-differences, synthetic control, and changes-in-changes methodologies. Both the daily and average treatment effects of JSAHO are shown to be significant. Despite its narrow scope and lack of enforcement for compliance, JSAHO reduced the rate of infection spread in Saline. Supplementary materials for this article are available online.
"Observational Study of the Effect of the Juvenile Stay-At-Home Order on SARS-CoV-2 Infection Spread in Saline County, Arkansas." Statistics and Public Policy 9(1): 74–84.
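Of the three estimators the abstract names, plain difference-in-differences is the simplest to sketch: the treated county's change over time minus the control's change, with the control change serving as the counterfactual trend. The rates below are invented, not values from the study:

```python
# Minimal difference-in-differences sketch (hypothetical mean log infection
# rates; the paper also uses synthetic control and changes-in-changes).

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Treated change minus control change."""
    return (treated_post - treated_pre) - (control_post - control_pre)

effect = did_estimate(treated_pre=2.0, treated_post=2.3,
                      control_pre=2.1, control_post=2.7)
print(effect)  # negative: the treated county's rate grew less than the control's
```

Under the parallel-trends assumption, this difference is interpreted as the causal effect of the order.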
Pub Date: 2022-02-24 | DOI: 10.1080/2330443X.2022.2038744
L. Ice, J. Scouras, E. Toton
Abstract Senior leaders in the U.S. Department of Defense, as well as nuclear strategists and academics, have argued that the advent of nuclear weapons is associated with a dramatic decrease in wartime fatalities. This assessment is often supported by an evolving series of figures that show a marked drop in wartime fatalities as a percentage of world population after 1945 to levels well below those of the prior centuries. The goal of this article is not to ascertain whether nuclear weapons are associated with or have led to a decrease in wartime fatalities, but rather to critique the supporting statistical evidence. We assess these wartime fatality figures and find that they are both irreproducible and misleading. We perform a more rigorous and traceable analysis and discover that post-1945 wartime fatalities as a percentage of world population are consistent with those of many other historical periods. Supplementary materials for this article are available online.
"Wartime Fatalities in the Nuclear Era." Statistics and Public Policy 9(1): 49–57.
Pub Date: 2022-01-24 | DOI: 10.1080/2330443X.2022.2033654
C. Oehlert, Evan T. Schulz, Anne Parker
Abstract When compiling industry statistics or selecting businesses for further study, researchers often rely on North American Industry Classification System (NAICS) codes. However, codes are self-reported on tax forms, and reporting an incorrect code, or leaving the code blank, carries no tax consequences, so reported codes are often unusable. The IRS's Statistics of Income (SOI) program validates NAICS codes for businesses in the statistical samples used to produce official tax statistics for various filing populations, including sole proprietorships (those filing Form 1040 Schedule C) and corporations (those filing Form 1120). In this article we leverage these samples to explore ways to improve NAICS code reporting for all filers in the relevant populations. For sole proprietorships, we overcame several record-linkage complications to combine data from SOI samples with other administrative data. Using the SOI-validated NAICS code values as ground truth, we trained classification-tree-based models (randomForest) to predict NAICS industry sector from other tax return data, including text descriptions, for businesses that did or did not initially report a valid NAICS code.
For both sole proprietorships and corporations, we were able to improve slightly on the accuracy of valid self-reported industry sector and correctly identify sector for over half of businesses with no informative reported NAICS code.
"NAICS Code Prediction Using Supervised Methods." Statistics and Public Policy 9(1): 58–66.
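The paper's approach (random forests on tax-return features including text) can be sketched as a bag-of-words random forest. This is not the authors' pipeline: they used R's randomForest on administrative data, whereas the sketch below uses scikit-learn on invented toy descriptions and sector labels.

```python
# Sketch only: predict a NAICS industry sector from a free-text business
# description with a bag-of-words random forest. Data are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier

descriptions = ["wheat and corn farm", "corn and soybean farm",
                "machine parts factory", "steel parts factory"]
sectors = ["11", "11", "31", "31"]  # NAICS sectors: 11 agriculture, 31 manufacturing

vec = CountVectorizer()
X = vec.fit_transform(descriptions)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, sectors)

# Classify a new, unlabeled description (likely sector "11"):
print(clf.predict(vec.transform(["soybean farm"])))
```

In practice the interesting cases are exactly those the abstract highlights: filers whose self-reported code is missing or invalid, for whom the text description is the main signal.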
Pub Date: 2021-12-17 | DOI: 10.1080/2330443X.2021.2019152
A. Gelman, Shira Mitchell, J. Sachs, S. Sachs
Abstract The Millennium Villages Project was an integrated rural development program carried out for a decade in 10 clusters of villages in sub-Saharan Africa starting in 2005, and in a few other sites for shorter durations. An evaluation of the 10 main sites compared to retrospectively chosen control sites estimated positive effects on a range of economic, social, and health outcomes (Mitchell et al. 2018). More recently, an outside group performed a prospective controlled (but also nonrandomized) evaluation of one of the shorter-duration sites and reported smaller or null results (Masset et al. 2020). Although these two conclusions seem contradictory, the differences can be explained by the fact that Mitchell et al. studied 10 sites where the project was implemented for 10 years, and Masset et al. studied one site with a program lasting less than 5 years, as well as differences in inference and framing. Insights from both evaluations should be valuable in considering future development efforts of this sort. Both studies are consistent with a larger picture of positive average impacts (compared to untreated villages) across a broad range of outcomes, but with effects varying across sites or requiring an adequate duration for impacts to be manifested.
Both studies are consistent with a larger picture of positive average impacts (compared to untreated villages) across a broad range of outcomes, but with effects varying across sites or requiring an adequate duration for impacts to be manifested.
"Reconciling Evaluations of the Millennium Villages Project." Statistics and Public Policy 9(1): 1–7.
Pub Date: 2021-12-13 | DOI: 10.1080/2330443X.2021.2016084
Joshua Landon, Joseph Gastwirth
Abstract Recently, Gastwirth proposed two transformations of the Lorenz curve, which calculate the proportion of a population, cumulated from the poorest or from the middle, respectively, needed to have the same amount of income as the top qth fraction. Economists and policy makers are often interested in the comparative status of two groups, for example, females versus males or a minority versus the majority. This article adapts and extends the concept underlying these curves to provide analogous curves comparing the relative status of two groups. One now calculates the proportion of the minority group, cumulated from the bottom or the middle, needed to have the same total income as the top qth fraction of the majority group (after adjusting for sample size). The areas between these curves and the line of equality are analogous to the Gini index. The methodology is used to illustrate the change in the degree of inequality between males and females, as well as between black and white males, in the United States between 2000 and 2017, and can be used to examine disparities between the expenditures on health of minorities and white people.
"Graphical Measures Summarizing the Inequality of Income of Two Groups." Statistics and Public Policy 9(1): 20–25.
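The "area between the curve and the line of equality" construction the abstract invokes is easiest to see for the standard Gini index, of which the paper's two-group measures are analogues. A generic sketch (not the authors' two-group statistic): the Gini index is twice the area between the equality line and the Lorenz curve, approximated here by trapezoids over sorted incomes.

```python
# Standard Gini index from a discrete Lorenz curve: twice the area between
# the 45-degree equality line and the Lorenz curve, via trapezoids.

def gini(incomes):
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    area, cum, prev = 0.0, 0.0, 0.0
    for x in xs:
        cum += x
        cur = cum / total
        area += (prev + cur) / (2 * n)  # trapezoid of width 1/n under the curve
        prev = cur
    return 1.0 - 2.0 * area

print(gini([1, 1, 1, 1]))  # -> 0.0 (perfect equality)
print(gini([0, 0, 0, 1]))  # -> 0.75 (one person holds all income)
```

The paper replaces the single-population Lorenz curve with curves comparing a minority group's cumulated income to the majority's top qth fraction, but the area-based summary works the same way.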
Pub Date: 2021-12-10 | DOI: 10.1080/2330443X.2021.2016083
Benjamin J. Lobo, D. Bonds, K. Kafadar
Abstract Currently, the most reliable estimate of the prevalence of obesity in Virginia’s Thomas Jefferson Health District (TJHD) comes from an annual telephone survey conducted by the Centers for Disease Control and Prevention. This district-wide estimate has limited use to decision makers who must target health interventions at a more granular level. A survey is one way of obtaining more granular estimates. This article describes the process of stratifying targeted geographic units (here, ZIP Code Tabulation Areas, or ZCTAs) prior to conducting the survey for those situations where cost considerations make it infeasible to sample each geographic unit (here, ZCTA) in the region (here, TJHD). Feature selection, allocation factor analysis, and hierarchical clustering were used to stratify ZCTAs. We describe the survey sampling strategy that we developed, by creating strata of ZCTAs; the data analysis using the R survey package; and the results. The resulting maps of obesity prevalence show stark differences in prevalence depending on the area of the health district, highlighting the importance of assessing health outcomes at a granular level. Our approach is a detailed and reproducible set of steps that can be used by others who face similar scenarios. Supplementary files for this article are available online.
Supplementary files for this article are available online.
"Estimating Local Prevalence of Obesity Via Survey Under Cost Constraints: Stratifying ZCTAs in Virginia's Thomas Jefferson Health District." Statistics and Public Policy 9(1): 8–19.
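Once ZCTAs are grouped into strata, a fixed total sample must be split among them. The abstract does not specify the allocation rule, so as a hedged illustration here is the classical Neyman allocation, which assigns sample proportionally to stratum size times within-stratum standard deviation; all numbers are hypothetical.

```python
# Neyman allocation (illustrative, not necessarily the rule used in the
# paper): n_h proportional to N_h * S_h for stratum size N_h and
# within-stratum standard deviation S_h.

def neyman_allocation(n_total, sizes, sds):
    """Allocate n_total across strata proportionally to N_h * S_h."""
    products = [N * S for N, S in zip(sizes, sds)]
    total = sum(products)
    return [round(n_total * p / total) for p in products]

# Three hypothetical strata of ZCTAs: a large homogeneous stratum gets
# less sample per person than a small but highly variable one.
print(neyman_allocation(100, sizes=[5000, 3000, 2000], sds=[0.2, 0.1, 0.5]))
```

Rounding can make the allocations sum to slightly less or more than the target, which a production design would reconcile explicitly.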
Pub Date: 2021-10-13 | DOI: 10.1080/2330443x.2023.2190008
M. Rubinstein, A. Haviland, J. Breslau
Using the COVID-19 Trends and Impacts Survey (CTIS), we examine the effect of COVID-19 vaccinations on (self-reported) feelings of depression and anxiety ("depression"), isolation, and worries about health, among vaccine-accepting survey respondents during February 2021. Assuming no unmeasured confounding, we estimate that vaccinations caused a -4.3 (-4.7, -3.8), -3.4 (-3.9, -2.9), and -4.8 (-5.4, -4.1) percentage-point change in these outcomes, respectively. We further argue that these effects provide a lower bound on the mental health burden of the pandemic, implying that the COVID-19 pandemic was responsible for at least a 28.6 (25.3, 31.9) percent increase in feelings of depression and a 20.5 (17.3, 23.6) percent increase in feelings of isolation during February 2021 among vaccine-accepting CTIS survey respondents. We also posit a model where vaccinations affect depression through worries about health and feelings of isolation, and estimate the proportion mediated by each pathway. We find that feelings of social isolation are the stronger mediator, accounting for 41.0 (37.3, 44.7) percent of the total effect, while worries about health account for 9.4 (7.6, 11.1) percent of the total effect. We caution that the causal interpretation of these findings rests on strong assumptions. Nevertheless, as the pandemic continues, policymakers should also target interventions aimed at managing the substantial mental health burden associated with the COVID-19 pandemic.
"The effect of COVID-19 vaccinations on self-reported depression and anxiety during February 2021." Statistics and Public Policy.
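The "proportion mediated" quantity the abstract reports is, in its simplest form, the indirect effect through one pathway divided by the total effect. A hedged arithmetic sketch, with an invented indirect effect chosen only to echo the abstract's ~41% figure:

```python
# Proportion mediated: share of the total effect transmitted through one
# mediator pathway. The indirect effect below is hypothetical.

def proportion_mediated(indirect_effect, total_effect):
    """indirect / total; signs cancel when both point the same way."""
    return indirect_effect / total_effect

total = -4.3          # total effect, percentage points (from the abstract)
via_isolation = -1.76  # hypothetical indirect effect through isolation

print(round(100 * proportion_mediated(via_isolation, total), 1))  # -> 40.9
```

The actual mediation estimates in the paper come from a fitted causal model, not this ratio alone, and inherit its identification assumptions.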
Pub Date: 2021-07-12 | DOI: 10.1080/2330443X.2022.2105770
Ann King, Jacob Murri, Jake Callahan, Adrienne Russell, Tyler J. Jarvis
Abstract We discuss the difficulties of evaluating partisan gerrymandering in Utah's congressional districts and the failure of many common metrics there. We explain why the Republican vote share in the least-Republican district (LRVS) is a good indicator of the advantage or disadvantage each party has in the Utah congressional districts. Although the LRVS only makes sense in settings with at most one competitive district, in that setting it directly captures the extent to which a given redistricting plan advantages or disadvantages the Republican and Democratic parties. We use the LRVS to evaluate the most common measures of partisan gerrymandering in the context of Utah's 2011 congressional districts, generating large ensembles of alternative redistricting plans using Markov chain Monte Carlo methods. We also discuss the implications of this new metric and our results for the question of whether the 2011 Utah congressional plan was gerrymandered.
"Mathematical Analysis of Redistricting in Utah." Statistics and Public Policy 9(1): 136–148.
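The LRVS itself is simple to compute: the minimum Republican vote share across a plan's districts. The ensemble method then asks where the enacted plan's LRVS falls among many alternative plans. A sketch with invented vote shares (the real analysis generates its ensemble by MCMC over valid district maps):

```python
# LRVS and its comparison to an ensemble of alternative plans.
# A "plan" here is just a list of district-level Republican vote shares;
# all numbers below are hypothetical.

def lrvs(plan):
    """Republican vote share in the least-Republican district."""
    return min(plan)

def ensemble_rank(enacted, ensemble):
    """Fraction of ensemble plans with a lower LRVS than the enacted plan."""
    e = lrvs(enacted)
    return sum(lrvs(p) < e for p in ensemble) / len(ensemble)

enacted = [0.65, 0.61, 0.58, 0.47]
ensemble = [[0.70, 0.62, 0.55, 0.40],
            [0.66, 0.60, 0.57, 0.44],
            [0.68, 0.63, 0.54, 0.42]]

print(lrvs(enacted), ensemble_rank(enacted, ensemble))
```

An enacted plan whose LRVS sits in an extreme tail of the ensemble distribution is the kind of evidence such analyses weigh when assessing partisan advantage.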