Analyzing the Impacts of Public Policy on COVID-19 Transmission: A Case Study of the Role of Model and Dataset Selection Using Data from Indiana
Pub Date: 2020-12-14 | DOI: 10.1080/2330443X.2020.1859030 | Statistics and Public Policy 8(1): 1-8
G. Mohler, M. Short, F. Schoenberg, Daniel Sledge
ABSTRACT Dynamic estimation of the reproduction number of COVID-19 is important for assessing the impact of public health measures on virus transmission. State and local decisions about whether to relax or strengthen mitigation measures are being made in part based on whether the reproduction number, Rt, falls below the self-sustaining value of 1. Employing branching point process models and COVID-19 data from Indiana as a case study, we show that estimates of the current value of Rt, and whether it is above or below 1, depend critically on choices about data selection, model specification, and estimation. In particular, we find a range of Rt values from 0.47 to 1.20 as we vary the type of estimator and input dataset. We present methods for model comparison and evaluation and then discuss the policy implications of our findings.
{"title":"Analyzing the Impacts of Public Policy on COVID-19 Transmission: A Case Study of the Role of Model and Dataset Selection Using Data from Indiana","authors":"G. Mohler, M. Short, F. Schoenberg, Daniel Sledge","doi":"10.1080/2330443X.2020.1859030","DOIUrl":"https://doi.org/10.1080/2330443X.2020.1859030","url":null,"abstract":"ABSTRACT Dynamic estimation of the reproduction number of COVID-19 is important for assessing the impact of public health measures on virus transmission. State and local decisions about whether to relax or strengthen mitigation measures are being made in part based on whether the reproduction number, Rt , falls below the self-sustaining value of 1. Employing branching point process models and COVID-19 data from Indiana as a case study, we show that estimates of the current value of Rt , and whether it is above or below 1, depend critically on choices about data selection and model specification and estimation. In particular, we find a range of Rt values from 0.47 to 1.20 as we vary the type of estimator and input dataset. We present methods for model comparison and evaluation and then discuss the policy implications of our findings.","PeriodicalId":43397,"journal":{"name":"Statistics and Public Policy","volume":"8 1","pages":"1 - 8"},"PeriodicalIF":1.6,"publicationDate":"2020-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/2330443X.2020.1859030","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48013695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Policy Implications of Statistical Estimates: A General Bayesian Decision-Theoretic Model for Binary Outcomes
Pub Date: 2020-08-25 | DOI: 10.1080/2330443X.2022.2050328 | Statistics and Public Policy 9(1): 85-96
A. Suzuki
Abstract How should we evaluate the effect of a policy on the likelihood of an undesirable event, such as conflict? The significance test has three limitations. First, relying on statistical significance misses the fact that uncertainty is a continuous scale. Second, focusing on a standard point estimate overlooks the variation in plausible effect sizes. Third, the criterion of substantive significance is rarely explained or justified. A new Bayesian decision-theoretic model, the “causal binary loss function model,” overcomes these issues. It compares the expected loss under a policy intervention with that under no intervention. These losses are computed from a particular range of policy effect sizes, the probability mass of that range, the cost of the policy, and the cost of the undesirable event the policy intends to address. The model is more broadly applicable than common statistical decision-theoretic models, which use standard loss functions or capture costs in terms of false positives and false negatives. I exemplify the model’s use through three applications and provide an R package. Supplementary materials for this article are available online.
{"title":"Policy Implications of Statistical Estimates: A General Bayesian Decision-Theoretic Model for Binary Outcomes","authors":"A. Suzuki","doi":"10.1080/2330443X.2022.2050328","DOIUrl":"https://doi.org/10.1080/2330443X.2022.2050328","url":null,"abstract":"Abstract How should we evaluate the effect of a policy on the likelihood of an undesirable event, such as conflict? The significance test has three limitations. First, relying on statistical significance misses the fact that uncertainty is a continuous scale. Second, focusing on a standard point estimate overlooks the variation in plausible effect sizes. Third, the criterion of substantive significance is rarely explained or justified. A new Bayesian decision-theoretic model, “causal binary loss function model,” overcomes these issues. It compares the expected loss under a policy intervention with the one under no intervention. These losses are computed based on a particular range of the effect sizes of a policy, the probability mass of this effect size range, the cost of the policy, and the cost of the undesirable event the policy intends to address. The model is more applicable than common statistical decision-theoretic models using the standard loss functions or capturing costs in terms of false positives and false negatives. I exemplify the model’s use through three applications and provide an R package. Supplementary materials for this article are available online.","PeriodicalId":43397,"journal":{"name":"Statistics and Public Policy","volume":"9 1","pages":"85 - 96"},"PeriodicalIF":1.6,"publicationDate":"2020-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48273528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Causal Framework for Observational Studies of Discrimination
Pub Date: 2020-06-22 | DOI: 10.1080/2330443X.2021.2024778 | Statistics and Public Policy 9(1): 26-48
Johann D. Gaebler, William Cai, Guillaume W. Basse, Ravi Shroff, Sharad Goel, J. Hill
Abstract In studies of discrimination, researchers often seek to estimate a causal effect of race or gender on outcomes. For example, in the criminal justice context, one might ask whether arrested individuals would have been subsequently charged or convicted had they been a different race. It has long been known that such counterfactual questions face measurement challenges related to omitted-variable bias, and conceptual challenges related to the definition of causal estimands for largely immutable characteristics. Another concern, which has been the subject of recent debates, is post-treatment bias: many studies of discrimination condition on apparently intermediate outcomes, like being arrested, that themselves may be the product of discrimination, potentially corrupting statistical estimates. There is, however, reason to be optimistic. By carefully defining the estimand—and by considering the precise timing of events—we show that a primary causal quantity of interest in discrimination studies can be estimated under an ignorability condition that may hold approximately in some observational settings. We illustrate these ideas by analyzing both simulated data and the charging decisions of a prosecutor’s office in a large county in the United States.
{"title":"A Causal Framework for Observational Studies of Discrimination","authors":"Johann D. Gaebler, William Cai, Guillaume W. Basse, Ravi Shroff, Sharad Goel, J. Hill","doi":"10.1080/2330443X.2021.2024778","DOIUrl":"https://doi.org/10.1080/2330443X.2021.2024778","url":null,"abstract":"Abstract In studies of discrimination, researchers often seek to estimate a causal effect of race or gender on outcomes. For example, in the criminal justice context, one might ask whether arrested individuals would have been subsequently charged or convicted had they been a different race. It has long been known that such counterfactual questions face measurement challenges related to omitted-variable bias, and conceptual challenges related to the definition of causal estimands for largely immutable characteristics. Another concern, which has been the subject of recent debates, is post-treatment bias: many studies of discrimination condition on apparently intermediate outcomes, like being arrested, that themselves may be the product of discrimination, potentially corrupting statistical estimates. There is, however, reason to be optimistic. By carefully defining the estimand—and by considering the precise timing of events—we show that a primary causal quantity of interest in discrimination studies can be estimated under an ignorability condition that may hold approximately in some observational settings. We illustrate these ideas by analyzing both simulated data and the charging decisions of a prosecutor’s office in a large county in the United States.","PeriodicalId":43397,"journal":{"name":"Statistics and Public Policy","volume":"9 1","pages":"26 - 48"},"PeriodicalIF":1.6,"publicationDate":"2020-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41351478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Computational Approach to Measuring Vote Elasticity and Competitiveness
Pub Date: 2020-01-01 | DOI: 10.1080/2330443x.2020.1777915 | Statistics and Public Policy 7(1): 69-86
Daryl R. DeFord, M. Duchin, J. Solomon
ABSTRACT The recent wave of attention to partisan gerrymandering has come with a push to refine or replace the laws that govern political redistricting around the country. A common element in several states’ reform efforts has been the inclusion of competitiveness metrics, or scores that evaluate a districting plan based on the extent to which district-level outcomes are in play or are likely to be closely contested. In this article, we examine several classes of competitiveness metrics motivated by recent reform proposals and then evaluate their potential outcomes across large ensembles of districting plans at the Congressional and state Senate levels. This is part of a growing literature using MCMC techniques from applied statistics to situate plans and criteria in the context of valid redistricting alternatives. Our empirical analysis focuses on five states—Utah, Georgia, Wisconsin, Virginia, and Massachusetts—chosen to represent a range of partisan attributes. We highlight situation-specific difficulties in creating good competitiveness metrics and show that optimizing competitiveness can produce unintended consequences on other partisan metrics. These results demonstrate the importance of (1) avoiding writing detailed metric constraints into long-lasting constitutional reform and (2) carrying out careful mathematical modeling on real geo-electoral data in each redistricting cycle.
{"title":"A Computational Approach to Measuring Vote Elasticity and Competitiveness","authors":"Daryl R. DeFord, M. Duchin, J. Solomon","doi":"10.1080/2330443x.2020.1777915","DOIUrl":"https://doi.org/10.1080/2330443x.2020.1777915","url":null,"abstract":"ABSTRACT The recent wave of attention to partisan gerrymandering has come with a push to refine or replace the laws that govern political redistricting around the country. A common element in several states’ reform efforts has been the inclusion of competitiveness metrics, or scores that evaluate a districting plan based on the extent to which district-level outcomes are in play or are likely to be closely contested. In this article, we examine several classes of competitiveness metrics motivated by recent reform proposals and then evaluate their potential outcomes across large ensembles of districting plans at the Congressional and state Senate levels. This is part of a growing literature using MCMC techniques from applied statistics to situate plans and criteria in the context of valid redistricting alternatives. Our empirical analysis focuses on five states—Utah, Georgia, Wisconsin, Virginia, and Massachusetts—chosen to represent a range of partisan attributes. We highlight situation-specific difficulties in creating good competitiveness metrics and show that optimizing competitiveness can produce unintended consequences on other partisan metrics. These results demonstrate the importance of (1) avoiding writing detailed metric constraints into long-lasting constitutional reform and (2) carrying out careful mathematical modeling on real geo-electoral data in each redistricting cycle.","PeriodicalId":43397,"journal":{"name":"Statistics and Public Policy","volume":"7 1","pages":"69 - 86"},"PeriodicalIF":1.6,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/2330443x.2020.1777915","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44071177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Essential Role of Empirical Validation in Legislative Redistricting Simulation
Pub Date: 2020-01-01 | DOI: 10.1080/2330443x.2020.1791773 | Statistics and Public Policy 7(1): 52-68
Benjamin Fifield, K. Imai, J. Kawahara, Christopher T. Kenny
ABSTRACT As granular data about elections and voters become available, redistricting simulation methods are playing an increasingly important role when legislatures adopt redistricting plans and courts determine their legality. These simulation methods are designed to yield a representative sample of all redistricting plans that satisfy statutory guidelines and requirements such as contiguity, population parity, and compactness. A proposed redistricting plan can be considered gerrymandered if it constitutes an outlier relative to this sample according to partisan fairness metrics. Despite their growing use, an insufficient effort has been made to empirically validate the accuracy of the simulation methods. We apply a recently developed computational method that can efficiently enumerate all possible redistricting plans and yield an independent sample from this population. We show that this algorithm scales to a state with a couple of hundred geographical units. Finally, we empirically examine how existing simulation methods perform on realistic validation datasets.
{"title":"The Essential Role of Empirical Validation in Legislative Redistricting Simulation","authors":"Benjamin Fifield, K. Imai, J. Kawahara, Christopher T. Kenny","doi":"10.1080/2330443x.2020.1791773","DOIUrl":"https://doi.org/10.1080/2330443x.2020.1791773","url":null,"abstract":"ABSTRACT As granular data about elections and voters become available, redistricting simulation methods are playing an increasingly important role when legislatures adopt redistricting plans and courts determine their legality. These simulation methods are designed to yield a representative sample of all redistricting plans that satisfy statutory guidelines and requirements such as contiguity, population parity, and compactness. A proposed redistricting plan can be considered gerrymandered if it constitutes an outlier relative to this sample according to partisan fairness metrics. Despite their growing use, an insufficient effort has been made to empirically validate the accuracy of the simulation methods. We apply a recently developed computational method that can efficiently enumerate all possible redistricting plans and yield an independent sample from this population. We show that this algorithm scales to a state with a couple of hundred geographical units. Finally, we empirically examine how existing simulation methods perform on realistic validation datasets.","PeriodicalId":43397,"journal":{"name":"Statistics and Public Policy","volume":"7 1","pages":"52 - 68"},"PeriodicalIF":1.6,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/2330443x.2020.1791773","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44089864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Statistical Procedures for Assessing the Need for an Affirmative Action Plan: A Reanalysis of Shea v. Kerry
Pub Date: 2020-01-01 | DOI: 10.1080/2330443x.2019.1693313 | Statistics and Public Policy 7(1): 1-8
Qing Pan, W. Miao, J. Gastwirth
Abstract In the 1980s, reports from Congress and the Government Accountability Office (GAO) presented statistical evidence showing that employees in the Foreign Service were overwhelmingly White male, especially in the higher positions. To remedy this historical discrimination, the State Department instituted an affirmative action plan during 1990–1992 that allowed females and race-ethnic minorities to apply directly for mid-level positions. A White male employee claimed that he had been disadvantaged by the plan. The appellate court unanimously held that the manifest statistical imbalance supported the Department’s instituting the plan. One judge identified two statistical issues in the analysis of the data that neither party brought up. This article provides an empirical guideline for sample size and a one-sided Hotelling’s T² test to address these problems. First, an approximate rule is developed for the minimum number of expected minority appointments needed for a meaningful statistical analysis of under-representation. Second, to avoid the multiple comparison issue when several protected groups are involved, a modification of Hotelling’s T² test is developed for testing the null hypothesis of fair representation against a one-sided alternative of under-representation in at least one minority group. The test yields p-values of less than 1 in 10,000, indicating that minorities were substantially under-represented. Excluding secretarial and clerical jobs led to even larger disparities. Supplemental materials for this article are available online.
{"title":"Statistical Procedures for Assessing the Need for an Affirmative Action Plan: A Reanalysis of Shea v. Kerry","authors":"Qing Pan, W. Miao, J. Gastwirth","doi":"10.1080/2330443x.2019.1693313","DOIUrl":"https://doi.org/10.1080/2330443x.2019.1693313","url":null,"abstract":"Abstract In the 1980s, reports from Congress and the Government Accountability Office (GAO) presented statistical evidence showing that employees in the Foreign Service were overwhelmingly White male, especially in the higher positions. To remedy this historical discrimination, the State Department instituted an affirmative action plan during 1990–1992 that allowed females and race-ethnic minorities to apply directly for mid-level positions. A White male employee claimed that he had been disadvantaged by the plan. The appellate court unanimously held that the manifest statistical imbalance supported the Department’s instituting the plan. One judge identified two statistical issues in the analysis of the data that neither party brought up. This article provides an empirical guideline for sample size and a one-sided Hotelling’s T2 test to answer these problems. First, an approximate rule is developed for the minimum number of expected minority appointments needed for a meaningful statistical analysis of under-representation. To avoid the multiple comparison issue when several protected groups are involved, a modification of Hotelling’s T2 test is developed for testing the null hypothesis of fair representation against a one-sided alternative of under-representation in at least one minority group. The test yields p-values less than 1 in 10,000 indicating that minorities were substantially under-represented. Excluding secretarial and clerical jobs led to even larger disparities. Supplemental materials for this article are available online.","PeriodicalId":43397,"journal":{"name":"Statistics and Public Policy","volume":"7 1","pages":"1 - 8"},"PeriodicalIF":1.6,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/2330443x.2019.1693313","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46315284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Who Is My Neighbor? The Spatial Efficiency of Partisanship
Pub Date: 2020-01-01 | DOI: 10.1080/2330443X.2020.1806762 | Statistics and Public Policy 7(1): 87-100
Nicholas Eubank, Jonathan Rodden
Abstract Relative to its overall statewide support, the Republican Party has been over-represented in congressional delegations and state legislatures over the last decade in a number of US states. A challenge is to determine the extent to which this can be explained by intentional gerrymandering as opposed to an underlying inefficient distribution of Democrats in cities. We explain the “spatial inefficiency” of support for Democrats, and demonstrate that it varies substantially both across states and also across legislative chambers within states. We introduce a simple method for measuring this inefficiency by assessing the partisanship of the nearest neighbors of each voter in each US state. Our measure of spatial efficiency helps explain cross-state patterns in legislative representation, and allows us to verify that political geography contributes substantially to inequalities in political representation. At the same time, however, we also show that even after controlling for spatial efficiency, partisan control of the redistricting process has had a substantial impact on the parties’ seat shares. Supplementary materials for this article are available online.
{"title":"Who Is My Neighbor? The Spatial Efficiency of Partisanship","authors":"Nicholas Eubank, Jonathan Rodden","doi":"10.1080/2330443X.2020.1806762","DOIUrl":"https://doi.org/10.1080/2330443X.2020.1806762","url":null,"abstract":"Abstract Relative to its overall statewide support, the Republican Party has been over-represented in congressional delegations and state legislatures over the last decade in a number of US states. A challenge is to determine the extent to which this can be explained by intentional gerrymandering as opposed to an underlying inefficient distribution of Democrats in cities. We explain the “spatial inefficiency” of support for Democrats, and demonstrate that it varies substantially both across states and also across legislative chambers within states. We introduce a simple method for measuring this inefficiency by assessing the partisanship of the nearest neighbors of each voter in each US state. Our measure of spatial efficiency helps explain cross-state patterns in legislative representation, and allows us to verify that political geography contributes substantially to inequalities in political representation. At the same time, however, we also show that even after controlling for spatial efficiency, partisan control of the redistricting process has had a substantial impact on the parties’ seat shares. Supplementary materials for this article are available online.","PeriodicalId":43397,"journal":{"name":"Statistics and Public Policy","volume":"7 1","pages":"87 - 100"},"PeriodicalIF":1.6,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/2330443X.2020.1806762","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42193194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mathematics of Nested Districts: The Case of Alaska
Pub Date: 2020-01-01 | DOI: 10.1080/2330443x.2020.1774452 | Statistics and Public Policy 7(1): 39-51
S. Caldera, Daryl R. DeFord, M. Duchin, Samuel C. Gutekunst, Cara Nix
ABSTRACT In eight states, a “nesting rule” requires that each state Senate district be composed of exactly two adjacent state House districts. In this article, we investigate the potential impacts of these nesting rules with a focus on Alaska, where Republicans have a 2/3 majority in the Senate while a Democratic-led coalition controls the House. Treating the current House plan as fixed and considering all possible pairings, we find that the choice of pairings alone can create a swing of 4–5 seats out of 20 against recent voting patterns, which is similar to the range observed when using a Markov chain procedure to generate plans without the nesting constraint. The analysis enables other insights into Alaska districting, including the partisan latitude available to districters with and without strong rules about nesting and contiguity. Supplementary materials for this article are available online.
{"title":"Mathematics of Nested Districts: The Case of Alaska","authors":"S. Caldera, Daryl R. DeFord, M. Duchin, Samuel C. Gutekunst, Cara Nix","doi":"10.1080/2330443x.2020.1774452","DOIUrl":"https://doi.org/10.1080/2330443x.2020.1774452","url":null,"abstract":"ABSTRACT In eight states, a “nesting rule” requires that each state Senate district be exactly composed of two adjacent state House districts. In this article, we investigate the potential impacts of these nesting rules with a focus on Alaska, where Republicans have a 2/3 majority in the Senate while a Democratic-led coalition controls the House. Treating the current House plan as fixed and considering all possible pairings, we find that the choice of pairings alone can create a swing of 4–5 seats out of 20 against recent voting patterns, which is similar to the range observed when using a Markov chain procedure to generate plans without the nesting constraint. The analysis enables other insights into Alaska districting, including the partisan latitude available to districters with and without strong rules about nesting and contiguity. Supplementary materials for this article are available online.","PeriodicalId":43397,"journal":{"name":"Statistics and Public Policy","volume":"7 1","pages":"39 - 51"},"PeriodicalIF":1.6,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/2330443x.2020.1774452","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45247124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Racial Disparities in Recent Fatal Police Shootings
Pub Date: 2019-12-06 | DOI: 10.1080/2330443x.2019.1704330 | Statistics and Public Policy 7(1): 9-18
L. Mentch
Abstract Fatal police shootings in the United States continue to be a polarizing social and political issue. Clear disagreement between the racial proportions of victims and nationwide racial demographics, together with graphic video footage, has created fertile ground for controversy. However, simple population-level summary statistics fail to take into account fundamental local characteristics such as county-level racial demography, local arrest demography, and law enforcement density. Using data on fatal police shootings between January 2015 and July 2016, I implement a number of straightforward resampling procedures designed to carefully examine how unlikely the victim totals from each race are with respect to these local population characteristics if no racial bias were present in the decision to shoot by police. I present several approaches that treat the shooting locations both as fixed and as a random sample. In both cases, I find overwhelming evidence of a racial disparity in shooting victims with respect to local population demographics, but substantially less disparity after accounting for local arrest demographics. I conclude the analyses by examining the effect of police-worn body cameras and find no evidence that the presence of such cameras impacts the racial distribution of victims. Supplementary materials for this article are available online.
{"title":"On Racial Disparities in Recent Fatal Police Shootings","authors":"L. Mentch","doi":"10.1080/2330443x.2019.1704330","DOIUrl":"https://doi.org/10.1080/2330443x.2019.1704330","url":null,"abstract":"Abstract Fatal police shootings in the United States continue to be a polarizing social and political issue. Clear disagreement between racial proportions of victims and nationwide racial demographics together with graphic video footage has created fertile ground for controversy. However, simple population level summary statistics fail to take into account fundamental local characteristics such as county-level racial demography, local arrest demography, and law enforcement density. Using data on fatal police shootings between January 2015 and July 2016, I implement a number of straightforward resampling procedures designed to carefully examine how unlikely the victim totals from each race are with respect to these local population characteristics if no racial bias were present in the decision to shoot by police. I present several approaches considering the shooting locations both as fixed and also as a random sample. In both cases, I find overwhelming evidence of a racial disparity in shooting victims with respect to local population demographics but substantially less disparity after accounting for local arrest demographics. I conclude the analyses by examining the effect of police-worn body cameras and find no evidence that the presence of such cameras impacts the racial distribution of victims. Supplementary materials for this article are available online.","PeriodicalId":43397,"journal":{"name":"Statistics and Public Policy","volume":"7 1","pages":"9 - 18"},"PeriodicalIF":1.6,"publicationDate":"2019-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/2330443x.2019.1704330","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42104168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal Legislative County Clustering in North Carolina
Pub Date: 2019-08-30 | DOI: 10.1080/2330443x.2020.1748552 | Statistics and Public Policy 7(1): 19-29
Daniel Carter, Zach Hunter, Dan Teague, G. Herschlag, Jonathan C. Mattingly
Abstract North Carolina’s constitution requires that state legislative districts should not split counties. However, counties must be split to comply with the “one person, one vote” mandate of the U.S. Supreme Court. Given that counties must be split, the North Carolina legislature and the courts have provided guidelines that seek to reduce the number of counties split across districts while also complying with the “one person, one vote” criterion. Under these guidelines, the counties are separated into clusters; each cluster contains a specified number of districts and is drawn independently of the other clusters. The primary goal of this work is to develop, present, and publicly release an algorithm that optimally clusters counties according to the guidelines set by the court in 2015. We use this tool to investigate the optimality and uniqueness of the clusters enacted in the 2017 redistricting process. We verify that the enacted clusters are optimal, but find other optimal choices as well. We emphasize that the tool we provide lists all possible optimal county clusterings. We also explore the stability of clustering under changing statewide populations and project what the county clusters may look like in the next redistricting cycle beginning in 2020/2021. Supplementary materials for this article are available online.
{"title":"Optimal Legislative County Clustering in North Carolina","authors":"Daniel Carter, Zach Hunter, Dan Teague, G. Herschlag, Jonathan C. Mattingly","doi":"10.1080/2330443x.2020.1748552","DOIUrl":"https://doi.org/10.1080/2330443x.2020.1748552","url":null,"abstract":"Abstract North Carolina’s constitution requires that state legislative districts should not split counties. However, counties must be split to comply with the “one person, one vote” mandate of the U.S. Supreme Court. Given that counties must be split, the North Carolina legislature and the courts have provided guidelines that seek to reduce counties split across districts while also complying with the “one person, one vote” criterion. Under these guidelines, the counties are separated into clusters; each cluster contains a specified number of districts and that are drawn independent from other clusters. The primary goal of this work is to develop, present, and publicly release an algorithm to optimally cluster counties according to the guidelines set by the court in 2015. We use this tool to investigate the optimality and uniqueness of the enacted clusters under the 2017 redistricting process. We verify that the enacted clusters are optimal, but find other optimal choices. We emphasize that the tool we provide lists all possible optimal county clusterings. We also explore the stability of clustering under changing statewide populations and project what the county clusters may look like in the next redistricting cycle beginning in 2020/2021. Supplementary materials for this article are available online.","PeriodicalId":43397,"journal":{"name":"Statistics and Public Policy","volume":"7 1","pages":"19 - 29"},"PeriodicalIF":1.6,"publicationDate":"2019-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/2330443x.2020.1748552","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44573636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}