Trends in Self-Reporting of Marijuana Consumption in the United States
Maria Cuellar
Statistics and Public Policy, 5(1): 1–10. Pub Date: 2018-01-01. DOI: 10.1080/2330443X.2018.1513346
ABSTRACT To adjust for underreporting of marijuana use, researchers multiply the proportion of individuals who reported using marijuana by a constant factor, such as the US Office of National Drug Control Policy’s 1.3. Although the current adjustments are simple, they do not account for changes in reporting over time. This article presents a novel way to explore relative changes in reporting from one survey to another simply by using data already available in a self-reported survey, the National Survey on Drug Use and Health. Using domain estimation to examine the stability in reported marijuana use by age 25 in individuals older than 25, this analysis provides estimates of the trends in marijuana reporting and standard errors, as long as the survey weights properly account for sampling variability. There was no significant evidence of an upward or downward trend in reporting changes from 1979 to 2016 for all birth cohorts, although there were significant differences in reporting between years and a slight downward trend in later years. These results suggest that individuals have become increasingly less willing to report their drug use in recent years, and thus the ONDCP likely underestimated the already drastic increase in use from 1992 to 2016.
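The two estimators the abstract contrasts can be sketched in a few lines of Python. This is an illustrative sketch only: the respondents, weights, and the cap at 1.0 are assumptions, not details from the paper.

```python
def adjust_for_underreporting(reported, factor=1.3):
    """Scale a self-reported use proportion by a constant factor
    (e.g., the ONDCP's 1.3), capping the result at 1.0."""
    return min(reported * factor, 1.0)

def weighted_proportion(responses, weights):
    """Survey-weighted proportion reporting use: responses are 0/1
    indicators and weights are the survey weights."""
    return sum(r * w for r, w in zip(responses, weights)) / sum(weights)

# Hypothetical respondents: 1 = reported marijuana use by age 25.
responses = [1, 0, 1, 1, 0]
weights = [1.2, 0.8, 1.0, 0.5, 1.5]

p_hat = weighted_proportion(responses, weights)   # weighted domain estimate
p_adj = adjust_for_underreporting(p_hat)          # constant-factor adjustment
```

The article's point is that `factor` is held fixed across survey years, so any drift in willingness to report passes straight through to the adjusted estimate.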
How do Test Scores at the Ceiling Affect Value-Added Estimates?
Alexandra M. Resch, Eric Isenberg
Statistics and Public Policy, pp. 1–6. Pub Date: 2018-01-01. DOI: 10.1080/2330443X.2018.1460226
ABSTRACT Some educators are concerned that students with test scores at the top of the test score distribution will negatively affect the value-added estimates of teachers of those students. A conventional wisdom has sprung up suggesting that students with very high test scores have “no room to grow,” so value-added estimates for teachers with high-performing students will be depressed even for highly effective teachers. Using empirical data, we show that under normal circumstances, in which few students score at the ceiling, a teacher of high-performing students—even with many students scoring at the ceiling on the pre-test—can have a high value-added estimate. To understand how more extreme ceiling effects can change value-added estimates, we simulate a low ceiling, causing student test achievement data of high-scoring students to become less precise when a single score represents a large range of possible achievement. We find that the problem of test score ceilings for an evaluation system is not that it pushes the value added of every teacher of high-achieving students toward the bottom of the distribution of teachers, but rather that it shrinks it toward the middle.
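The censoring mechanism behind the simulated low ceiling can be seen in a toy gain-score calculation. This is a sketch under stated assumptions, not the paper's value-added model: the score distribution, the ceiling of 90, and the simple mean-gain proxy are all illustrative.

```python
import random

def apply_ceiling(scores, ceiling):
    """Censor each score at the test's maximum attainable score."""
    return [min(s, ceiling) for s in scores]

def gain_score(pre, post):
    """Crude proxy for a teacher's value added: mean pre-to-post growth."""
    return sum(q - p for p, q in zip(pre, post)) / len(pre)

random.seed(0)
# Hypothetical class of high performers whose true growth is +10 points.
true_pre = [random.gauss(85, 5) for _ in range(200)]
true_post = [p + 10 for p in true_pre]

# A low ceiling (here 90) censors many pre- and post-test scores.
capped_pre = apply_ceiling(true_pre, 90)
capped_post = apply_ceiling(true_post, 90)

va_true = gain_score(true_pre, true_post)        # 10 by construction
va_capped = gain_score(capped_pre, capped_post)  # compressed toward 0
```

Students already at the ceiling show zero measured growth, so the estimate for this (truly effective) teacher is pulled toward the middle of the teacher distribution rather than to its bottom.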
Accumulating Evidence of the Impact of Voter ID Laws: Student Engagement in the Political Process
K. McConville, Lynne Stokes, M. Gray
Statistics and Public Policy, 5(1): 1–8. Pub Date: 2017-11-29. DOI: 10.1080/2330443X.2017.1407721
ABSTRACT Recently, voter ID laws have been instituted, modified, or overturned in many states in the U.S. As these laws change, it is important to have accurate measures of their impact. We present the data collection methods and results of class projects that attempted to quantify the impact of the voter ID laws in areas of three states. We also summarize the types of data used to assess the impact of voter ID laws and discuss how our data address some of the shortcomings of the usual assessment techniques.
A Spatial Study of the Location of Superfund Sites and Associated Cancer Risk
R. Amin, Arlene Nelson, S. McDougall
Statistics and Public Policy, 5(1): 1–9. Pub Date: 2017-11-29. DOI: 10.1080/2330443X.2017.1408439
ABSTRACT Superfund sites are geographic locations selected by the U.S. Environmental Protection Agency as having extreme toxic chemical spills. In this article, we address three main research questions: (1) Are there geographical areas where the number (or density) of Superfund sites is significantly higher than in the rest of the USA? (2) Is there an association between cancer incidence and the number (or density) of Superfund sites? (3) Do counties with Superfund sites have higher proportions of minority populations than the rest of the USA? We study the geographic distribution of the overall cancer incidence rate (2007–2011) in addition to the geographic variation of Superfund sites for 2013. We used the disease surveillance software package SaTScan with its scan statistic to identify locations and relative risks of spatial clusters in cancer rates and in Superfund site count and density. We also used the surveillance software FlexScan to support and complement the results obtained with SaTScan. We find that geographic areas with Superfund sites tend to have elevated cancer risk, and also elevated proportions of minority populations.
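At the core of SaTScan is Kulldorff's spatial scan statistic, which scores each candidate cluster by a log-likelihood ratio and keeps the maximum. A minimal sketch of the Poisson version of that score follows; the case counts are hypothetical, and a real analysis would scan over many candidate windows and assess significance by Monte Carlo.

```python
from math import log

def poisson_scan_llr(c, e, C):
    """Kulldorff-style Poisson log-likelihood ratio for one candidate
    cluster: c observed cases inside, e expected inside (from the
    cluster's population share), C total cases in the study region.
    Returns 0 when the cluster shows no excess (c <= e)."""
    if c <= e:
        return 0.0
    inside = c * log(c / e)
    outside = (C - c) * log((C - c) / (C - e)) if C > c else 0.0
    return inside + outside

# Hypothetical cluster: 50 cases observed where 30 were expected,
# out of 1000 cases in the whole study region.
llr = poisson_scan_llr(50, 30, 1000)
```

The cluster with the largest LLR across all scanned windows is the "most likely cluster" that SaTScan reports, with its relative risk c/e.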
On the Distribution of Worker Productivity: The Case of Teacher Effectiveness and Student Achievement
Dan Goldhaber, R. Startz
Statistics and Public Policy, 4(1): 1–12. Pub Date: 2017-01-01. DOI: 10.1080/2330443X.2016.1271733
ABSTRACT It is common to assume that worker productivity is normally distributed, but this assumption is rarely, if ever, tested. We estimate the distribution of worker productivity, where individual productivity is measured with error, using the productivity of teachers as an example. We employ a nonparametric density estimator that explicitly accounts for measurement error using data from the Tennessee STAR experiment, and longitudinal data from North Carolina and Washington. Statistical tests show that the productivity distribution of teachers is not Gaussian, but the differences from the normal distribution tend to be small. Our findings confirm the existing empirical evidence that the differences in the effects of individual teachers on student achievement are large, and support the assumption that the differences in the upper and lower tails of the teacher performance distribution are far larger than in the middle of the distribution. Specifically, a 10 percentile point movement for teachers at the top (90th) or bottom (10th) deciles of the distribution is estimated to move student achievement by 8–17 student percentile ranks, as compared to a change of 2–7 student percentile ranks for a 10 percentile change in teacher productivity in the middle of the distribution.
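The tails-versus-middle pattern in the final sentence can be illustrated with a normal benchmark (the paper finds the actual distribution is only approximately Gaussian). The 0.2 student-SD scale for teacher effects is an illustrative assumption, not a figure from the paper.

```python
from statistics import NormalDist

def effect_gap(dist, p_lo, p_hi):
    """Gap in teacher effect (student SD units) between two percentile
    ranks of the teacher-productivity distribution."""
    return dist.inv_cdf(p_hi) - dist.inv_cdf(p_lo)

# Hypothetical teacher-effect distribution with SD 0.2 student SDs.
teachers = NormalDist(mu=0.0, sigma=0.2)

gap_tail = effect_gap(teachers, 0.85, 0.95)    # 10-point move near the top
gap_middle = effect_gap(teachers, 0.45, 0.55)  # 10-point move in the middle
```

Because the normal quantile function steepens in the tails, the same 10-percentile move corresponds to a much larger change in effect size near the 90th percentile than near the median, matching the 8–17 versus 2–7 contrast above.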
Defining Program Effects: A Distribution-Based Perspective
J. Green, W. Stroup, Pamela S. Fellers
Statistics and Public Policy, pp. 1–10. Pub Date: 2017-01-01. DOI: 10.1080/2330443X.2017.1369914
ABSTRACT In an age of accountability, it is critical to define and estimate the effects of teacher education and professional development programs on student learning in ways that allow stakeholders to explore potential reasons for what is observed and to enhance program quality and fidelity. Across the suite of statistical models used for program evaluation, researchers consistently measure program effectiveness using the coefficients of fixed program effects. We propose that program effects are best characterized not as a single effect to be estimated, but as a distribution of teacher-specific effects. In this article, we first discuss this approach and then describe one way it could be used to define and estimate program effects within a value-added modeling context. Using an example dataset, we demonstrate how program effect estimates can be obtained using the proposed methodology and explain how distributions of these estimates provide additional information and insights about programs that are not apparent when only looking at average effects. By examining distributions of teacher-specific effects as proposed, researchers have the opportunity to more deeply investigate and understand the effects of programs on student success.
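The contrast between a single fixed program effect and the proposed distributional view can be sketched with simulated teacher-specific effects. All numbers here are hypothetical, and the summaries stand in for what would come out of a fitted value-added model.

```python
import random
from statistics import mean, quantiles

random.seed(1)
# Hypothetical teacher-specific program effects in student SD units.
teacher_effects = [random.gauss(0.05, 0.10) for _ in range(150)]

# The conventional summary: one fixed program effect.
avg_effect = mean(teacher_effects)

# The distributional view: quartiles, plus the share of teachers for
# whom the effect is negative, heterogeneity the average conceals.
q1, q2, q3 = quantiles(teacher_effects, n=4)
share_negative = sum(e < 0 for e in teacher_effects) / len(teacher_effects)
```

Two programs with the same `avg_effect` can have very different quartile spreads and shares of negatively affected teachers, which is the information the article argues stakeholders should see.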
Absence of Statistical and Scientific Ethos: The Common Denominator in Deficient Forensic Practices
W. Tobin, H. Sheets, C. Spiegelman
Statistics and Public Policy, 4(1): 1–11. Pub Date: 2017-01-01. DOI: 10.1080/2330443X.2016.1270175
ABSTRACT Comparative Bullet Lead Analysis (CBLA) was discredited as a forensic discipline largely due to the absence of cross-discipline input, primarily metallurgical and statistical, during development and forensic/judicial application of the practice. Of particular significance to the eventual demise of CBLA practice was ignorance of the role of statistics in assessing probative value of claimed bullet “matches” at both the production and retail distribution levels, leading to overstated testimonial claims by expert witnesses. Bitemark comparisons have come under substantial criticism in the last few years, both due to exonerations based on DNA evidence and to research efforts questioning the claimed uniqueness of bitemarks. The fields of fire and arson investigation and of firearm and toolmark comparison are similar to CBLA and bitemarks in the absence of effective statistical support for these practices. The features of the first two disciplines are examined in systematic detail to enhance understanding as to why they became discredited forensic practices, and to identify aspects of the second two disciplines that pose significant concern to critics.
Response to Gelman and Azari (2017)
Corrie V. Hunt
Statistics and Public Policy, pp. 1–3. Pub Date: 2017-01-01. DOI: 10.1080/2330443X.2017.1399845
As Gelman and Azari make clear, there is no single smoking gun to point to as the primary explanation for the 2016 election that took so many of us by surprise. As a pollster at a progressive public opinion research firm, I will admit the election floored me in the most depressing and sickening of ways. It was not because I did not think it was possible. In fact, in the final weeks leading up to the election, I and many of my colleagues grew increasingly fearful that the tightening we saw in internal polls meant that a Clinton victory was far from certain. But I let myself be reassured by the confidence of the analytics projections. One of the most important lessons practitioners and consumers of public opinion research can learn from this experience is to examine much more closely election prediction models (lesson #3) and how nonresponse bias (lesson #5) affects both polls in general and the polls that feed into forecast models. And finally, we cannot let ourselves get so fixated on the horserace numbers that we forget to listen to what voters are actually telling us in the rest of the poll and in qualitative research.
ADGN: An Algorithm for Record Linkage Using Address, Date of Birth, Gender, and Name
S. Ansolabehere, Eitan Hersh
Statistics and Public Policy, pp. 1–10. Pub Date: 2017-01-01. DOI: 10.1080/2330443X.2017.1389620
ABSTRACT This article presents an algorithm for record linkage that uses multiple indicators derived from combinations of fields commonly found in databases. Specifically, the quadruplet of Address (A), Date of Birth (D), Gender (G), and Name (N), as well as any triplet drawn from A-D-G-N (i.e., ADG, ADN, AGN, and DGN), links records with an extremely high likelihood. Matching on multiple identifiers avoids problems of missing data, inconsistent fields, and typographical errors. We show, using a very large database from the State of Texas, that exact matches using combinations of A, D, G, and N produce a rate of matches comparable to the 9-digit Social Security Number. Further examination of the linkage rates shows that reporting of the data at a higher level of aggregation, such as Birth Year instead of Date of Birth, and omission of names, makes correct matches between databases highly unlikely, protecting an individual’s records.
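The matching rule (link on exact agreement of all four ADGN fields, or of any three of them) can be sketched directly. The field names and example records below are hypothetical, and real use would first standardize addresses and names before exact comparison.

```python
def adgn_match(rec1, rec2, fields=("address", "dob", "gender", "name"),
               min_agree=3):
    """Link two records when they agree exactly on all four ADGN fields
    or on any three of them (i.e., ADG, ADN, AGN, or DGN)."""
    agree = sum(rec1[f] == rec2[f] for f in fields)
    return agree >= min_agree

# Hypothetical records: a and b are the same person with a name typo.
a = {"address": "12 Oak St", "dob": "1980-05-02", "gender": "F", "name": "Ann Lee"}
b = {"address": "12 Oak St", "dob": "1980-05-02", "gender": "F", "name": "Anne Lee"}
c = {"address": "9 Elm Ave", "dob": "1975-01-15", "gender": "M", "name": "Bob Cho"}

linked_ab = adgn_match(a, b)  # three fields agree despite the name typo
linked_ac = adgn_match(a, c)  # no fields agree
```

Requiring only three of four agreements is what gives the algorithm its robustness to a single missing, inconsistent, or mistyped field.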
Unraveling 2016: Comments on Gelman and Azari's 19 Things
J. Victor
Statistics and Public Policy, pp. 1–3. Pub Date: 2017-01-01. DOI: 10.1080/2330443X.2017.1399846
Scholars, pundits, and wonks will be studying the 2016 election for a long time. The sheer number of unprecedented elements of the 2016 U.S. elections produced some shock fatigue and left even seasoned election watchers scratching their heads (Fallows 2017). Drawing on insights from data science, statistics, and political science, Julia Azari and Andrew Gelman identify an impressive 19 potentially productive threads to pull on in our attempt to unravel the mysteries of 2016. There are so many features of the 2016 election that strayed from the status quo that, like a spoiled experimental design, it is challenging for scholars to explain exactly why the election turned in the surprising ways it did. To name just a few, 2016 included the first female major party candidate, the first modern election with evidence of undue foreign influence, the first election with a nominee who had no government or military experience of any kind, and the list goes on. While some may find the Gelman–Azari treatment dissatisfying for being too shallow on any individual point, too contrived, or just too long of a list, I submit that their holistic approach to breaking down the oddities of 2016 is necessary given the circumstances. Here, I focus on four of the items on their list: two that I find worth underscoring as deserving further exploration, and two that are perhaps too complex to pursue, even if perfectly valid.