Pub Date: 2017-01-01 | DOI: 10.1080/2330443X.2017.1399842
R. Y. Shapiro
How soon we forget, and Gelman and Azari did not mention what baseball legend and language master Yogi Berra would have reminded us regarding the 2016 election polling: (1) “It’s ‘déjà vu’ all over again!” And (2) “...But the similarities are different!” (see Shapiro 2017a). This election hearkened back to the 1936 and especially the 1948 elections, in which pollsters—as both pollsters and pundits—demonstrated unadulterated arrogance or hubris. In 1936 the folks at The Literary Digest magazine flaunted the prediction, based on their multiple-million-ballot straw poll (which had been mailed to their subscribers and to names from telephone, car registration, and other lists—lists with a distinctly upper-status bias), that Alfred Landon would defeat President Franklin Roosevelt. The poll had gotten the winner right in every election from 1916 through FDR in 1932, so what could go wrong? Everything, thanks to the political realignment in which lower-status voters missed by the straw poll disproportionately broke toward the Democrat Roosevelt. That year the more “scientific” pollsters George Gallup, Elmo Roper, and Archibald Crossley (that is, those engaging in something closer to, but still far from, probability sampling) predicted an easy Roosevelt victory and put the Digest to shame (it went out of business not long afterward). But Crossley and Gallup—who was then and still is the most famous of the lot—still underestimated Roosevelt’s vote (60.7%) by fully 7 percentage points (compared to the Digest’s 20 points), and Gallup continued to underestimate Roosevelt’s vote in the next two elections. So something was still amiss in the polls. The question of poll accuracy during this time, as the pollsters announced their predictions, got some attention, including calls for a congressional investigation of the polls (on this largely forgotten point, see especially Fried (2012)).
Title: “What We Relearned and Learned from the 2016 Elections: Comment on Gelman and Azari” | Statistics and Public Policy, pp. 1–3
Pub Date: 2017-01-01 | DOI: 10.1080/2330443X.2017.1317223
L. Billard
ABSTRACT Although it is 45 years since legislation made gender discrimination on university campuses illegal, salary inequities continue to exist today. The seminal work in studying the existence of salary inequities is that of the American Association of University Professors (AAUP), by Scott (1977) and Gray (1980). Subsequently, innumerable analyses based on versions of their multiple regression model have been published. Salary is the dependent variable and is modeled to depend on various independent predictor variables such as years employed. Often, indicator terms for gender and/or discipline are included in the model as independent predictor variables. Unfortunately, many of these studies are not well grounded in basic statistical science. The most glaring omission is the failure to include indicator-by-predictor interaction terms in the model when required. The present work draws attention to the broader implications of using these models incorrectly, and the difficulties that ensue when they are not built on a sound statistical framework. Another issue surrounds the inclusion of “tainted” predictor variables that are themselves gender-biased, the most contentious being the (intuitive) choice of rank. Therefore, a brief look at this issue is included; unfortunately, it is shown that rank still seems to persist as a tainted variable today.
Title: “Study of Salary Differentials by Gender and Discipline” | Statistics and Public Policy, vol. 4, pp. 1–14
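The misspecification the abstract warns about can be made concrete with a small simulation. This is a hypothetical sketch, not the paper's data or model: salaries are generated so that the per-year return differs by gender, and an indicator-only regression is compared with one that includes the indicator-by-predictor interaction term.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
years = rng.uniform(0, 30, n)           # years employed
female = rng.integers(0, 2, n)          # gender indicator (simulated data)

# True process: the return per year of service differs by gender,
# so the gender effect is not a constant shift in salary.
salary = 50 + 2.0 * years - 0.5 * female * years + rng.normal(0, 2, n)

# Misspecified model: gender indicator only, no interaction term.
X_no_int = np.column_stack([np.ones(n), years, female])
b_no_int, *_ = np.linalg.lstsq(X_no_int, salary, rcond=None)

# Correct model: includes the female-by-years interaction term.
X_int = np.column_stack([np.ones(n), years, female, female * years])
b_int, *_ = np.linalg.lstsq(X_int, salary, rcond=None)

print("no interaction, single 'gender gap' coefficient:", round(b_no_int[2], 2))
print("with interaction, slope difference by gender:   ", round(b_int[3], 2))
```

Without the interaction term, the single "gap" coefficient averages a disparity that actually grows with seniority, which is precisely the failure mode described above.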
Pub Date: 2017-01-01 | DOI: 10.1080/2330443X.2017.1400298
Julia Azari, A. Gelman
Five responses from leading scholars of American politics have given us a great deal to think about. Several themes emerge from the responses. The first is the challenge of addressing how relevant the 2016 election will be for understanding the future of American politics. Several of the discussants also challenge our thinking about the role of white working class pundits, and about how political scientists should think about demographics and politics more generally. In the study of comparative politics, the literature on case selection demands that scholars answer the question, “What kind of case is this?” before proceeding; see for example Gerring and Seawright (2008). Looking forward, is 2016 typical, with some unusual features, or will it in retrospect seem unusual? The answer to this question depends on the research question and the variables of interest. As a result, elections scholars may need to think more deeply about the kinds of questions we pursue and the theoretical assumptions we make. However, we must also wait to find out the impact of 2016 on subsequent contests. As we attempt to classify the 2016 election, we are stuck doing some guesswork. Noel urges scholars to ask how an outlier can sharpen our theories. Masket and Victor both pose the question of whether last year’s contest will turn out to have been anomalous or a new normal. Finally, Shapiro asks whether the election was really so unusual after all. These different classifications suggest not just different interpretations, but that the implications of 2016 depend on what the researcher seeks to explain.
Title: “Rejoinder: How Special was 2016?” | Statistics and Public Policy, vol. 4, pp. 1–3
Pub Date: 2017-01-01 | DOI: 10.1080/2330443X.2016.1267599
A. Kirpich, E. Leary
ABSTRACT Uncontrolled hazardous waste sites have the potential to adversely impact human health and damage or disrupt ecological systems and the greater environment. Four decades have passed since the Superfund law was enacted, allowing increased exposure time to these potential health hazards while also allowing advancement of analysis techniques. Florida has the sixth highest number of Superfund sites in the US and, in 2016, Florida was projected to have the second largest number of new cancer cases in the US. We explore statewide cancer incidence in Florida from 1986 to 2010 to determine if differences or associations exist in counties containing Superfund sites compared to counties that do not. To investigate potential environmental associations with cancer incidence, results using spatial and nonspatial mixed models were compared. Using a Poisson–Gamma mixture model, our results provide some evidence of an association between cancer incidence rates and Superfund site hazard levels, as well as proxy measures of water contamination around Superfund sites. In addition, results build upon previously observed gender differences in cancer incidence rates and further indicate spatial differences for cancer incidence. Heterogeneity among cancer incidence rates was observed across Florida, with some mild association with Superfund exposure proxies.
Title: “Superfund Locations and Potential Associations with Cancer Incidence in Florida” | Statistics and Public Policy, vol. 4, pp. 1–9
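A minimal sketch of why a Poisson–Gamma mixture suits county-level counts (the rates, shape parameter, and populations here are hypothetical, not the paper's estimates): letting each county's incidence rate be Gamma-distributed around a common mean produces the extra-Poisson variation, or overdispersion, that a plain Poisson model cannot absorb.

```python
import numpy as np

rng = np.random.default_rng(1)
n_counties = 1000
population = rng.integers(10_000, 500_000, n_counties)

# Poisson-Gamma mixture: each county's incidence rate is Gamma-distributed
# around a common mean, capturing unexplained county-level heterogeneity.
base_rate = 5e-4                 # hypothetical cases per person-year
shape = 10.0                     # Gamma shape; smaller => more overdispersion
county_rate = rng.gamma(shape, base_rate / shape, n_counties)
cases = rng.poisson(county_rate * population)

# Under a plain Poisson model the Pearson residuals would have variance
# near 1; the mixture makes the variance far larger.
expected = base_rate * population
pearson = (cases - expected) / np.sqrt(expected)
print("Pearson-residual variance:", round(pearson.var(), 2))
```

Fitting the mixture (equivalently, a negative binomial model) lets the hazard-level and water-contamination covariates be assessed without the spurious precision a pure Poisson fit would claim.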
Pub Date: 2017-01-01 | DOI: 10.1080/2330443X.2017.1399844
Hans Noel
No one needs to be told that 2016 was an unusual election year. For social science, its strangeness has two implications. First, it is a learning opportunity. Whether we think of 2016 as a high-leverage case or as off the equilibrium path, an unusual case gives perspective that we do not usually get to see. This is the potential that Julia Azari and Andrew Gelman have exploited. Second, however, is that unusual cases are, well, unusual. They are often outliers. They differ on multiple dimensions, and we may not know why they came about. Lessons from them may not generalize. The election of 2016 was unusual or even unprecedented in so many ways. Not only do we want to be cautious about extrapolation, but the way we learn from outliers is different from the way we learn from typical cases. They can function as much as counterfactuals as cases, unless, of course, we think they are harbingers of a new normal. It is notable how many of the things Azari and Gelman note we learned from 2016 were things that at least some social scientists had already articulated. And I would argue that many of the others may not be as large as they are portrayed here. Despite the outrageousness of the 2016 election in so many ways, its lessons are mostly modest revisions of well-established work or raising still unanswered questions about less-established work. I think Azari and Gelman would agree. Most of their points come with caveats that predict my reactions. I think if we amplify the caveats over the initial points, we get a very different thesis. The 2016 election was a strange one, but one that can be explained fairly well by existing social science theory, once we know the parameters. With this in mind, a few reactions to some of the points raised by A&G.
Title: “What We Learn From Unusual Cases: A Review of Azari and Gelman's ‘19 Things We Learned From the 2016 Election’” | Statistics and Public Policy, pp. 1–3
Pub Date: 2017-01-01 | DOI: 10.1080/2330443X.2017.1374897
S. Keller, V. Lancaster, S. Shipp
ABSTRACT Existing data flows at the local level (public and administrative records, geospatial data, social media, and surveys) are ubiquitous in our everyday life. The Community Learning Data-Driven Discovery (CLD3) process liberates, integrates, and makes these data available to government leaders and researchers to tell their community's story. These narratives can be used to build an equitable and sustainable social transformation within and across communities to address their most pressing needs. CLD3 is scalable to every city and county across the United States through an existing infrastructure maintained by collaboration between U.S. Public and Land Grant Universities and federal, state, and local governments. The CLD3 process starts with asking local leaders to identify questions they cannot answer and the potential data sources that may provide insights. The data sources are profiled, cleaned, transformed, linked, and translated into a narrative using statistical and geospatial learning along with the communities' collective knowledge. These insights are used to inform policy decisions and to develop, deploy, and evaluate intervention strategies grounded in scientific principles. CLD3 is a continuous, sustainable, and controlled feedback loop.
Title: “Building Capacity for Data-Driven Governance: Creating a New Foundation for Democracy” | Statistics and Public Policy, pp. 1–11
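The profile-clean-link stage of the process can be illustrated with a toy example. The record sets, field names, and `profile`/`link` helpers below are invented for illustration only; they are not part of CLD3 itself, which operates on far richer administrative data.

```python
# Hypothetical local data sources sharing a parcel identifier.
permits = [
    {"parcel": "12-001", "issued": "2016-03-01", "type": "repair"},
    {"parcel": "12-002", "issued": "2016-07-15", "type": "demolition"},
]
complaints = [
    {"parcel": "12-002", "date": "2016-06-30", "issue": "blight"},
    {"parcel": "12-003", "date": "2016-08-02", "issue": "noise"},
]

def profile(records, name):
    """Report record count and missing-value count for one data source."""
    fields = {k for r in records for k in r}
    missing = sum(1 for r in records for k in fields if r.get(k) in (None, ""))
    print(f"{name}: {len(records)} records, {missing} missing values")

def link(left, right, key):
    """Inner-join two record lists on a shared key (here, the parcel id)."""
    index = {r[key]: r for r in right}
    return [{**l, **index[l[key]]} for l in left if l[key] in index]

profile(permits, "permits")
profile(complaints, "complaints")
linked = link(permits, complaints, "parcel")
print(linked)  # only parcel 12-002 appears in both sources
```

The linked records, not either source alone, are what support the community narrative: here, a demolition permit follows a blight complaint on the same parcel.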
Pub Date: 2017-01-01 | DOI: 10.1080/2330443X.2017.1356775
A. Gelman, Julia Azari
ABSTRACT We can all agree that the presidential election result was a shocker. According to news reports, even the Trump campaign team was stunned to come up a winner. So now seems like a good time to go over various theories floating around in political science and political reporting and see where they stand, now that this turbulent political year has drawn to a close. In the present article, we go through several things that we as political observers and political scientists have learned from the election, and then discuss implications for the future.
Title: “19 Things We Learned from the 2016 Election” | Statistics and Public Policy, vol. 4, pp. 1–10
Pub Date: 2017-01-01 | DOI: 10.1080/2330443X.2017.1399843
S. Masket
Scholars will be analyzing the 2016 presidential election for many years to come, and Julia Azari and Andrew Gelman have done an excellent job laying out many of the important lessons to emerge and...
Title: “Response to Azari and Gelman” | Statistics and Public Policy, vol. 4, pp. 1–2
Pub Date: 2017-01-01 | DOI: 10.1080/2330443X.2017.1294037
D. Wright
ABSTRACT Value-added models (VAMs) of student test scores are used within education because they are supposed to measure school and teacher effectiveness well. Much research has compared VAM estimates for different models, with different measures (e.g., observation ratings), and in experimental designs. VAMs are considered here from the perspective of graphical models and situations are identified that are problematic for VAMs. If the previous test scores are influenced by variables that also influence the true effectiveness of the school/teacher and there are variables that influence both the previous and current test scores, then the estimates of effectiveness can be poor. Those using VAMs should consider the models that may give rise to their data and evaluate their methods for these models before using the results for high-stakes decisions.
Title: “Using Graphical Models to Examine Value-Added Models” | Statistics and Public Policy, vol. 4, pp. 1–7
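The problematic situation the abstract identifies can be simulated directly. In this hypothetical setup (not the paper's own example), students are nonrandomly assigned to teachers and the pretest measures ability with error, so a VAM-style regression attributes residual ability differences to the teacher even though the true teacher effect is zero.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
ability = rng.normal(0, 1, n)              # unobserved student ability

# Nonrandom assignment: stronger students tend to get teacher 1,
# even though both teachers are equally effective (true effect = 0).
teacher = (ability + rng.normal(0, 1, n) > 0).astype(float)

prior = ability + rng.normal(0, 1, n)      # noisy pretest score
current = ability + 0.0 * teacher + rng.normal(0, 1, n)

# VAM-style adjustment: regress the current score on the prior score
# and the teacher indicator.
X = np.column_stack([np.ones(n), prior, teacher])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
print("estimated 'teacher effect':", round(beta[2], 2))  # biased away from 0
```

Because the pretest only partially controls for ability, the teacher coefficient soaks up the leftover confounding, illustrating why the graphical-model structure behind the data must be checked before VAM estimates are used for high-stakes decisions.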
Pub Date: 2017-01-01 | DOI: 10.1080/2330443X.2017.1358125
James E Ciecka, Gary R. Skoog
ABSTRACT We find and estimate probability mass functions for labor force related random variables. Complete life expectancy (by age, gender, and two years of labor force history) is decomposed into expected years of future labor force activity and inactivity as well as into expected years until final separation from the labor force and expected years in retirement. We also calculate expected age at retirement and expected years in retirement for people who actually retire. Two consecutive years of inactivity, especially in middle age, is a key indicator for both men and women when accounting for future labor force participation and retirement. For example, women (men) who are out of the labor force at age 49 and again out of the labor force at age 50, can expect to be in the labor force seven (eight) fewer years in the future than their counterparts who were in the labor force at ages 49 and 50. In addition, they have expected retirement ages 4.5–5.5 years younger than their active counterparts.
Title: “Expected Labor Force Activity and Retirement Behavior by Age, Gender, and Labor Force History” | Statistics and Public Policy, vol. 4, pp. 1–8
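The decomposition of remaining years into expected active and inactive time can be sketched as a two-state Markov chain solved by backward recursion. The transition probabilities and age range below are hypothetical illustrations, not the paper's estimates, which condition on age, gender, and two years of labor force history.

```python
def expected_active_years(p_stay_active, p_reenter, start_age, end_age):
    """Expected future years of activity from each state of a two-state
    active/inactive chain, by backward recursion from end_age."""
    e_active = 0.0      # expected future active years if active at end_age
    e_inactive = 0.0    # ... if inactive at end_age
    for _ in range(end_age - start_age):
        # An active person counts the current year, then transitions.
        new_active = 1.0 + p_stay_active * e_active + (1 - p_stay_active) * e_inactive
        new_inactive = p_reenter * e_active + (1 - p_reenter) * e_inactive
        e_active, e_inactive = new_active, new_inactive
    return e_active, e_inactive

active, inactive = expected_active_years(0.95, 0.20, 50, 70)
print(f"active at 50:   {active:.1f} expected future active years")
print(f"inactive at 50: {inactive:.1f} expected future active years")
```

The persistent gap between the two starting states mirrors the paper's key finding: recent inactivity, especially in middle age, translates into substantially fewer expected future years in the labor force.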