Ned Augenblick and Matthew Rabin, "Belief Movement, Uncertainty Reduction, and Rational Updating," Quarterly Journal of Economics 136 (2021): 933-985. doi:10.1093/qje/qjaa043
When a Bayesian learns new information and changes her beliefs, she must on average become concomitantly more certain about the state of the world. Consequently, it is rare for a Bayesian to frequently shift beliefs substantially while remaining relatively uncertain or, conversely, to become very confident with relatively little belief movement. We formalize this intuition by developing specific measures of movement and uncertainty reduction given a Bayesian's changing beliefs over time, showing that these measures are equal in expectation and deriving the resulting statistical tests for Bayesianness. We then show connections between these two core concepts and four common psychological biases, suggesting that the test might be particularly good at detecting these biases. We support this conclusion by simulating the performance of our test and other martingale tests. Finally, we apply our test to data sets of individual, algorithmic, and market beliefs.
Matthias Kehrig and Nicolas Vincent, "The Micro-Level Anatomy of the Labor Share Decline," Quarterly Journal of Economics 136 (2021): 1031-1087. doi:10.1093/qje/qjab002
The labor share in U.S. manufacturing declined from 62% in 1967 to 41% in 2012. The labor share of the typical U.S. manufacturing establishment, in contrast, rose by more than 3 percentage points over the same period. Using micro-level data, we document five salient facts: (1) since the 1980s, there has been a dramatic reallocation of value added toward the lower end of the labor share distribution; (2) this aggregate reallocation is not due to entry/exit, to "superstars" growing faster, or to large establishments lowering their labor shares, but is instead due to units whose labor share fell as they grew in size; (3) low-labor-share (LL) establishments benefit from high revenue labor productivity, not low wages; (4) they also enjoy a product price premium relative to their peers, pointing to a significant role for demand-side forces; and (5) their labor shares are only temporarily low, rebounding after five to eight years. This transient pattern has become more pronounced over time, and the dynamics of value added and employment have become increasingly disconnected.
Kristian Behrens, Giordano Mion, Yasusada Murata, and Jens Suedekum, "Quantifying the Gap Between Equilibrium and Optimum under Monopolistic Competition," Quarterly Journal of Economics 135 (2020): 2299-2360. doi:10.1093/qje/qjaa017
Equilibria and optima generally differ in imperfectly competitive markets. While this is well understood theoretically, it is unclear how large the welfare distortions are in the aggregate economy. Do they matter quantitatively? To answer this question, we develop a multi-sector monopolistic competition model with endogenous firm entry and selection, productivity, and markups. Using French and British data, we quantify the gap between the equilibrium and optimal allocations. In our preferred specification, inefficiencies in the labor allocation and entry between sectors, as well as inefficient selection and output per firm within sectors, generate welfare losses of about 6–10% of GDP.
Stephan Heblich, Stephen J. Redding, and Daniel M. Sturm, "The Making of the Modern Metropolis: Evidence from London," Quarterly Journal of Economics 135 (2020): 2059-2133. doi:10.1093/qje/qjaa014
Using newly constructed spatially disaggregated data for London from 1801 to 1921, we show that the invention of the steam railway led to the first large-scale separation of workplace and residence. We show that a class of quantitative urban models is remarkably successful in explaining this reorganization of economic activity. We structurally estimate one of the models in this class and find substantial agglomeration forces in both production and residence. In counterfactuals, we find that removing the whole railway network reduces the population and the value of land and buildings in London by up to 51.5% and 53.3%, respectively, and decreases net commuting into the historical center of London by more than 300,000 workers.
Daniel Barron and Yingni Guo, "The Use and Misuse of Coordinated Punishments," Quarterly Journal of Economics (advance access, October 2020). doi:10.1093/qje/qjaa035
Communication facilitates cooperation by ensuring that deviators are collectively punished. We explore how players might misuse communication to threaten one another, and we identify ways that organizations can deter misuse and restore cooperation. In our model, a principal plays trust games with a sequence of short-run agents who communicate with one another. An agent can shirk and then extort pay by threatening to report that the principal deviated. We show that these threats can completely undermine cooperation. Investigations of agents' efforts, or dyadic relationships between the principal and each agent, can deter extortion and restore some cooperation. Investigations of the principal's action, on the other hand, typically do not help. Our analysis suggests that collective punishments are vulnerable to misuse unless they are designed with an eye toward discouraging it. JEL codes: C73, D02, D70.
Ellora Derenoncourt and Claire Montialoux, "Minimum Wages and Racial Inequality," Quarterly Journal of Economics (advance access, September 2020). doi:10.1093/qje/qjaa031
The earnings difference between white and black workers fell dramatically in the United States in the late 1960s and early 1970s. This article shows that the expansion of the minimum wage played a critical role in this decline. The 1966 Fair Labor Standards Act extended federal minimum wage coverage to agriculture, restaurants, nursing homes, and other services that were previously uncovered and where nearly a third of black workers were employed. We digitize over 1,000 hourly wage distributions from Bureau of Labor Statistics industry wage reports and use CPS microdata to investigate the effects of this reform on wages, employment, and racial inequality. Using a cross-industry difference-in-differences design, we show that earnings rose sharply for workers in the newly covered industries. The impact was nearly twice as large for black workers as for white workers. Within treated industries, the racial gap adjusted for observables fell from 25 log points prereform to 0 afterward. We can rule out significant disemployment effects for black workers. Using a bunching design, we find no aggregate effect of the reform on employment. The 1967 extension of the minimum wage can explain more than 20% of the reduction in the racial earnings and income gap during the civil rights era. Our findings shed new light on the dynamics of labor market inequality in the United States and suggest that minimum wage policy can play a critical role in reducing racial economic disparities. JEL codes: J38, J23, J15, J31.
Benjamin Enke, "What You See Is All There Is," Quarterly Journal of Economics 135 (2020): 1363-1398. doi:10.1093/qje/qjaa012
News reports and communication are inherently constrained by space, time, and attention. As a result, news sources often condition the decision of whether to share a piece of information on the similarity between the signal and the prior belief of the audience, which generates a sample selection problem. This article experimentally studies how people form beliefs in these contexts, in particular the mechanisms behind errors in statistical reasoning. I document that a substantial fraction of experimental participants follows a simple "what you see is all there is" heuristic, according to which participants exclusively consider information that is right in front of them and directly use the sample mean to estimate the population mean. A series of treatments aimed at identifying mechanisms suggests that for many participants, unobserved signals do not even come to mind. I provide causal evidence that the frequency of such incorrect mental models is a function of the computational complexity of the decision problem. These results point to the context dependence of what comes to mind and the resulting errors in belief updating.
Andrew Caplin, Dániel Csaba, John Leahy, and Oded Nov, "Rational Inattention, Competitive Supply, and Psychometrics," Quarterly Journal of Economics 135 (2020): 1681-1724. doi:10.1093/qje/qjaa011
We introduce a simple method of recovering attention costs from choice data. Our method rests on a precise analogy with production theory. Costs of attention determine consumer demand and consumer welfare, just as a competitive firm's technology determines its supply curve and profits. We implement our recovery method experimentally, outline applications, and link our work to the broader literature on inattention and mistaken decisions.
Raj Chetty, John N. Friedman, Emmanuel Saez, Nicholas Turner, and Danny Yagan, "Income Segregation and Intergenerational Mobility Across Colleges in the United States," Quarterly Journal of Economics 135 (2020): 1567-1633. doi:10.1093/qje/qjaa005
We construct publicly available statistics on parents' incomes and students' earnings outcomes for each college in the United States using deidentified data from tax records. These statistics reveal that the degree of parental income segregation across colleges is very high, similar to that across neighborhoods. Differences in postcollege earnings between children from low- and high-income families are much smaller among students who attend the same college than across colleges. Colleges with the best earnings outcomes predominantly enroll students from high-income families, although a few mid-tier public colleges have both low parent income levels and high student earnings. Linking these income data to SAT and ACT scores, we simulate how changes in the allocation of students to colleges affect segregation and intergenerational mobility. Equalizing application, admission, and matriculation rates across parental income groups conditional on test scores would reduce segregation substantially, primarily by increasing the representation of middle-class students at more selective colleges. However, it would have little effect on the fraction of low-income students at elite private colleges because there are relatively few students from low-income families with sufficiently high SAT/ACT scores. Differences in parental income distributions across colleges could be eliminated by giving low- and middle-income students a sliding-scale preference in the application and admissions process similar to that implicitly given to legacy students at elite private colleges. Assuming that 80% of observational differences in students' earnings conditional on test scores, race, and parental income are due to colleges' causal effects (a strong assumption, but one consistent with prior work), such changes could reduce intergenerational income persistence among college students by about 25%. We conclude that changing how students are allocated to colleges could substantially reduce segregation and increase intergenerational mobility, even without changing colleges' educational programs.
Yuichiro Kamada and Takuo Sugaya, "Optimal Timing of Policy Announcements in Dynamic Election Campaigns," Quarterly Journal of Economics 135 (2020): 1725-1797. doi:10.1093/qje/qjaa010
We construct a dynamic model of election campaigns. In the model, opportunities for candidates to refine or clarify their policy positions are limited and arrive stochastically over the course of the campaign until the predetermined election date. We show that this simple friction leads to rich and subtle campaign dynamics. We first demonstrate these effects in a series of canonical static models of elections that we extend to dynamic settings, including models with valence and a multidimensional policy space. We then present general principles that underlie the results from those models. In particular, we establish that in equilibrium, candidates spend a long time using ambiguous language during the election campaign.