Comparing US gross domestic product to the sum of measured payments to labor and imputed rental payments to capital results in a large and volatile residual or “factorless” income. We analyze three common strategies of allocating and interpreting factorless income, specifically that it arises from economic profits (case Π), unmeasured capital (case K), or deviations of the rental rate of capital from standard measures based on bond returns (case R). We are skeptical of case Π because it reveals a tight negative relationship between real interest rates and economic profits, leads to large fluctuations in inferred factor-augmenting technologies, and results in profits that have risen since the early 1980s but that remain lower today than in the 1960s and 1970s. Case K shows how unmeasured capital plausibly accounts for all factorless income in recent decades, but its value in the 1960s would have to be more than half of the capital stock, which we find less plausible. We view case R as most promising as it leads to more stable factor shares and technology growth than the other cases, though we acknowledge that it requires an explanation for the pattern of deviations from common measures of the rental rate. Using a model with multiple sectors and types of capital, we show that our assessment of the drivers of changes in output, factor shares, and functional inequality depends critically on the interpretation of factorless income.
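The residual the abstract describes can be written as a simple accounting identity. In illustrative notation (not necessarily the paper's own), with nominal output \(P_t Y_t\), measured labor payments \(W_t L_t\), and imputed rental payments to capital \(R_t K_t\):

```latex
\text{factorless income}_t \;=\; P_t Y_t \;-\; W_t L_t \;-\; R_t K_t .
```

Case Π attributes this residual to economic profits, case K to an understated capital stock \(K_t\), and case R to a rental rate that deviates from bond-based measures of \(R_t\).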
Karabarbounis, Loukas, and Brent Neiman. "Accounting for Factorless Income." NBER Macroeconomics Annual 33 (2019). https://doi.org/10.1086/700894
The authors opened the discussion by thanking Richard Rogerson and Matthew Rognlie for their comments. They expressed their appreciation for Rognlie’s effort to frame their paper in the context of the literature. The authors also shared his skepticism about the implications of case Π. Andrew Atkeson spoke next and pointed out that the ratio of after-tax net operating surplus to the capital stock for nonfinancial corporations has remained roughly constant since the 1960s, fluctuating between 6% and 8%. In support of this statement, Atkeson cited figures from the Bureau of Economic Analysis’s annual report on the “Returns for Domestic Nonfinancial Business.” Atkeson argued that the literature has mostly focused on decomposing this series into various components: the return on observed and unobserved physical capital, the return on intangible capital, and monopoly markups. In his view, the relevant source of variation in factorless income is government bond yields. Atkeson noted that a balanced growth model, where the return on capital is stochastic and has a mean of roughly 7%, would be consistent with the empirical evidence on the behavior of after-tax net operating surplus. In this model, the net operating surplus is entirely attributed to the return on physical capital. The authors responded that case R in their paper focuses precisely on the role of bond yields. Although Atkeson’s neoclassical benchmark implies zero profits, the authors mentioned that there is no consensus about the importance of profits and their evolution over time. In addition, there has been growing interest recently in the evolution of markups over the past few decades. The authors noted that when profits are not zero, the counterpart to Atkeson’s measure of profits corresponds to the return on capital (R) plus firms’ profits divided by the capital stock (Π/K).
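The authors' closing point can be stated compactly. If \(\Pi_t\) denotes profits and \(K_t\) the capital stock, the measured ratio of net operating surplus to capital that Atkeson cites decomposes as (illustrative notation):

```latex
\frac{R_t K_t + \Pi_t}{K_t} \;=\; R_t \;+\; \frac{\Pi_t}{K_t},
```

so a roughly constant 6%–8% series is consistent either with the zero-profit benchmark (\(\Pi_t = 0\)) or with offsetting movements in \(R_t\) and \(\Pi_t / K_t\).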
"Discussion." NBER Macroeconomics Annual 33 (2019): 249–251. https://doi.org/10.1086/700912
In most macroeconomic models, time is infinite. Agents are endowed with rational expectations including the cognitive ability to solve complex infinite-horizon planning problems. This is a heroic assumption; but when does it matter? In “Monetary Policy Analysis When Planning Horizons Are Finite,” Michael Woodford reconsiders this unrealistic feature, introduces a novel bounded-rationality framework to address it, and explores under what circumstances this affects the policy conclusions of the standard New Keynesian paradigm. Woodford develops a new cognitive framework in which agents transform their infinite-horizon problem into a sequence of simpler, finite-horizon ones. The solution method used by the agent is to backward induct over a finite set of periods given some perceived value function he has assigned to his perceived terminal nodes. This solution method seems quite natural; in fact, Woodford is motivated by a beautiful analogy to how state-of-the-art artificial intelligence (AI) programs play the games of chess or go. Take chess—a game with a finite strategy space and thereby in theory solvable via backward induction. In practice, however, the space of strategies is so large that solving the game in this fashion would require unfathomable processing power. Consider then the most effective AI programs. A typical decision-making process may be described as follows: at each turn, the machine looks forward at all possible moves for both itself and its opponent a finite number of turns, thereby creating a decision tree with finite nodes. It assigns a value to each of the different possible terminal nodes; these values may be based on past experience or data. Finally, given these terminal node values, the machine backward
La'O, Jennifer. "Comment." NBER Macroeconomics Annual 33 (2019): 51–66. https://doi.org/10.1086/700898
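The depth-limited lookahead described in La'O's comment can be sketched in a few lines. This is an illustrative toy, not Woodford's model or any actual chess engine: `actions`, `transition`, `reward`, and `terminal_value` are hypothetical callables supplied by the user, with `terminal_value` standing in for the value function learned from experience.

```python
def plan(state, depth, actions, transition, reward, terminal_value):
    """Depth-limited backward induction.

    Looks ahead `depth` steps, prices the frontier states with the
    (learned) terminal_value function, and folds values back to the
    root. Returns (value, best_first_action).
    """
    if depth == 0:
        return terminal_value(state), None
    best_value, best_action = float("-inf"), None
    for action in actions(state):
        next_state = transition(state, action)
        continuation, _ = plan(next_state, depth - 1, actions,
                               transition, reward, terminal_value)
        value = reward(state, action) + continuation
        if value > best_value:
            best_value, best_action = value, action
    return best_value, best_action


# Toy usage: each period the agent either skips (0) or acts (1) for a
# unit reward; with a three-step horizon and a zero terminal value, the
# planner acts every period, for a total value of 3.
value, first_action = plan(
    state=0,
    depth=3,
    actions=lambda s: [0, 1],
    transition=lambda s, a: s + a,
    reward=lambda s, a: a,
    terminal_value=lambda s: 0,
)
```

A longer horizon or a better-trained `terminal_value` moves the plan closer to the full infinite-horizon solution, which is exactly the margin Woodford's framework varies.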
“Monetary Policy Analysis When Planning Horizons Are Finite” by Michael Woodford fits in a fast-growing literature that attempts to introduce forms of bounded rationality in macroeconomic models. Bounded rationality can be introduced in a variety of ways, depending on how we describe the agents’ limited ability to process information, to form forecasts, and to compute optimal plans. The paper I am discussing captures bounded rationality by giving agents a finite planning horizon and exploring in depth a variety of consequences of this modeling assumption. The paper provides a nice motivation for the exercise by connecting the macro literature to existing work in artificial intelligence. In this discussion I want to make two points, one on the role of general equilibrium effects and one on the difference between finite lives and finite planning horizons. There is one dimension of bounded rationality that appears in different forms in a variety of models: the limited capacity of agents to think through general equilibrium effects in their environment. My first point is that this limited capacity for general equilibrium thinking also plays an important role in this paper. To make this point, let me use a simple example of the “forward guidance puzzle” (Del Negro, Giannoni, and Patterson 2012), inspired by Farhi and Werning (2017). Take an infinitely lived consumer, with standard time-separable preferences, who receives a deterministic stream of labor income {Yt} and has access to a single bond that pays the real interest rate rt. The optimal behavior of this consumer can be derived from the Euler equation
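The Euler equation invoked here is the standard one. With time-separable utility \(u\), discount factor \(\beta\), consumption \(C_t\), and real rate \(r_t\):

```latex
u'(C_t) \;=\; \beta\,(1 + r_t)\, u'(C_{t+1}),
```

Iterating this condition forward ties current consumption to the entire path of future real rates, which is why announced far-future rate changes have implausibly large effects under rational expectations—the forward guidance puzzle.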
Lorenzoni, G. "Comment." NBER Macroeconomics Annual 33 (2019): 67–74. https://doi.org/10.1086/700899
The paper by Kozlowski, Veldkamp, and Venkateswaran argues that economic agents rationally revised their estimates of tail risk following the Great Recession and that this revision explains, at least in part, the persistent decline of interest rates on safe and liquid assets such as US Treasury securities. In a previous paper (Kozlowski, Veldkamp, and Venkateswaran 2015), the authors argued that the same belief revision can explain the slow recovery of investment and output. One important contribution of this work is methodological: they propose a tractable approach to embedding learning dynamics in fairly standard quantitative models. Substantively, the overall argument is quite plausible, and I believe the remaining issues are really quantitative: How much did people’s beliefs about tail risk change after the Great Recession? And how sensitive are interest rates (in this paper) or economic activity (in the previous paper) to perceived tail risk? In this discussion, I will address the first question briefly, before turning to the second, and then dissect the mechanisms through which interest rates depend on tail risk in the paper. In Kozlowski and colleagues’ model, the risk-free asset combines two qualities: it is safe, and it is excellent collateral. Conceptually, one can separate these two characteristics, even though they are joint in the model and, to some extent, in the data. This allows us to distinguish two mechanisms through which higher tail risk increases the value of the risk-free asset. First, agents’ willingness to pay for safe assets increases with tail risk. I will call this the “safety channel.” This is a standard precautionary savings effect, a well-known piece of canonical asset-pricing theory. Second, agents’ willingness to pay for assets that are good collateral increases with tail risk, in large part because the tail risk reduces investment and thus the supply
Gourio, François. "Comment." NBER Macroeconomics Annual 33 (2019): 284–296. https://doi.org/10.1086/700904
The authors opened the discussion by addressing two points raised by Gregory Mankiw in his discussion. First, they argued that their model allows for sizeable long-run effects. They explained that an adjustment of border taxes leads to an appreciation of the US dollar in their model. Because the United States is a net debtor in its own currency, this appreciation leads to a negative valuation effect, corresponding to a transfer from the United States to the rest of the world of roughly 16% of US gross domestic product. This valuation effect possibly outweighs the short-term benefits of border adjustment. Second, the authors challenged the idea put forth by Mankiw that there is a clear dichotomy between short-run and long-run effects when it comes to trade policy. The authors insisted on the importance of political economy considerations in the short run, which may prevent the adoption of desired tax changes and the realization of long-run benefits. The authors next replied to questions from both discussants regarding the role of dollar pricing and the importance of Calvo pricing in the context of a large tax policy change. They pointed out that dollar pricing plays a key role in the failure of the Lerner symmetry in their model. The adoption of a border tax on imports has two effects on US consumer prices. The direct effect raises these prices, while the indirect effect reduces them by leading to an appreciation of the US dollar. In the presence of dollar pricing, the direct pass-through of a tax is full, whereas the short-run pass-through of the exchange rate to consumer prices in the United States is low. This asymmetry is responsible for the failure of the Lerner symmetry. The authors noted that dollar pricing is consistent with recent evidence: the dollar appreciated and then depreciated by 10%–12% while border prices remained roughly unchanged. They provide a rationale for dollar pricing as an equilibrium phenomenon.
International firms decide to set prices in US dollars because US inputs
"Discussion." NBER Macroeconomics Annual 33 (2019): 472–475. https://doi.org/10.1086/700915
This fine paper by Charles, Hurst, and Schwartz investigates the link between the post-2000 decline in manufacturing employment and the decline of the employment rate, and also analyzes the supporting roles played by transfer payments, geographic mobility, and opioid use. The paper is a particularly useful synthesis because it brings together threads from a number of other papers, including the authors’ own work, and it explores some competing explanations in a standardized empirical framework. The paper is a wonderful read because it tells a clear story based on an impressive marshalling of evidence. The paper contains numerous findings that shed light on a variety of changes that occurred around the same time. The span of the analysis from aggregate to commuting zone level is particularly enlightening. Here are highlights of just a few of the many interesting results:
Ramey, V. "Comment." NBER Macroeconomics Annual 33 (2019): 380–388. https://doi.org/10.1086/700907
It is common to analyze the effects of alternative possible monetary policy commitments under the assumption of optimization under rational (or fully model-consistent) expectations. This implicitly assumes unrealistic cognitive abilities on the part of economic decision makers. The relevant question, however, is not whether the assumption can be literally correct, but how much it would matter to model decision making in a more realistic way. A model is proposed, based on the architecture of artificial intelligence programs for problems such as chess or go, in which decision makers look ahead only a finite distance into the future and use a value function learned from experience to evaluate situations that may be reached after a finite sequence of actions by themselves and others. Conditions are discussed under which the predictions of a model with finite-horizon forward planning are similar to those of a rational expectations equilibrium, and under which they are instead quite different. The model is used to reexamine the consequences that should be expected from a central bank commitment to maintain a fixed nominal interest rate for a substantial period of time. “Neo-Fisherian” predictions are shown to depend on using rational expectations equilibrium analysis under circumstances in which it should be expected to be unreliable.
Woodford, Michael. "Monetary Policy Analysis When Planning Horizons Are Finite." NBER Macroeconomics Annual 33 (2019). https://doi.org/10.1086/700892
The tax reform process that culminated in the December 2017 enactment of the Tax Cuts and Jobs Act followed an unusual pattern regarding business tax reform. In particular, the original proposal, the “Blueprint” put forward by Republicans in the House of Representatives in June 2016 (Tax Reform Task Force 2016), called for the adoption of an approach that, at the time, was unfamiliar to many in the economics profession: a destination-based cash-flow tax (DBCFT). The DBCFT would have represented a sharp break from current policy, and the general lack of familiarity with it led many business leaders, policy makers, and economists to misinterpret its aims, characteristics, and properties. The paper by Omar Barbiero, Emmanuel Farhi, Gita Gopinath, and Oleg Itskhoki represents part of a small and growing literature seeking to analyze the DBCFT, or at least one of its key components: a border tax adjustment on imports and exports. In reading the paper, one is reminded of the advantages of following the more standard tax reform approach of analyzing new proposals before voting on them. This is not to say that I agree with all the paper’s modeling assumptions or conclusions, because I do not. But without such concrete analysis, it is difficult to identify key points of professional disagreement and, more importantly, to try to resolve them. The paper analyzes the short-run macroeconomic effects of adopting border tax adjustments on their own, although this is not what was being proposed. However, this is equivalent in the model to analyzing adoption of a full DBCFT, that is, a “source-based” cash-flow tax—a tax on domestic producers’ cash flows—plus border adjustment that removes tax on exports and imposes tax on imports. This equivalence follows because in the model, a cash-flow tax without border adjustment is a nondistortionary tax on pure profits—a lump-sum tax that would then be rebated via an
{"title":"Comment","authors":"A. Auerbach","doi":"10.1086/700908","DOIUrl":"https://doi.org/10.1086/700908","url":null,"abstract":"The tax reform process that culminated in the December 2017 enactment of the Tax Cuts and Jobs Act followed an unusual pattern regarding business tax reform. In particular, the original proposal, the “Blueprint” put forward by Republicans in the House of Representatives in June 2016 (Tax Reform Task Force 2016) called for the adoption of an approach that, at the time, was unfamiliar to many in the economics profession, a destination-based cash-flow tax (DBCFT). TheDBCFTwould have represented a sharp break from current policy, and the general lack of familiarity with it led many business leaders, policy makers, and economists to misinterpret its aims, characteristics, and properties. The paper by Omar Barbiero, Emmanuel Farhi, Gita Gopinath, and Oleg Itskhoki represents part of a small and growing literature seeking to analyze the DBCFT, or at least one of its key components: a border tax adjustment on imports and exports. In reading the paper, one is reminded of the advantages of following the more standard tax reform approach of analyzing new proposals before voting on them. This is not to say that I agree with all the paper’s modeling assumptions or conclusions, because I do not. But without such concrete analysis, it is difficult to identify key points of professional disagreement and, more importantly, to try to resolve them. The paper analyzes the short-run macroeconomic effects of adopting border tax adjustments on their own, although this is not what was being proposed. However, this is equivalent in the model to analyzing adoption of a full DBCFT, that is, a “source-based” cash-flow tax—a tax on domestic producers’ cash flows—plus border adjustment that removes tax on exports and imposes tax on imports. 
This equivalence follows because in themodel, a cash-flow taxwithout border adjustment is anondistortionary tax on pure profits—a lump-sum tax that would then be rebated via an","PeriodicalId":51680,"journal":{"name":"Nber Macroeconomics Annual","volume":"33 1","pages":"458 - 467"},"PeriodicalIF":7.7,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1086/700908","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42016765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}