In America, charitable giving is a thriving multibillion-dollar enterprise, as illustrated in the accompanying chart. Most charitable contributions arise from the generosity of individual donors. In fact, the Giving USA Foundation estimates that individuals gave almost $188 billion to charities in 2004 (a 175 percent inflation-adjusted increase from 1964).1 Generosity is particularly evident after unexpected disasters. In September 2005, for example, the American Red Cross received about $807 million in gifts and pledges earmarked for Hurricane Katrina relief efforts—an increase of about $250 million over total Red Cross contributions during the 2003-04 fiscal year (www.redcross.org/news/ds/hurricanes/katrina_facts.html). Economists analyze the motivations behind individual giving to better understand how contributions are influenced by demographic characteristics, tax policies, and fundraising behavior. A fundamental issue in this analysis is the nature of the benefits that individuals receive when they give to charity. A recent study summarizes two alternative views.2 First, donors may focus on the well-being of charity recipients. In this case, the benefits from giving have a public nature. If the well-being of recipients is tied to the charity’s activities (the provision of disaster relief or the funding of cancer research), donors derive benefits from giving in the same way they derive benefits from public goods such as national defense. That is, a donor cannot exclude anyone else from enjoying the charity’s accomplishments; also, a donor’s enjoyment is not affected by the enjoyment or benefit derived by others. Second, donors may focus on the enjoyment they receive from the act of giving itself—that is, the internal feeling they derive from “doing their share” or “giving back to society.” Donors may also care about public recognition or about signaling wealth status. In these cases, the benefits from giving have a private nature. Individuals derive satisfaction from giving in the same way they derive benefits from consuming other private goods or services, such as clothing or food. These two motives for giving (public and private) have entirely different implications for giving behavior. Donors who experience public benefits look at the overall amount of charitable contributions, whereas donors who experience private benefits look only at their own contributions. If a donor’s benefits from giving are public, then a total contribution of $100 gives him the same sense that good is being done whether his own contribution is $10 or $20. On the other hand, if a donor’s benefits from giving are private, a contribution of $20 generates more satisfaction than a contribution of $10, even if the total contribution is $100 either way. Economists predict that, according to the public benefits view, government subsidies to charities that are funded with increased taxes on donors will have no effect on the total contribution. This is because donors do not care whether they give directly or indirectly through taxes, and so will reduce their private gifts by the amount of the tax. In other words, government grants to charities will completely crowd out private donations. If donors’ benefits are instead purely private, no crowding out should occur, because donors care not about the total amount contributed but only about their own gifts. Empirical studies have found only limited crowding out, which suggests that most donors do not care solely about a charity’s accomplishments regardless of the funding source; rather, private motives, such as the joy of giving or recognition, play an important role in their giving decisions.
Source: “The economics of giving,” Rubén Hernández-Murillo, National Economic Trends, doi:10.20955/es.2005.24.
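The crowd-out logic reduces to a small optimization problem. The sketch below is an illustration, not the article’s model: it assumes log utility, a donor with income m, and a lump-sum tax t that the government grants to the charity, and compares the donor’s chosen gift under the two motives.

```python
# A minimal numerical sketch (an illustration, not the article's model):
# a donor with income m pays a lump-sum tax t that the government grants
# to the charity, then chooses a gift g to maximize log utility over
# private consumption and charity.
# "public" motive: the donor values total charity G = g + t.
# "private" (warm-glow) motive: the donor values only his own gift g.

import math

def best_gift(m, t, motive):
    """Grid-search the gift g in (0, m - t) that maximizes utility."""
    best_g, best_u = 0.0, -math.inf
    steps = 100_000
    for i in range(1, steps):
        g = (m - t) * i / steps
        consumption = m - t - g              # always > 0 on this grid
        charity = g + t if motive == "public" else g
        u = math.log(consumption) + math.log(charity)
        if u > best_u:
            best_g, best_u = g, u
    return best_g

m = 100.0
for motive in ("public", "private"):
    for t in (0.0, 10.0, 20.0):
        g = best_gift(m, t, motive)
        print(f"{motive:7s}  tax/grant={t:5.1f}  own gift={g:6.2f}  total={g + t:6.2f}")

# Public motive: the gift falls one-for-one with the grant and total giving
# stays at 50.00 (full crowding out). Private motive: total giving rises
# with the grant, since the donor cares only about his own gift.
```

Under these assumptions the public-motive donor cuts his gift dollar for dollar with the grant, while the private-motive donor does not; the limited crowding out found empirically is what points toward private motives.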
Views expressed do not necessarily reflect official positions of the Federal Reserve System. Statements issued by the Federal Open Market Committee (FOMC) at the conclusion of each meeting suggest that inflation expectations matter a great deal to monetary policymakers. Therefore, if expected inflation moves above or below a level that is viewed as optimal, the FOMC will presumably take action to counter those expectations. Although there are several measures of inflation expectations, a relatively new and potentially useful measure is one based on futures contracts written on the consumer price index (CPI); these have been traded on the Chicago Mercantile Exchange since February 9, 2004. The contracts are based on the CPI for all urban consumers, all items (not seasonally adjusted). Similar to federal funds futures contracts, they have a pricing structure of 100 minus the contracted inflation rate—the three-month change in the CPI ending in the month prior to the expiration of the contract. According to the Chicago Mercantile Exchange, CPI futures can also be used as a derivative product to hedge inflation risk on other types of financial instruments, particularly Treasury inflation-protected securities (TIPS).1 The prices of CPI futures capture market participants’ expectations of future inflation and the associated risk premium. For simplicity, we assume that the latter is negligible. Therefore, 100 minus the contract’s price is approximately equal to the (annualized) expected inflation rate over the contracted period. If investors believe that the realized inflation rate will be lower than implied by the futures price, they will buy CPI futures and thus drive up the price until a new consensus is reached. In the accompanying chart, the solid line plots the average inflation rate implied by the CPI futures. The average inflation rate, which partially smooths through the seasonal pattern of the future three-month inflation rates (recall that the contracts are written on non-seasonally-adjusted data), is simply the average of the outstanding contracts at any point in time. For example, the December 2005 inflation rate is the average of the yields on the March, June, September, and December 2005 contracts; the point plotted for March 2006 is the average of the March 2005 through March 2006 contracts, and so forth. Since CPI futures contracts are written on the same inflation series used for the TIPS, the average inflation rates are analogous to the rates of inflation compensation derived from yield spreads between nominal and inflation-indexed Treasury securities, with some minor adjustments. One potential use of the CPI futures contracts, therefore, is to gauge the future inflation rate relative to the current rate. In 2004, the CPI rose 3.3 percent, the biggest increase in four years. Although a large part of that increase was attributable to the jump in energy prices, it still raises concern that inflation might be headed higher.
{"title":"Reading inflation expectations from CPI futures","authors":"Hui Guo, Kevin L. Kliesen","doi":"10.20955/ES.2005.5","DOIUrl":"https://doi.org/10.20955/ES.2005.5","url":null,"abstract":"Views expressed do not necessarily reflect official positions of the Federal Reserve System. Statements issued by the Federal Open Market Committee (FOMC) at the conclusion of each meeting suggest that inflation expectations matter a great deal to monetary policymakers. Therefore, if expected inflation moves above or below a level that is viewed as optimal, then the FOMC will presumably take action to counter those expectations. Although there are several measures of inflation expectations, a relatively new and potentially useful measure is one based on futures contracts written on the consumer price index (CPI); these have been traded on the Chicago Mercantile Exchange since February 9, 2004. The contracts are based on the CPI for all urban consumers, all items (not seasonally adjusted). Similar to federal funds futures contracts, they have a pricing structure of 100 minus the contracted inflation rate—the three-month change in the CPI ending in the month prior to the expiration of the contract. According to the Chicago Mercantile Exchange, CPI futures can also be used as a derivative product to hedge inflation risk on other types of financial instruments, particularly Treasury inflation-protected securities (TIPS).1 The prices of CPI futures capture market participants’ expectation of future inflation and the associated risk premium. For simplicity, we assume that the latter is negligible. Therefore, 100 minus the contract’s price is approximately equal to the (annualized) expected inflation rate over the contracted period. If investors believe that the realized inflation rate will be lower than implied by the futures price, they will buy CPI futures and thus drive up the price until a new consensus is reached. In the accompanying chart, the solid line plots the average inflation rate implied by the CPI futures. The average inflation rate, which partially smoothes through the seasonal pattern of the future three-month inflation rates (recall that the contracts are written on non-seasonally adjusted data), is simply the average of the outstanding contracts at any point in time. For example, the December 2005 inflation rate is the average of the yields on the March, June, September, and December 2005 contracts; the point plotted for March 2006 is the average of the March 2005 through March 2006 contracts, and so forth. Since CPI futures contracts are written on the same inflation series used for the TIPS, the average inflation rates are analogous to the rates of inflation compensation derived from yield spreads between nominal and inflation-indexed Treasury securities, with some minor adjustments. One potential use of the CPI futures contracts, therefore, is to gauge the future inflation rate relative to the current rate. In 2004, the CPI rose 3.3 percent, the biggest increase in four years. 
Although a large part of the CPI increase was attributable to the jump in energy prices, it still raises the concern of whether inflation might be headed hig","PeriodicalId":305484,"journal":{"name":"National Economic Trends","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115573505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
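The pricing arithmetic is simple enough to show directly. A minimal sketch with hypothetical quotes (illustrative numbers, not CME data):

```python
# Hypothetical CME-style quotes (illustrative numbers, not market data).
# Each contract settles on 100 minus the annualized three-month CPI
# inflation rate, so implied inflation is just 100 minus the price.
quotes = {
    "Mar 2005": 97.4,
    "Jun 2005": 97.2,
    "Sep 2005": 97.0,
    "Dec 2005": 96.9,
}

implied = {expiry: 100.0 - price for expiry, price in quotes.items()}
for expiry, rate in implied.items():
    print(f"{expiry}: implied annualized inflation {rate:.1f}%")

# The chart's "average inflation rate" for December 2005 is the mean of the
# four outstanding 2005 contracts, which smooths the seasonality in the
# non-seasonally-adjusted three-month rates.
average = sum(implied.values()) / len(implied)
print(f"average implied inflation, 2005 contracts: {average:.2f}%")
```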
Income inequality has been and continues to be a major public policy topic. With respect to U.S. states, the common wisdom is that poorer states tend to grow faster than richer states and, as a result, per capita incomes of poor states and rich states are converging and will continue to converge in the future.1 We argue that such an assessment is quite possibly misleading. We analyze how the distribution of per capita personal income (PCPI), in percentage differences from the U.S. average, evolves over time for the period 1969-2005. We summarize the dynamics with the corresponding long-run distribution. A long-run distribution with a single peak is consistent with convergence. A long-run distribution with multiple peaks indicates that, in the long run, there will be groups of states that tend to cluster at different levels of income. The gray line in the chart is the long-run distribution of income across states. The lowest peak corresponds to a PCPI 19.2 percent below the U.S. average. The highest peak corresponds to a PCPI 3.7 percent below the cross-sectional average. In constructing this distribution, the income of any state, regardless of population, is treated the same as that of any other state. Things change if the PCPI dynamics calculation is weighted by the number of people within each state. The evolution of California’s PCPI will have a larger impact on the shape of the long-run distribution than Iowa’s PCPI dynamics because of California’s relatively larger population. The population-weighted distribution can be interpreted as the long-run distribution across people in the United States. The long-run distribution of income across people (the blue line in the chart) is still twin-peaked, but the low-income peak is much less pronounced. The population-weighted average PCPI is closer to the U.S. average and its standard deviation is 11 percent lower than that of the unweighted distribution. Convergence across people is driven by the fact that states experiencing a decline in their relative income are also losing population share. For example, Ohio in 1969 had the 15th-highest income, at 8 percent above the national average. By 2005 Ohio had lost ground: It occupied 30th place, with a PCPI 4.5 percent below the national average. At the same time, Ohio’s population declined from 5.35 percent of the total U.S. population in 1969 to below 4 percent in 2005. Conversely, states growing rapidly enough to move up in the overall ranking of states’ income were gaining population, contributing to convergence. Colorado was the 22nd state in terms of PCPI in 1969 and climbed to 9th place by 2005. During the same period, Colorado’s population share increased from 1.1 to 1.6 percent.2 Contrary to previous findings of convergence across states, our finding of a twin-peaked long-run distribution indicates that state incomes will cluster at different levels rather than converge. However, weighting each state by its population produces a nearly single-peaked distribution, consistent with convergence across people rather than across states.
{"title":"Convergence across states and people","authors":"Riccardo DiCecio, Charles S. Gascon","doi":"10.20955/ES.2008.2","DOIUrl":"https://doi.org/10.20955/ES.2008.2","url":null,"abstract":"Income inequality has been and continues to be a major public policy topic. With respect to U.S. states, the common wisdom is that poorer states tend to grow faster than richer states and, as a result, per capita incomes of poor states and rich states are converging and will continue to converge in the future.1 We argue that such an assessment is quite possibly misleading. We analyze how the distribution of per capita personal income (PCPI), in percentage differences from the U.S. average, evolves over time for the period 1969-2005. We summarize the dynamics with the corresponding long-run distribution. A long-run distribution with a single-peak is consistent with convergence. A long-run distribution with multiple peaks indicates that, in the long-run, there will be groups of states that tend to cluster at different levels of income. The gray line in the chart is the long-run distribution of income across states. The lowest peak corresponds to a PCPI 19.2 percent below the U.S. average. The highest peak corresponds to a PCPI 3.7 percent below the cross-sectional average. In constructing this distribution, the income of any state, regardless of population, is treated the same as any other state. Things change if the PCPI dynamics calculation is weighted by the number of people within each state. The evolution of California’s PCPI will have a larger impact on the shape of the long-run distribution than Iowa’s PCPI dynamics because of California’s relatively larger population. The population-weighted distribution can be interpreted as the long-run distribution across people in the United States. The long-run distribution of income across people (the blue line in the chart) is still twin-peaked, but the low-income peak is much less pronounced. The population-weighted average PCPI is closer to the U.S. average and its standard deviation is 11 percent lower than that of the unweighted distribution. Convergence across people is driven by the fact that states experiencing a decline in their relative income are also losing population share. For example, Ohio in 1969 had the 15th highest income at 8 percent above the national average. By 2005 Ohio lost ground: It occupied the 30th place with a PCPI of 4.5 percent below the national average. At the same time, Ohio’s population declined from 5.35 percent of the total U.S population in 1969 to below 4 percent in 2005. Conversely, states growing rapidly enough to move up in the overall ranking of states’ income were gaining population, contributing to convergence. Colorado was the 22nd state in terms of PCPI in 1969 and climbed to the 9th place by 2005. During the same period, Colorado’s population share increased from 1.1 to 1.6 percent.2 Contrary to previous findings of convergence across states, our finding of a twin-peaked long-run distribution indicates that state incomes will cluster at different levels rather than converge. 
However, weighting each state by its population produces a nearly single-peaked ","PeriodicalId":305484,"journal":{"name":"National Economic Trends","volume":"299 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131475522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
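Long-run distributions of this kind are typically computed as the ergodic distribution of an estimated income-transition process. The sketch below illustrates the mechanics on made-up data; the bin grid, simulated incomes, and population weights are assumptions of the illustration, not the authors’ data or exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_years = 50, 37                         # 1969-2005
# Simulated relative incomes: % deviation of state PCPI from the U.S. average.
pcpi = rng.normal(0.0, 15.0, (n_states, n_years))
pop = rng.uniform(0.5, 12.0, n_states)             # stand-in population weights

edges = np.linspace(-40, 40, 9)                    # 8 income bins
idx = np.digitize(pcpi, edges[1:-1])               # bin index 0..7 per state-year

def long_run_distribution(weights):
    """Ergodic distribution of a Markov chain fit to year-to-year bin moves."""
    trans = np.zeros((8, 8))
    for s in range(n_states):
        for t in range(n_years - 1):
            trans[idx[s, t], idx[s, t + 1]] += weights[s]
    trans = (trans + 1e-9) / (trans + 1e-9).sum(axis=1, keepdims=True)
    dist = np.full(8, 1.0 / 8)
    for _ in range(1000):                          # iterate to the fixed point
        dist = dist @ trans
    return dist

print("across states:", np.round(long_run_distribution(np.ones(n_states)), 3))
print("across people:", np.round(long_run_distribution(pop), 3))
# A single-peaked ergodic distribution signals convergence; multiple peaks
# signal clustering at different income levels.
```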
Recently the Federal Open Market Committee (FOMC) mentioned deflation as a possible risk for the U.S. economy. In the statement released after the May policy meeting, the Committee stated that “the probability of an unwelcome substantial fall in inflation, though minor, exceeds that of a pickup in inflation from its already low level.” Later, Chairman Greenspan spoke about more than just a mild deflation. In comments to the International Monetary Conference in early June, Chairman Greenspan referred to the risk of “corrosive” deflation “that essentially feeds on itself, creates falling asset prices, which in turn brings down levels of economic activity...” What is the main evidence on deflation? Are there corrosive and benign forms? Does economic performance always suffer during periods of sustained deflation? The accompanying table provides some evidence on deflation and lists periods in which the United States or Japan have experienced three or more years of a declining price level. There are three main episodes: the late 19th century in the United States, the Great Depression in the United States, and Japan since 1999. For each deflationary episode, the table lists the average real GDP growth rate and the average inflation rate based on two popular inflation measures, the GDP deflator and the consumer price index. The table also lists a benchmark average growth rate of GDP for years surrounding the deflation experience, so we can consider whether or not deflation is associated with lower-than-average growth for the corresponding era. Generally speaking, the United States experienced rapid growth during the late 19th century, with GDP growth averaging about 4.0 percent for the period 1876-1900, despite an average deflation of about 1.0 percent. By itself, this suggests that a mild deflation is not necessarily associated with poor economic performance. However, averaging over a long period of time could mask severe distress that may accompany deflation. To address this issue, we examine the subperiods 1876-1879 and 1883-1885. During the former, GDP growth actually was greater than the benchmark value, despite a rather hefty 3.0 to 4.0 percent annual average decline in the price level. Evidently, this deflationary episode was rather benign. In contrast, the deflationary episode of 1883-1885 was not so benign. GDP growth fell well below the benchmark average for that time. The same is also true at the onset of the Great Depression (1930-1933) and the recent episode in Japan (1999-2002). During the Great Depression, GDP growth averaged a whopping –8.36 percent and inflation was around –6.5 percent. While not as severe, Japan also experienced slow growth during its deflationary episode. As the table shows, U.S. experience with sustained deflation has been limited since the founding of the Federal Reserve in 1914. The results from 1930-1933 are uniformly bad, but deflation may simply have been a by-product of the economic collapse at that time.
{"title":"Deflation, corrosive and otherwise","authors":"James Bullard, Charles M. Hokayem","doi":"10.20955/ES.2003.17","DOIUrl":"https://doi.org/10.20955/ES.2003.17","url":null,"abstract":"R ecently the Federal Open Market Committee (FOMC) mentioned deflation as a possible risk for the U.S. economy. In the statement released after the May policy meeting, the Committee stated that “the probability of an unwelcome substantial fall in inflation, though minor, exceeds that of a pickup in inflation from its already low level.” Later, Chairman Greenspan spoke about more than just a mild deflation. In comments to the International Monetary Conference in early June, Chairman Greenspan referred to the risk of “corrosive” deflation “that essentially feeds on itself, creates falling asset prices, which in turn brings down levels of economic activity...” What is the main evidence on deflation? Are there corrosive and benign forms? Does economic perfor mance always suffer during periods of sustained deflation? The accompanying table provides some evidence on deflation and lists periods in which the United States or Japan have experienced three or more years of a declining price level. There are three main episodes: the late 19th century in the United States, the Great Depression in the United States, and Japan since 1999. For each deflationary episode, the table lists the average real GDP growth rate and the average inflation rate based on two popular inflation measures, the GDP deflator and the consumer price index. The table also lists a benchmark average growth rate of GDP for years surrounding the deflation experience, so we can consider whether deflation is associated with lower-than-average growth for the corresponding era or not. Generally speaking, the United States experienced rapid growth during the late 19th century, with GDP growth averaging about 4.0 percent for the period 1876-1900, despite an average deflation of about 1.0 percent. By itself, this suggests that a mild deflation is not necessarily asso ciated with poor economic performance. However, averaging over a long period of time could mask severe distress that may accompany deflation. To address this issue, we examine the subperiods 1876-1879 and 18831885. During the former, GDP growth actually was greater than the bench mark value, despite a rather hefty 3.0 to 4.0 percent annual average decline in the price level. Evidently, this deflationary episode was rather benign. In contrast, the deflationary episode of 1883-1885 was not so benign. GDP growth fell well below the benchmark average for that time. The same is also true at the onset of the Great Depression (1930-1933) and the recent episode in Japan (1999-2002). During the Great Depression, GDP growth averaged a whopping –8.36 percent and inflation was around –6.5 percent. While not as severe, Japan also experienced slow growth during its deflationary episode. As the table shows, U.S. experience with sustained deflation has been limited since the founding of the Federal Reserve in 1914. The results from 1930-1933 are uniformly bad, but deflation may simply have been a by-product of the economic collapse at that time. 
Ja","PeriodicalId":305484,"journal":{"name":"National Economic Trends","volume":"166 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131204040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Public officials often claim credit for creating jobs through the programs and policies they enact. It is not uncommon to hear, for example, a public official pledging to increase the number of jobs in a particular locality or nationally. Public officials can create jobs in two ways: The first is directly, by creating government jobs. The second is indirectly, by (i) enacting policies that create an economic environment that affects long-run private sector job growth or (ii) using countercyclical fiscal policy to affect short-run private sector job growth.1 How effective have public officials been at creating jobs? The accompanying chart shows the natural logarithm of payroll employment (measured by annual nonfarm payroll employment) from 1946 to 2003, along with the shares of total government, federal government, and state and local government employment. It gives no indication that public officials have created jobs directly. After increasing from 1946 to 1975, total government employment as a percent of payroll employment has trended down. Evidence that public officials create government jobs is even weaker if one considers federal employment. Federal employment as a percent of payroll employment has declined nearly monotonically over the 1946 to 2003 period, from 5.6 percent in 1946 to 2.1 percent in 2003. Have public officials created jobs indirectly? Again, the chart raises questions about claims they might make. First, consider cyclical variation in payroll employment, as measured relative to the trend line. With payroll employment expressed in natural logarithms, a constant growth rate is represented by a linear trend. The trend line indicates that payroll employment has grown at an average rate of about 2.1 percent during the post-World War II period. The shaded areas represent years when there was an official recession during at least part of the year. This measure of cyclical variation indicates that the lengths of significant deviations of payroll employment from a 2.1 percent trend line roughly match the lengths of the business cycles, with the exception of the 1960s during the military buildup for the war in Vietnam (armed forces on active duty are excluded from payroll employment). Thus, when it comes to cyclical variation in payroll employment, it seems that the business cycle largely determines the ebb and flow, despite any claims by lawmakers and policymakers that they act to stem the tide. Second, in terms of long-run job growth, have policies enacted by public officials affected the average growth rate of payroll employment? Again, the chart suggests that the answer is no. Importantly, there is no indication of a noteworthy break in payroll employment from the 2.1 percent growth path, which is what one would expect if public officials enacted policies that changed the average rate of job growth. The apparent lack of a break from the trend line is especially interesting given the array of national economic policies—changes
{"title":"Public officials and job creation","authors":"T. Garrett, D. Thornton","doi":"10.20955/ES.2004.22","DOIUrl":"https://doi.org/10.20955/ES.2004.22","url":null,"abstract":"Public officials often claim credit for creating jobs through the programs and policies they enact. It is not uncommon to hear, for example, a public official pledging to increase the number of jobs in a particular locality or nationally. Public officials can create jobs in two ways: The first is directly, by creating government jobs. The second is indirectly, by (i) enacting policies that create an economic environment that affects long-run private sector job growth or (ii) using countercyclical fiscal policy to affect short-run private sector job growth.1 How effective have public officials been at creating jobs? The accompanying chart shows the natural logarithm of payroll employment (measured by annual nonfarm payroll employment) from 1946 to 2003, along with the shares of total government, federal government, and state and local government employment. It gives no indication that public officials have created jobs directly. After increasing from 1946 to 1975, total government employment as a percent of payroll employment has trended down. Evidence that public officials create government jobs is even weaker if one considers federal employment. Federal employment as a percent of payroll employment has declined nearly mono tonically over the 1946 to 2003 period, from 5.6 percent in 1946 to 2.1 percent in 2003. Have public officials created jobs indirectly? Again, the chart raises questions about claims they might make. First, consider cyclical variation in payroll employment, as measured relative to a the trend line. With payroll employment expressed in natural logarithms, a constant growth rate is represented by a linear trend. The trend line indicates that payroll employment has grown at an average rate of about 2.1 percent during the post-World War II period. The shaded areas represent years when there was an official recession during at least part of the year. This measure of cyclical variation indicates that the lengths of significant deviations of payroll employment from a 2.1 percent trend line roughly match the lengths of the business cycles, with the exception of the 1960s during the military buildup for the war in Vietnam (armed forces on active duty are excluded from payroll employment). Thus, when it comes to cyclical variation in payroll employment, it seems that the business cycle largely determines the ebb and flow, despite any claims by lawmakers and policymakers that they act to stem the tide. Second, in terms of long-run jobs growth, have policies enacted by public officials affected the average growth rate of payroll employment? Again, the chart suggests that the answer is no. Importantly, there is no indication of a noteworthy break in payroll employment from the 2.1 percent growth path, which is what one would expect if public officials enacted policies that changed the average rate of job growth. 
The apparent lack of a break from the trend line is especially interesting given the array of national economic policies—changes ","PeriodicalId":305484,"journal":{"name":"National Economic Trends","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132864393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
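The chart’s trend device is easy to reproduce: in natural logs, constant growth is a straight line, so the average growth rate is the slope of a least-squares fit to log employment. A sketch on synthetic data (the 2.1 percent rate is built in by construction, purely to mirror the article’s figure):

```python
import numpy as np

# Synthetic payroll employment with 2.1% trend growth plus noise, so the
# fitted slope recovers the article's figure by construction.
rng = np.random.default_rng(1)
years = np.arange(1946, 2004)
log_employment = np.log(40e6) + 0.021 * (years - 1946) + rng.normal(0, 0.02, years.size)

slope, intercept = np.polyfit(years - 1946, log_employment, 1)
print(f"estimated trend growth: {slope:.3%} per year")        # about 2.1%

# Deviations from the fitted line are the cyclical variation compared
# with recession dates in the chart.
cycle = log_employment - (intercept + slope * (years - 1946))
print(f"largest deviation from trend: {cycle.max():+.3f} log points")
```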
Over long periods of time, increases in “real” wages—that is, wages adjusted for changes in consumer prices—reflect increases in labor productivity. Economists now widely agree that labor productivity growth increased in the mid-1990s and remains at an elevated pace—at least relative to its anemic pace between 1973 and the mid-1990s. Numerous studies have traced the cause of the productivity acceleration to technological innovations in the production of semiconductors that sharply reduced the prices of such components and of the products that contain them (as well as expanding the capabilities of such products). The impact of more rapid productivity growth on wages continues to be a topic of widespread economic research. Numerous news articles have discussed the apparent failure of wages to increase in line with productivity. Less appreciated, perhaps, is that the productivity acceleration has been accompanied by important changes in the way businesses compensate their employees. Of particular importance is the increased use of “variable pay,” that is, compensation tied either to the performance of individual employees or to the business’s overall performance, including end-of-year bonuses, “cash awards,” profit sharing, and stock options. The chart compares labor productivity in the nonfarm business sector with two measures of real labor compensation: average hourly earnings for non-supervisory and production workers (AHE) in the upper panel, and total compensation per hour in the lower panel. AHE measures the typical, scheduled hourly wage plus legally required benefits but excludes variable pay (overtime, bonuses, shift premiums) and employer benefits. Total compensation, in contrast, includes variable pay. Increases in these compensation series track productivity quite closely through 1999. Beginning in 2000, however, AHE falls increasingly below productivity and increases little after 2003. Total compensation remains close until 2003 but does not follow 2003’s uptick in productivity growth (behavior that remains a topic for future research). Economists have long noted that focusing on AHE rather than total compensation yields an inaccurate picture of labor compensation, due to the omission from AHE of employer-provided benefits. The trend toward increased use of variable pay provides an additional reason for focusing on broader compensation measures. But why has more of labor compensation become variable pay? And why has this trend widened since 2000? One reason, perhaps, is that the character of the productivity acceleration changed circa 2000. Prior to that date, studies have suggested that the more important effect was an increasing ratio of capital to labor (capital deepening) as businesses substituted relatively less expensive information technology and communication equipment for labor. Since 2000, some studies suggest that the more important factor has been a re-engineering of business practices, which has increased the “skill bias” in
{"title":"How well do wages follow productivity growth","authors":"R. Anderson","doi":"10.20955/ES.2007.7","DOIUrl":"https://doi.org/10.20955/ES.2007.7","url":null,"abstract":"Over long periods of time, increases in “real” wages—that is, wages adjusted for changes in consumer prices—reflect increases in labor productivity. Economists now widely agree that labor productivity growth increased in the mid-1990s and remains at an elevated pace—at least relative to its anemic pace between 1973 and the mid-1990s. Numerous studies have traced the cause of the productivity acceleration to technological innovations in the production of semiconductors that sharply reduced the prices of such components and of the products that contain them (as well as expanding the capabilities of such products). The impact of more rapid productivity growth on wages continues to be a topic of widespread economic research. Numerous news articles have discussed the apparent failure of wages to increase in line with productivity. Less appreciated, perhaps, is that the productivity acceleration has been accompanied by important changes in the way businesses compensate their employees. Of particular importance is the increased use of “variable pay,” that is, compensation tied either to the performance of individual employees or to the business’s overall performance, including end-ofyear bonuses, “cash awards,” profit sharing, and stock options. The chart compares labor productivity in the nonfarm business sector to two measures of real labor compensation: average hourly earnings for non-supervisory and production workers (AHE) in the upper panel, and total compensation per hour in the lower panel. AHE measures the typical, scheduled hourly wage plus legally required benefits but excludes variable pay—overtime, bonuses, shift premiums, and employer benefits. Total compensation, in contrast, includes variable pay. Increases in these compensation series track productivity quite closely through 1999. Beginning in 2000, however, AHE falls increasingly below productivity and increases little after 2003. Total compensation remains close until 2003, but does not follow 2003’s uptick in productivity growth (behavior which remains a topic for future research). Economists long have noted that focusing on AHE rather than total compensation yields an inaccurate picture of labor compensation due to the omission from AHE of employer-provided benefits. The trend toward increased use of variable pay provides an additional reason for focusing on broader compensation measures. But, why has more of labor compensation become variable pay? And why has this trend widened since 2000? One reason, perhaps, is that the character of the productivity acceleration changed circa 2000. Prior to that date, studies have suggested that the more important effect was an increasing ratio of capital to labor (capital deepening) as businesses substituted relatively less expensive information technology and communication equipment for labor. 
Since 2000, some studies suggest that the more important factor has been a re-engineering of business practices, which has increased the “skill bias” in ","PeriodicalId":305484,"journal":{"name":"National Economic Trends","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132801550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
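A chart like the one described is typically assembled by deflating nominal compensation by a price index and indexing both series to a common base year. A sketch with made-up growth rates (not the BLS series):

```python
import numpy as np

# Hypothetical growth rates (not the BLS series): productivity 2.5%/yr,
# nominal AHE 3.0%/yr, CPI 2.2%/yr, so real AHE grows only ~0.8%/yr.
years = np.arange(1995, 2007)
t = years - 1995
productivity = 1.025 ** t
nominal_ahe = 12.0 * 1.030 ** t          # dollars per hour
cpi = 1.022 ** t

real_ahe = nominal_ahe / cpi             # deflate by consumer prices
prod_index = 100 * productivity / productivity[0]
ahe_index = 100 * real_ahe / real_ahe[0]

for y, p, w in zip(years, prod_index, ahe_index):
    print(f"{y}: productivity {p:6.1f}   real AHE {w:6.1f}")
# The wage line drifts increasingly below productivity, the qualitative
# pattern the article describes for AHE after 2000.
```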
When making monetary policy decisions, members of the Federal Open Market Committee (FOMC) want to know as much as possible about current and future economic conditions. Unfortunately, most economic data are reported with a lag of one month or more. Nevertheless, some nonfinancial economic data are available every week, such as initial unemployment claims, auto and steel production, and electricity consumption. Private forecasters use such data to update their estimates of current-quarter gross domestic product (GDP) through their knowledge of how the industrial production number is constructed from weekly data on electricity, autos, and steel. The Fed supplements real-time analysis of the data construction process in two distinct ways. First, it runs a large macroeconometric model that uses past trends to project likely economic outcomes from alternative policy choices. Second, the regional Federal Reserve Banks conduct surveys to gather qualitative information about economic conditions in each district. Prior to every FOMC meeting, this anecdotal survey information is compiled into what is known as the Beige Book. The information provided by the Beige Book adds value in two ways. First, business cycle fluctuations are now thought to be more heterogeneous across regions and sectors than they used to be. Hence, one hears references to a “rolling recession” that bottoms out in different regions at different times. State and regional data, however, are much less complete than national data. In this void, the Beige Book can help identify the current regional focal point of such a rolling downturn. Second, for some one-time events, macroeconometric models are not reliable guides because history has not recorded a pattern for how the economy is likely to respond. Examples of such events include the surge in computer and software investment that preceded Y2K and the terrorist attacks on September 11, 2001. The best way to infer the likely consequences of such events is to talk with business leaders to discern their plans. The Beige Book provides a concise compendium of such a survey. Because it is based purely on anecdotal information, there are many reasons to question the usefulness of the Beige Book in assessing economic conditions. For example, it may partly reflect the biases of the economists who compile it, or policymakers may use only the anecdotes that are consistent with the views they already have. Recently, economists within the Federal Reserve System have tried to assess the Beige Book as an indicator of present and future economic activity. A study from the Minneapolis Fed found that the Beige Book has been an accurate predictor of real growth in the current quarter.1 The authors also found, however, that the Beige Book did not improve upon private sector forecasts of real growth. Their conclusion is that the Beige Book’s value lies not in forecasting economic activity but in providing insight and context not found in formal forecasting models.
{"title":"Color me beige","authors":"Howard J Wall","doi":"10.20955/ES.2002.5","DOIUrl":"https://doi.org/10.20955/ES.2002.5","url":null,"abstract":"When making monetary policy decisions, members of the Federal Open Market Committee (FOMC) want to know as much as possible about current and future economic conditions. Unfortunately, most economic data are reported with a lag of one month or more. Neverthe less, some nonfinancial economic data are available every week, such as initial unemployment claims, auto and steel production, and electricity consumption. Private forecasters use such data to update their estimates of currentquarter gross domestic product (GDP) through their knowledge of how the industrial production number is constructed from weekly data on electricity, autos, and steel. The Fed supplements real-time analysis of the data construction process in two distinct ways. First, it runs a large macro econometric model that uses past trends to project likely economic outcomes from alternative policy choices. Second, the regional Federal Reserve Banks conduct surveys to gather qualitative information about economic conditions in each district. Prior to every FOMC meeting, this anecdotal survey information is compiled into what is known as the Beige Book. The information provided by the Beige Book adds value in two ways. First, business cycle fluctuations are now thought to be more heterogeneous across regions and sectors than they used to be. Hence, one hears references to a “rolling recession” that bottoms out in different regions at different times. State and regional data, however, are much less complete than national data. In this void, the Beige Book can help identify the current regional focal point of such a rolling downturn. Second, for some one-time events, macroeconometric models are not reliable guides because history has not recorded a pattern for how the economy is likely to respond. Examples of such events include the surge in computer and software investment that preceded Y2K and the terrorist attacks on September 11, 2001. The best way to infer the likely consequences of such events is to talk with business leaders to discern their plans. The Beige Book provides a concise compendium of such a survey. Because it is based purely on anecdotal information, there are many reasons to question the usefulness of the Beige Book in assessing economic conditions. For example, it may partly reflect the biases of the economists who compile it, or policymakers may use only the anecdotes that are consistent with the views they already have. Recently, economists within the Federal Reserve System have tried to assess the Beige Book as an indicator of present and future economic activity. A study from the Minneapolis Fed found that the Beige Book has been an accurate predictor of real growth in the current quarter.1 They also found, however, that the Beige Book did not improve upon private sector forecasts of real growth. 
Their conclusion is that the Beige Book’s value is not as a forecaster of economic activity, but in providing insight and context not found in formal forecasting mode","PeriodicalId":305484,"journal":{"name":"National Economic Trends","volume":"135 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133016876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Views expressed do not necessarily reflect official positions of the Federal Reserve System. According to the definition used in government statistics, a person who is actively looking for a paying job but is unable to find one is considered to be unemployed. Some positive level of unemployment always exists because (i) firms continually adjust the size of their work force in response to changing business conditions and (ii) it takes time for an unemployed worker to find a new job. It is common to use the unemployment rate—the number of unemployed workers divided by the total civilian labor force—to gauge labor market conditions, and this measure is closely watched by monetary policymakers as well as financial market participants. The unemployment rate usually rises during business recessions and falls during business expansions; and many economists, e.g., Lilien (1982), argue that sectoral shifts account for a large portion of the cyclical variation in unemployment.1 The underlying premise is as follows: When an economy is hit by an adverse shock, e.g., a sharp increase in crude oil prices, production resources—including labor—will move from more adversely affected sectors to less adversely affected sectors. Because of the presence of industry-specific skills and the time-consuming nature of the job search, the process of transferring workers across industries tends to be slow and involves spells of unemployment. Therefore, an increase in intersectoral shifts leads to higher unemployment by increasing the amount of labor reallocation. Loungani, Rush, and Tave (1990) suggest that stock market dispersion is a good proxy for the volume of intersectoral shifts.2 Intuitively, because stock prices are equal to expected discounted future cash flows, when stock prices in a sector go up (down), the sector is likely to experience increased (decreased) cash flows and thus demand more (less) labor input in the future. Consistent with Lilien’s (1982) conjecture, Loungani, Rush, and Tave document a significantly positive relation between stock market dispersion and the future unemployment rate using data for the period 1926-87. In the accompanying chart, I replicate Loungani, Rush, and Tave’s main finding for the period 1964:Q1 to 2006:Q4. The solid line is the log stock market dispersion, which is measured by the value-weighted average realized variance of idiosyncratic shocks to all common stocks included in the CRSP (Center for Research in Security Prices) database; it is lagged by one year.3 The dashed line is the change in the unemployment rate from its level one year ago. As hypothesized, the two variables tend to move in the same direction, with a correlation coefficient of 0.27. In particular, stock market dispersion appears to provide a good explanation for the movement of the labor market in the past few years. After the spectacular run-up in the second half of the 1990s, the prices of information technology stocks collapsed in the year 2000. This dramatic intersectoral shift showed up as a sharp increase in stock market dispersion, which was followed by a sharp rise in the unemployment rate. Moreover, as stock market dispersion eventually subsided, the unemployment rate fell toward the end of the sample.
Source: “Stock market dispersion and unemployment,” Hui Guo, National Economic Trends, doi:10.20955/ES.2007.5.
The overall state of the economy is often judged by economic statistics such as inflation, unemployment, and, of course, gross domestic product (GDP). Many of these economic statistics undergo substantial revisions. This is especially true for GDP, which is revised twice in the first three months after its initial release. In the month after each quarter, the Bureau of Economic Analysis (BEA) releases an advance estimate of GDP. In the two subsequent months, the BEA updates this estimate with preliminary and then final estimates. The initial estimates garner quite a bit of attention in the financial world, but how well do they reflect the true state of the economy? How well do they predict final GDP? The advance estimate of GDP is calculated with incomplete data from the quarter, including business inventories, housing, retail sales, and automobile sales. The preliminary estimate is released a month later and incorporates more data from the last month of the quarter. Even final GDP is subject to annual revisions, which have resulted in changes to prior GDP growth rates of more than 1.5 percentage points.1 Economists Karen Dynan and Douglas Elmendorf report that, from 1968 to 2001, the average revision of GDP growth from the advance to the final estimate was 0.67 percentage points. During the same period, revisions around peaks and troughs of the business cycle varied greatly. Near business cycle peaks, revisions were—on average—similar in magnitude to those during the rest of the business cycle. Near troughs, however, estimates were revised quite a bit more. When it comes to detecting the end of a recession, therefore, current GDP estimates may not be the best indicator. The magnitude of the revisions to GDP makes it unclear whether or not the most recent recession will conform to the rule of thumb that a recession includes at least two consecutive quarters of negative GDP growth. Advance and preliminary GDP estimates for the third quarter of 2001 were –0.4 percent and –1.1 percent, respectively. Final GDP growth was revised down to –1.3 percent. Fourth-quarter numbers were revised upward by 1.5 percentage points from the advance (0.2 percent) to the final estimate (1.7 percent). These revisions make it increasingly likely that the third quarter of 2001 was the only quarter in the recession with negative growth. Revisions aside, from 1978 to 1991, 88 percent of the time the advance estimate correctly established the direction of quarterly change in real GDP growth.2 Since total revisions do not tend to change the direction of the estimates, the initial numbers may be helpful when determining the direction in which GDP is heading, if not by how much. However, advance and preliminary estimates of GDP around business cycle turning points may be less accurate measures of output. One may take heart, though, that revisions to GDP appear to have gotten smaller (see accompanying figure) during two extended expansions.
{"title":"Subject to revision","authors":"Abbigail J. Chiodo, Michael T. Owyang","doi":"10.20955/ES.2002.14","DOIUrl":"https://doi.org/10.20955/ES.2002.14","url":null,"abstract":"T he overall state of the economy is often judged by economic statistics such as inflation, unemployment, and, of course, gross domestic product (GDP). Many of these economic statistics undergo substantial revisions. This is especially true for GDP, which is revised twice in the first three months after its initial release. In the month after each quarter, the Bureau of Economic Analysis releases an advance estimate of GDP. In the two subsequent months, the BEA updates this estimate with preliminary and then final estimates. The initial estimates garner quite a bit of attention in the financial world, but how well do they reflect the true state of the economy? How well do they predict final GDP? The advance estimate of GDP is calculated with incom plete data from the quarter including business inventories, housing, retail sales, and automobile sales. The preliminary estimate is released a month later and incorporates more data from the last month of the quarter. Even final GDP is subject to annual revisions, which have resulted in changes to prior GDP growth rates by more than 1.5 percentage points.1 Economists Karen Dynan and Douglas Elmendorf report that, from 1968 to 2001, the average revision of GDP growth from the advance to the final estimate was 0.67 percentage points. During the same period, revisions around peaks and troughs of the business cycle varied greatly. Near business cycle peaks, revisions were—on average—similar in magnitude to those during the rest of the business cycle. Near troughs, however, estimates were revised quite a bit more. When it comes to detecting the end of a recession, therefore, current GDP estimates may not be the best indicator. The magnitude of the revisions to GDP makes it unclear whether or not the most recent recession will conform to the rule of thumb that a recession includes at least two consecutive quarters of negative GDP growth. Advance and preliminary GDP estimates for the third quarter of 2001 were –0.4 percent and –1.1 percent, respectively. Final GDP growth was revised down to –1.3 percent. Fourth quarter numbers were revised upward by 1.5 percentage points from the advance (0.2 percent) to the final estimate (1.7 percent). These revisions make it increasingly likely that the third quarter of 2001 was the only quarter in the recession with negative growth. Revisions aside, from 1978 to 1991, 88 percent of the time the advance estimate correctly established the direction of quarterly change in real GDP growth.2 Since total revisions do not tend to change the direction of the estimates, the initial numbers may be helpful when determining the direction in which GDP is heading, if not by how much. However, advance and preliminary estimates of GDP around business cycle turning points may be less accurate measures of output. 
One may take heart, though, that revisions to GDP appear to have gotten smaller (see accompanying figure) during two extended expansions.","PeriodicalId":305484,"journal":{"name":"National Economic Trends","volume":"2002 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129676208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ringing In the New Year with an Investment Bust?
Real outlays by firms for nonresidential capital equipment and software (E&S) plunged 9 percent in 2001, a recession year; it was the largest decline since 1958 and the third largest since World War II. In an attempt to kick-start business investment, President Bush signed legislation in March 2002 that, among other things, allowed firms to immediately expense (depreciate) 30 percent of the cost of E&S purchased between September 10, 2001, and September 11, 2004, and put into service before January 2005. In subsequent tax legislation signed in May 2003, this partial expensing provision was raised to 50 percent and the purchase deadline was moved back to December 31, 2004. An increase in the depreciation allowance for capital goods increases the present value of the firm’s deductions for tax purposes, which, all else equal, lowers the cost of capital. Accordingly, when the partial expensing provision reverts to its original level on January 1, 2005, the present value of the depreciation deduction will be less—and the cost of capital will be higher—than it was on December 31, 2004. Although other factors were also probably at work, the recent growth of investment expenditures suggests that firms responded to this incentive, albeit with a lag. From 2001:Q4 to 2003:Q1, real E&S investment fell at about a 1 percent annual rate; in the second quarter of 2003, however, it surged at an 11 percent annual rate and has since increased at a 14.5 percent annual rate through the third quarter of 2004. With the expiration of the partial expensing provision fast approaching, some forecasters believe that many firms still plan to shift a portion of their planned capital expenditures from 2005 into 2004. If these expenditures are significant, we would expect to see an upsurge in business investment in the final quarter of 2004 and then a drop-off in the first quarter of 2005 (or later). Recent surveys compiled by the National Association of Business Economics and the Federal Reserve Bank of Philadelphia suggest that some firms have already shifted, or plan to shift, some of their capital outlays from 2005 into 2004.1 In August, forecasters expected a much larger slowdown in the growth of business fixed investment (BFI), from about 11 percent in 2004:Q4 to 6.5 percent in 2005:Q1.2 Since then, as seen in the table, forecasters have concluded that there will be both a smaller burst in investment spending in the fourth quarter and less of a lull in the first quarter. Even though forecasters repeatedly changed their assessment of the relative strength of investment spending in 2004:Q4 and 2005:Q1—as measured by the difference between the projected growth rates of real BFI in the two quarters—they do not foresee such a swing in real GDP growth. This pattern is consistent with the fact that business investment (roughly 10 percent of GDP) tends to be much more volatile than total spending. Thus, although forecasters expect some slowing in BFI growth in early 2005, they do not foresee an investment bust in the new year (or, as a result, a marked slowdown in real GDP growth).
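The mechanism described above (a larger up-front deduction raises the present value of depreciation allowances and thus lowers the cost of capital) can be illustrated with a stylized calculation. The Python sketch below assumes a $1 equipment purchase, a five-year straight-line schedule for whatever is not expensed immediately, and a 6 percent discount rate; these parameter values are hypothetical, not the tax code's actual schedules.

```python
# Present value of depreciation deductions per $1 of equipment when a
# fraction `bonus_share` can be expensed immediately and the remainder is
# depreciated straight-line. All parameter values are hypothetical.
def pv_deductions(bonus_share, life_years=5, discount=0.06):
    annual = (1.0 - bonus_share) / life_years
    pv_tail = sum(annual / (1.0 + discount) ** t
                  for t in range(1, life_years + 1))
    return bonus_share + pv_tail

for bonus in (0.0, 0.30, 0.50):
    print(f"bonus {bonus:.0%}: PV of deductions = {pv_deductions(bonus):.3f}")
# PV rises with the bonus share (about 0.84, 0.89, and 0.92 here), so the
# 30 and 50 percent provisions lower the after-tax cost of capital, and
# their expiration raises it, as the text argues.
```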
{"title":"Ringing in the new year with an investment bust","authors":"Kevin L. Kliesen","doi":"10.20955/ES.2004.29","DOIUrl":"https://doi.org/10.20955/ES.2004.29","url":null,"abstract":"Views expressed do not necessarily reflect official positions of the Federal Reserve System. Ringing In the New Year with an Investment Bust? Real outlays by firms for nonresidential capital equipment and software (E&S) plunged 9 percent in 2001, a recession year; it was the largest decline since 1958 and the third largest since World War II. In an attempt to kick-start business investment, President Bush signed legislation in March 2002 that, among other things, allowed firms to immediately expense (depreciate) 30 percent of the cost of E&S purchased between September 10, 2001, and September 11, 2004, and put into service before January 2005. In subsequent tax legislation signed in May 2003, this partial expensing provision was raised to 50 percent and the purchase date was moved back to December 31, 2004. An increase in the depreciation allowance for capital goods increases the present value of the firm’s deductions for tax purposes, which, all else equal, lowers the cost of capital. Accordingly, when the partial expensing provision reverts to its original level on January 1, 2005, the present value of the depreciation deduction will be less—and the cost of capital will be higher—than what it was on December 31, 2004. Although other factors were also probably at work, the recent growth of investment expenditures suggests that firms responded to this incentive, albeit with a lag. From 2001:Q4 to 2003:Q1, real E&S investment fell at about a 1 percent annual rate; however, in the second quarter of 2003, real E&S investment surged at an 11 percent annual rate and has since increased at a 14.5 percent annual rate through the third quarter of 2004. With the expiration of the partial expensing provision fast approaching, some forecasters believe that many firms still plan to shift a portion of their planned capital expenditures from 2005 into 2004. If these expenditures are significant, then we would expect to see an upsurge in business investment in the final quarter of 2004 and then a drop-off in the first quarter of 2005 (or later). Recent surveys compiled by the National Association of Business Economics and the Federal Reserve Bank of Philadelphia suggest that some firms have already shifted, or plan to shift, some of their capital outlays from 2005 into 2004.1 In August, forecasters expected a much larger slowdown in the growth of business fixed investment (BFI), from about 11 percent in 2004:Q4 to 6.5 percent in 2005:Q1.2 Since then, as seen in the table, forecasters have concluded that there will be both a smaller burst in investment spending in the fourth quarter and less of a lull in the first quarter. Even though forecasters repeatedly changed their assessment of the relative strength of investment spending in 2004:Q4 and 2005:Q1—as viewed by the difference between the projected growth rates of real BFI in 2004:Q4 and 2005:Q1—they do not foresee such a swing in real GDP growth. 
This pattern is consistent with the fact that business investme","PeriodicalId":305484,"journal":{"name":"National Economic Trends","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115328362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}