Views expressed do not necessarily reflect official positions of the Federal Reserve System. The commodity futures market has changed dramatically over the past five years. The Goldman Sachs Commodity Index (GSCI) rose from 235 at the end of December 2002 to 787 on the last day of May 2008—an average annual commodity price inflation rate of 25 percent. Agricultural commodity prices rose at an annual rate of about 15 percent, and energy prices soared almost twice as fast—at 29 percent. Futures market participants normally include commercial hedgers and speculators. Commercial hedgers are firms that produce the commodity or use it in producing goods and services. For example, wheat farmers sell wheat ahead of the harvest to hedge against a falling price at harvest time. On the other side of the market, bread and pasta producers buy wheat in advance to hedge against the risk of rising prices in coming months (and years). Speculators bring liquidity to the market and are generally believed to make the market more efficient in discovering the equilibrium price. One big change in this market is the growth of index funds that invest in long positions. In 2002, only a small percentage of the long positions were held by such funds. Over the past five years, however, the index funds’ long positions have grown. They now represent a significant share of the investment in commodity futures. The rise of index funds has been accompanied by a rapid rise in the use of derivatives based on commodity price indices. Note, however, that not all of these investors are speculators. Although it is true that they are not hedging risk in the commodity markets, many large investors, including the employee pension funds for the federal government and the state of California, are using commodity futures index funds to hedge risk in the stock and bond markets. Why the rapid growth in the use of commodity futures to hedge risk in the stock and bond markets? Readers seeking to understand this change are referred to a recent research paper by Gary Gorton and Geert Rouwenhorst, in which they develop a data set on commodity futures prices that spans the period from July 1959 to December 2003.1 They analyze the return an investor would have earned on a long position in an equally weighted portfolio of investments in a broad set of commodity futures. They show that such an investment displays the risk-return characteristics of a similar investment in equities. The most interesting fact they uncover, however, is that the return to commodity futures was negatively correlated with returns on both stocks and bonds. Commodity futures returns are positively correlated with inflation, unexpected inflation, and changes in expected inflation. In other words, an investment in commodity futures would have been an effective hedge against the business cycle and inflation risk that had been thought difficult, if not impossible, to hedge. Of course, the history of returns likely would be different if
{"title":"Index funds: hedgers or speculators?","authors":"W. Gavin","doi":"10.20955/ES.2008.17","DOIUrl":"https://doi.org/10.20955/ES.2008.17","url":null,"abstract":"Views expressed do not necessarily reflect official positions of the Federal Reserve System. The commodity futures market has changed dramatically over the past five years. The Goldman Sachs Commodity Index (GSCI) rose from 235 at the end of December 2002 to 787 on the last day of May 2008—an average annual commodity price inflation rate of 25 percent. The price of agricultural commodities rose about 15 percent, and energy prices soared almost twice as fast—at 29 percent. Futures market participants normally include commercial hedgers and speculators. Commercial hedgers are firms that produce the commodity or use it in producing goods and services. For example, wheat farmers sell wheat ahead of the harvest to hedge against a falling price at harvest time. On the other side of the market, the bread and pasta producers buy wheat in advance to hedge against the risk of rising prices in coming months (and years). Speculators bring liquidity to the market and are generally believed to make the market more efficient in discovering the equilibrium price. One big change in this market is the growth of index funds that invest in long positions. In 2002, only a small percentage of the long positions were held by such funds. Over the past five years, however, the index funds’ long positions have grown. They now represent a significant share of the investment in commodity futures. The rise of index funds has been accompanied by a rapid rise in the use of derivatives based on commodity price indices. Note, however, that not all of these investors are speculators. Although it is true that they are not hedging risk in the commodity markets, many large investors, including the employee pension funds for the federal government and the state of California, are using the commodity futures index funds to hedge risk in the stock and bond markets. Why the rapid growth in the use of commodity futures to hedge risk in the stock and bond markets? Readers seeking to understand this change are referred to a recent research paper by Gary Gorton and Geert Rouwenhorst in which they develop a data set on commodity futures prices that spans the period from July 1959 to December 2003.1 They analyze the return an investor would have earned on a long position in an equally weighted portfolio of investments in a broad set of commodity futures. They show that such an investment displays the riskreturn characteristics of a similar investment in equities. The most interesting fact they uncover, however, is that the return to commodity futures was negatively correlated with returns in both stocks and bonds. The commodity future returns are positively correlated with inflation, unexpected inflation, and changes in expected inflation. In other words, an investment in commodity futures would have been an effective hedge against the business cycle and inflation risk that had been thought difficult, if not impossible, to hedge. 
Of course, the history of returns likely would be different if ","PeriodicalId":305484,"journal":{"name":"National Economic Trends","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132455553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
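As a quick check of the growth-rate arithmetic cited in the note above, the short Python sketch below computes the compound average annual growth implied by the GSCI's move from 235 to 787 over the 65 months from the end of December 2002 to the end of May 2008. The function name and layout are my own framing; only the index levels and dates come from the text.

```python
# Compound average annual growth rate implied by the GSCI levels cited above.
def annualized_growth(start_level, end_level, months):
    """Average annual growth rate between two index levels over a span of months."""
    years = months / 12.0
    return (end_level / start_level) ** (1.0 / years) - 1.0

print(f"{annualized_growth(235, 787, 65):.1%}")  # roughly 25% per year, as stated in the text
```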
Each year thousands of high school seniors make the important decision of where to go to college. With tuition at many schools rising faster than the rate of inflation, financing a college education is becoming increasingly challenging. (In fact, in the United States, the growth rate of college costs since 1990 has been, on average, nearly 3 percent higher than the overall inflation rate.) Offers of financial aid—a complex menu of grants, loans, and work-study—vary by school. Indeed, some students may consider a school’s academic attributes and their projected influence on the student’s lifetime earning potential as less important than the school’s financial aid package. Thus, the way students weight financial aid offers can have a substantial impact on their choice of college. Working with counselors from 510 U.S. high schools, economists Christopher Avery and Caroline Hoxby1 surveyed high-aptitude high school seniors (students likely to gain admission and merit scholarships from selective colleges) to study how students assess financial aid packages. In particular, they sought to determine how financial aid characteristics affect the probability that a student will choose a particular school, taking into account individual attributes: SAT score, GPA, legacy status, etc. Avery and Hoxby assert at the outset that there are distinguishing characteristics across financial aid packages that do not necessarily add value. Nevertheless, they find that approximately 30 percent of the students in their sample responded strongly to what are arguably trivial distinctions between financial aid packages. The first distinguishing characteristic is whether or not a grant is called a “scholarship.” Clearly, the amount of the grant, not its name, should be what matters. (In fact, the authors note that the amount of a grant is actually negatively correlated with its being designated a “scholarship.”) Nevertheless, Avery and Hoxby find that students are very responsive to this distinction when deciding which college to attend. Students may consider a named scholarship to be more impressive than an unnamed grant when listed on resumes or job applications—perhaps because scholarship connotes merit-based aid and grant connotes need-based aid. The second characteristic Avery and Hoxby consider is whether or not the grant is front-loaded, meaning the student receives more aid in his or her freshman year than in later years. An example would be a grant that gives $10,000 the first year and $2,000 in each of the subsequent three years, as opposed to a grant that gives $4,000 in each of the four years. Avery and Hoxby find strong student response to front-loading. Potential reasons for students to prefer front-loading are clear: Front-loading better allows students to consider the possibility of transferring to a different school after the first year or two; it gives parents more time to save money toward the total cost of college; and it gives parents the opportunity to ea
{"title":"Financial aid and college choice","authors":"Abbigail J. Chiodo, Michael T. Owyang","doi":"10.20955/ES.2003.20","DOIUrl":"https://doi.org/10.20955/ES.2003.20","url":null,"abstract":"E ach year thousands of high school seniors make the important decision of where to go to college. With tuition at many schools rising faster than the rate of inflation, financing a college education is becoming increasingly challenging. (In fact, in the United States, the growth rate of college costs since 1990 has been, on average, nearly 3 percent higher than the overall inflation rate.) Offers of financial aid—a complex menu of grants, loans, and work-study—vary by school. Indeed, some students may consider a school’s academic attributes and their projected influence on the student’s lifetime earning potential as less important than the school’s financial aid package. Thus, the way students weight financial aid offers can have a substantial impact on their choice of college. Working with counselors from 510 U.S. high schools, economists Christopher Avery and Caroline Hoxby1 surveyed high-aptitude high school seniors (students likely to gain admission and merit scholarships from selective colleges) to study how students assess financial aid pack ages. In particular, they sought to determine how financial aid characteristics affect the probability that the student will choose a particular school, taking into account individual attributes: SAT score, GPA, legacy status, etc. Avery and Hoxby assert at the outset that there are distinguishing characteristics across financial aid packages that do not necessarily add value. Nevertheless, they find that approxi mately 30 percent of the students in their sample responded strongly to what are arguably trivial distinctions between financial aid packages. The first distinguishing characteristic is whether or not a grant is called a “scholarship.” Clearly, the amount of the grant, not its name, should be what matters. (In fact, the authors note that the amount of a grant is actually negatively correlated to it being designated as a “scholarship.”) Nevertheless, Avery and Hoxby find that students are very responsive to this distinction when deciding which college to attend. Students may consider a named scholarship to be more impressive than an unnamed grant when listed on resumes or job applications—perhaps because scholarship connotes merit-based aid and grant connotes need-based aid. The second characteristic Avery and Hoxby consider is whether or not the grant is front-loaded, meaning the student receives more aid in his or her freshman year than in later years. An example would be a grant that gives $10,000 the first year and $2,000 each of the subsequent three years as opposed to a grant that gives $4,000 each of the four years. Avery and Hoxby find strong student response to front-loading. 
Potential reasons for students to prefer front-loading are clear: Front-loading better allows students to consider the possibility of transferring to a different school after the first year or two; it gives parents more time to save money toward the total cost of college; and it gives parents the opportunity to ea","PeriodicalId":305484,"journal":{"name":"National Economic Trends","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129859284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
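Avery and Hoxby's front-loading example translates naturally into present-value terms. The sketch below is a minimal illustration and is not taken from their paper; the 5 percent discount rate is an arbitrary assumption, used only to show that the front-loaded package is worth more even though both packages total $16,000.

```python
# Present value of the two grant schedules described above, assuming the first
# payment arrives today and later payments arrive at one-year intervals.
def present_value(payments, rate):
    """Discount a sequence of annual payments at the given rate."""
    return sum(p / (1.0 + rate) ** t for t, p in enumerate(payments))

front_loaded = [10_000, 2_000, 2_000, 2_000]  # $16,000 total, concentrated in year one
level        = [4_000, 4_000, 4_000, 4_000]   # $16,000 total, spread evenly

rate = 0.05  # hypothetical discount rate
print(f"Front-loaded: ${present_value(front_loaded, rate):,.0f}")  # about $15,446
print(f"Level:        ${present_value(level, rate):,.0f}")         # about $14,893
```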
Views expressed do not necessarily reflect official positions of the Federal Reserve System. The Federal Open Market Committee (FOMC) has increased the target federal funds rate at each meeting since June 2004, to 2.5 percent following the February 2005 meeting. A puzzling aspect of recent financial market developments has been that, despite the rise of 150 basis points in the Fed’s target rate, longer-term rates have remained roughly constant. In recent testimony, Fed Chairman Alan Greenspan commented extensively on this issue, eventually concluding that the market behavior “remains a conundrum.”1 There was a time when this same behavior would not have been considered so unusual. It occurred in the early 1960s, a cherished era among monetary economists and policymakers. That era sported relatively rapid growth in both real output and productivity, low inflation rates, and low rates of interest, not unlike the present day. Although many years have passed since that time, the intervening years have been associated with higher inflation—at times substantially higher—and have been dominated by Federal Reserve efforts to move inflation lower and build credibility for continued low inflation. The low level of inflation and high level of Fed credibility characteristic of the early 1960s returned in the early 2000s. Thus, the early 1960s may give a better indication of the nature of today’s financial markets than most of the intervening years. The chart shows the evolution of short- and longer-term interest rates during the 38 months following the last month of the recession relevant to each era. The most recent recession ended in November 2001, while for the earlier era it ended in February 1961. The chart shows the effective federal funds rate along with the longer-term 10-year Treasury note yield. The most striking characteristic is that in both eras, once the federal funds rate began rising following the recession, the longer-term bond yield remained anchored near 4 percent. One explanation is that, in both eras, the private sector viewed movements in short-term interest rates as exactly those necessary to keep inflation low and steady, so that longer-term inflation expectations and hence longer-term bond yields could remain anchored. Similarities in the interest rate environment may also be attributable in part to similarities in economic performance. The average annualized quarterly growth rates of key variables were quite similar during 2002:Q1 to 2004:Q4 and the corresponding 1961:Q2 to 1964:Q4 period. Average nonfarm business sector productivity growth, for instance, was almost identical during the two periods, 3.90 percent today versus 3.80 percent in the early 1960s. Inflation rates were similar as well, with the core consumer price index increasing, on average, 1.80 percent during the current period versus 1.40 percent during the earlier era. Real output growth was robust in both periods as well, 3.50 percent in the early 2000s versu
{"title":"Twist and shout, or back to the sixties","authors":"James Bullard","doi":"10.20955/es.2005.7","DOIUrl":"https://doi.org/10.20955/es.2005.7","url":null,"abstract":"Views expressed do not necessarily reflect official positions of the Federal Reserve System. The Federal Open Market Committee (FOMC) has increased the target federal funds rate at each meeting since June 2004, to 2.5 percent following the February 2005 meeting. A puzzling aspect of recent financial market developments has been that, despite the rise of 150 basis points in the Fed’s target rate, longer-term rates have remained roughly constant. In recent testimony, Fed Chairman Alan Greenspan commented extensively on this issue, eventually concluding that the market behavior “remains a conundrum.”1 There was a time when this same behavior would not have been considered so unusual. It occurred in the early 1960s, a cherished era among monetary economists and policymakers. That era sported relatively rapid growth in both real output and productivity, low inflation rates, and low rates of interest, not unlike the present day. Although many years have passed since that time, the intervening years have been associated with higher inflation—at times substantially higher—and have been dominated by Federal Reserve efforts to move inflation lower and build credibility for continued low inflation. The low level of inflation and high level of Fed credibility characteristic of the early 1960s returned in the early 2000s. Thus, the early 1960s may give a better indication of the nature of today’s financial markets than most of the intervening years. The chart shows the evolution of shortand longerterm interest rates during the 38 months following the last month of the recession relevant to each era. The most recent recession ended in November 2001, while for the earlier era it ended in February 1961. The chart shows the effective federal funds rate along with the longer-term 10-year Treasury note yield. The most striking characteristic is that in both eras, once the federal funds rate began rising following the recession, the longer-term bond yield remained anchored near 4 percent. One explanation is that, in both eras, the private sector viewed movements in short-term interest rates as exactly those necessary to keep inflation low and steady, so that longer-term inflation expectations and hence longer-term bond yields could remain anchored. Similarities in the interest rate environment may also be attributable in part to similarities in economic performance. The average annualized quarterly growth rates of key variables were a lot alike during 2002:Q1 to 2004:Q4 as compared with the corresponding 1961:Q2 to 1964:Q4 period. Average nonfarm business sector productivity growth, for instance, was almost identical during the two periods, 3.90 percent today versus 3.80 percent in the early 1960s. Inflation rates were similar as well, with the core consumer price index increasing, on average, 1.80 percent during the current period versus 1.40 percent during the earlier era. 
Real output growth was robust in both periods as well, 3.50 percent in the early 2000s versu","PeriodicalId":305484,"journal":{"name":"National Economic Trends","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133411084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Views expressed do not necessarily reflect official positions of the Federal Reserve System. Modern economic theory of foreign exchange rates stipulates that the Deutsche mark/U.S. dollar rate, for example, is equal to discounted future fundamentals—e.g., aggregate income, interest rates, and monetary aggregates—in both the United States and Germany. A substantial portion of the variation in these fundamental macroeconomic variables is predictable across time; therefore, fundamentals should provide important information about future movements in exchange rates. In an influential paper, Meese and Rogoff (1983), however, find that a simple random walk model, in which the forecasted value is the most recent realization, outperforms various forecasting models, including those using economic fundamentals as predictors.1 Meese and Rogoff’s result has inspired numerous empirical investigations of exchange rate predictability, and their conclusion has proven to be strikingly robust after 20 years of fresh data and intensive academic research. In light of seemingly compelling evidence, some recent authors argue that exchange rates are indeed unpredictable—possibly because some shocks have a permanent effect on economic fundamentals. In particular, if people discount the future very little relative to the present, then exchange rates could follow a process close to a random walk. Other economists, however, argue that exchange rates are predictable and that existing empirical studies suffer from various misspecifications. For example, some crucial fundamental determinants of exchange rates may have been omitted. Also, many macroeconomic variables are subject to periodic revisions; therefore, the current vintage data, which have been commonly used in the literature, do not contain the same information as that available to investors at the time of forecast. To address these issues, Guo and Savickas (2005) propose using financial variables, which are broad measures of business conditions and are never revised, to predict exchange rates.2 Guo and Savickas find that a measure of U.S. aggregate idiosyncratic volatility (IV) is a strong predictor of the exchange rates of the U.S. dollar against major foreign currencies, especially at relatively long horizons. An idiosyncratic shock to a stock is the part of the stock return that is not explained by asset pricing models. To measure IV, Guo and Savickas first estimate idiosyncratic shocks to all (U.S.) common stocks included in the CRSP (Center for Research in Security Prices) database; they then aggregate the realized variance of idiosyncratic shocks across stocks using the share of market capitalization as the weight. The accompanying chart plots IV from the last quarter of each year (in natural logarithms, solid line) along with one-year-ahead changes (December 31 to December 31 of the following year, dashed line) in the Deutsche mark/U.S. dollar rate over the period 1973 to 1998 and the Euro/U.S. dollar rate over the
{"title":"Foreign exchange rates are predictable","authors":"Hui Guo","doi":"10.20955/ES.2005.20","DOIUrl":"https://doi.org/10.20955/ES.2005.20","url":null,"abstract":"Views expressed do not necessarily reflect official positions of the Federal Reserve System. Modern economic theory of foreign exchange rates stipulates that the Deutsche mark/U.S. dollar rate, for example, is equal to discounted future fundamentals—e.g., aggregate income, interest rates, and monetary aggregates—in both the United States and Germany. A substantial portion of the variation in these fundamental macroeconomic variables is predictable across time; therefore, fundamentals should provide important information about future movements in exchange rates. In an influential paper, Meese and Rogoff (1983), however, find that a simple random walk model, in which the forecasted value is the most recent realization, outperforms various forecasting models, including those using economic fundamentals as predictors.1 Meese and Rogoff’s result has inspired numerous empirical investigations of exchange rate predictability, and their conclusion has proven to be strikingly robust after 20 years of fresh data and intensive academic research. In light of seemingly compelling evidence, some recent authors argue that exchange rates are indeed unpredictable—possibly because some shocks have a permanent effect on economic fundamentals. In particular, if people discount the future very little relative to the present, then exchange rates could follow a process close to a random walk. Other economists, however, argue that exchange rates are predictable and that existing empirical studies suffer from various misspecifications. For example, some crucial fundamental determinants of exchange rates may have been omitted. Also, many macroeconomic variables are subject to periodic revisions; therefore, the current vintage data, which have been commonly used in the literature, do not contain the same information as that available to investors at the time of forecast. To address these issues, Guo and Savickas (2005) propose using financial variables, which are broad measures of business conditions and never revised, to predict exchange rates.2 Guo and Savickas find that a measure of U.S. aggregate idiosyncratic volatility (IV) is a strong predictor of the exchange rates of the U.S. dollar against major foreign currencies, especially at relatively long horizons. An idiosyncratic shock to a stock is the part of the stock return that is not explained by asset pricing models. To measure IV, Guo and Savickas first estimate idiosyncratic shocks to all (U.S.) common stocks included in the CRSP (Center for Research in Security Prices) database; they then aggregate the realized variance of idiosyncratic shocks across stocks using the share of market capitalization as the weight. The accompanying chart plots IV from the last quarter of each year (in natural logarithms, solid line) along with one-yearahead changes (December 31 to December 31 of the following year, dashed line) in the Deutsche mark/U.S. dollar rate over the period 1973 to 1998 and the Euro/U.S. 
dollar rate over the","PeriodicalId":305484,"journal":{"name":"National Economic Trends","volume":"24 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129817034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
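The IV construction described above can be sketched in a few lines. The code below is an illustrative reconstruction, not Guo and Savickas's actual procedure: it assumes a one-factor market model for estimating idiosyncratic shocks (the summary does not say which asset pricing model they use) and takes plain arrays in place of the CRSP data. Under this sketch, the IV for a given quarter would be computed by passing that quarter's daily returns and the stocks' market capitalizations.

```python
import numpy as np

def aggregate_idiosyncratic_volatility(stock_returns, market_returns, market_caps):
    """Value-weighted aggregate idiosyncratic volatility for one period.

    stock_returns  : (T, N) array of returns for N stocks over T days
    market_returns : (T,) array of market returns over the same days
    market_caps    : (N,) array of market capitalizations used as weights
    """
    T, N = stock_returns.shape
    X = np.column_stack([np.ones(T), market_returns])    # intercept + market factor
    realized_var = np.empty(N)
    for i in range(N):
        coef, *_ = np.linalg.lstsq(X, stock_returns[:, i], rcond=None)
        resid = stock_returns[:, i] - X @ coef            # idiosyncratic shocks
        realized_var[i] = np.sum(resid ** 2)              # realized variance of the shocks
    weights = market_caps / market_caps.sum()             # share of market capitalization
    return float(weights @ realized_var)                  # value-weighted aggregate IV
```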
The chorus from Travis’s 1947 song about the plight of coal miners might ring true for someone looking at average hourly earnings (AHE) of production and nonsupervisory workers. By this measure, as shown in the chart, the pay for an hour of work fell in real terms by 3 percent between 1975 and 2006. Is the average worker actually receiving less per hour of work today than 31 years ago? The answer is likely no. In fact, an alternative measure of compensation, national labor income per hour, increased 44 percent during this period. What accounts for these conflicting results and why do we conclude that the average worker’s real compensation per hour has increased since the mid-1970s? Both the AHE and the national labor income series are adjusted for inflation. However, AHE is adjusted using the consumer price index for all urban wage earners and clerical workers (CPI-W), while national labor income per hour is adjusted using the personal consumption expenditures (PCE) implicit price deflator. To calculate the purchasing power of an hour of work, it is more appropriate to use the PCE implicit price deflator to adjust for inflation because this index better reflects the basket of goods and services actually consumed. Unlike the CPI-W, which assumes that the same basket of goods and services is purchased for several years, the PCE deflator is calculated using expenditures from the current and preceding period. After applying the PCE deflator, AHE show an 11 percent increase rather than a 3 percent decrease between 1975 and 2006. Another difference in the construction of the two data series is that national labor income per hour includes not only wages and salaries, but also fringe benefits. Given the importance of benefits to a worker’s standard of living, we think many would disagree with the use of the label “fringe.” The benefits of employer contributions to workers’ pension and insurance funds and to government social insurance are included in national labor income per hour, but are not in the AHE series.1 These benefits have become a larger share of worker compensation over time, rising from 14 percent in 1975 to 19 percent in 2006. Once the AHE data are adjusted to include estimated benefits per hour and the PCE deflator is applied, the calculated increase in real wages and benefits reaches 16 percent between 1975 and 2006. Without question, the 16 percent increase in average hourly earnings following the two adjustments described above remains far short of the 44 percent increase in national labor income per hour. What accounts for the remaining difference is unclear. Part of the difference is likely due to the fact that the AHE is restricted to production and nonsupervisory workers. What is clear, however, is that the average worker is receiving more in 2006 for “sixteen tons” than 31 years ago.
{"title":"What do you get for \"Sixteen Tons\"?","authors":"Cletus C. Coughlin, Lesli S. Ott","doi":"10.20955/es.2007.29","DOIUrl":"https://doi.org/10.20955/es.2007.29","url":null,"abstract":"The chorus from Travis’s 1947 song about the plight of coal miners might ring true for someone looking at average hourly earnings (AHE) of production and nonsupervisory workers. By this measure, as shown in the chart, the pay for an hour of work fell in real terms by 3 percent between 1975 and 2006. Is the average worker actually receiving less per hour of work today than 31 years ago? The answer is likely no. In fact, an alternative measure of compensation, national labor income per hour, increased 44 percent during this period. What accounts for these conflicting results and why do we conclude that the average worker’s real compensation per hour has increased since the mid-1970s? Both the AHE and the national labor income series are adjusted for inflation. However, AHE is adjusted using the consumer price index for all urban wage earners and clerical workers (CPI-W), while national labor income per hour is adjusted using the personal consumption expenditures (PCE) implicit price deflator. To calculate the purchasing power of an hour of work, it is more appropriate to use the PCE implicit price deflator to adjust for inflation because this index better reflects the basket of goods and services actually consumed. Contrary to the CPI-W, which assumes that the same basket of goods and services is purchased for several years, the PCE deflator is calculated using expenditures from the current and preceding period. After applying the PCE deflator, AHE show an 11 percent increase rather than a 3 percent decrease between 1975 and 2006. Another difference in the construction of the two data series is that national labor income per hour includes not only wages and salaries, but also fringe benefits. Given the importance of benefits to a worker’s standard of living, we think many would disagree with the use of the label “fringe.” The benefits of employer contributions to worker’s pension and insurance funds and to government social insurance are included in national labor income per hour, but are not in the AHE series.1 These benefits have become a larger share of worker compensation over time, rising from 14 percent in 1975 to 19 percent in 2006. Once the AHE data are adjusted to include estimated benefits per hour and the PCE deflator is applied, the calculated increase in real wages and benefits reaches 16 percent between 1975 and 2006. Without question, the 16 percent increase in average hourly earnings following the two adjustments described above remains far short of the 44 percent increase in national labor income per hour. What accounts for the remaining difference is unclear. Part of the difference is likely due to the fact that the AHE is restricted to production and nonsupervisory workers. 
What is clear, however, is that the average worker is receiving more in 2006 for “sixteen tons” than 31 years ago.","PeriodicalId":305484,"journal":{"name":"National Economic Trends","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128191065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
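The deflating arithmetic behind those figures is simple to reproduce. In the sketch below, the wage and price-index levels are placeholders rather than the actual 1975 and 2006 data; they are chosen only so that the same nominal series shows roughly the 3 percent real decline under a CPI-W-style deflator and the 11 percent real gain under a PCE-style deflator described above.

```python
# Cumulative real growth of a nominal series under two different price deflators.
# All levels below are placeholders chosen only to reproduce the approximate
# -3 percent (CPI-W) and +11 percent (PCE) results discussed in the text.
def real_growth(nominal_start, nominal_end, deflator_start, deflator_end):
    """Real growth = nominal growth deflated by the change in the price index."""
    return (nominal_end / nominal_start) / (deflator_end / deflator_start) - 1.0

ahe_1975, ahe_2006 = 4.73, 16.76        # hypothetical nominal AHE, dollars per hour
cpi_w_1975, cpi_w_2006 = 100.0, 365.0   # hypothetical CPI-W index levels
pce_1975, pce_2006 = 100.0, 319.0       # hypothetical PCE deflator index levels

print(f"Deflated by CPI-W: {real_growth(ahe_1975, ahe_2006, cpi_w_1975, cpi_w_2006):+.1%}")
print(f"Deflated by PCE:   {real_growth(ahe_1975, ahe_2006, pce_1975, pce_2006):+.1%}")
```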
Views expressed do not necessarily reflect official positions of the Federal Reserve System. U.S. firms rarely engage in international trade. In 2000, for example, there were 5.5 million firms in the United States; of these only about 4 percent were exporters. And the top 10 percent of these exporters accounted for 96 percent of total U.S. exports. Not surprisingly, goods-producing firms account for the majority of exports (as measured by value). The table shows the distribution of exporting firms among 10 manufacturing industries ranked by their share of total manufacturing employment in 2002. Most manufacturing industries have some firms that export, but the share of those firms in each industry is relatively small, varying between 12 and 38 percent for the larger industries and between 12 and 25 percent for the smaller industries. Furthermore, across all industries, on average, a firm’s foreign shipments represent only a small proportion (never exceeding 21 percent) of total shipments. In manufacturing as a whole in 2002, only 18 percent of firms were exporters and only about 14 percent of total firm shipments were exports. Not only are exporting firms rare, they also stand out in several ways: Studies show that exporting firms are more productive in terms of value-added per worker and total factor productivity and that they ship a higher volume of products. They use more skilled workers, capital, and sophisticated technology than non-exporting firms. They also pay higher wages and are more innovative. (These differences persist even after accounting for firm size and industry type.) On the surface, exporting seems beneficial. So why don’t more firms export? One important distinction may offer a clue: Although the productivity level of exporting firms is higher than that of non-exporting firms, their productivity growth is not—which suggests that high productivity is a requirement for and not a consequence of engaging in international trade. High entry costs for exporting may be a barrier to all but the most efficient firms. At the same time, economists have also found that once firms begin exporting they experience faster growth than non-exporting firms in both employment and output (in both foreign and domestic shipments).1 Andrew Bernard and J. Bradford Jensen, along with coauthors, argue that the higher initial productivity of exporters, combined with higher output and employment growth after entry, suggests an important role for trade liberalization (a reduction of trade barriers) in improving the aggregate productivity of the economy.2 The reason is that a reduction in trade barriers would improve the profits of existing exporting firms and would reduce the initial productivity level necessary for additional firms to enter the export market. This additional entry would, in turn, generate an increased demand for labor and therefore higher wages. Low-productivity non-exporting firms would be forced to exit the industry, and both capital an
{"title":"U.S. exporters: a rare breed","authors":"Rubén Hernández-Murillo","doi":"10.20955/ES.2007.20","DOIUrl":"https://doi.org/10.20955/ES.2007.20","url":null,"abstract":"Views expressed do not necessarily reflect official positions of the Federal Reserve System. U.S. firms rarely engage in international trade. In 2000, for example, there were 5.5 million firms in the United States; of these only about 4 percent were exporters. And the top 10 percent of these exporters accounted for 96 percent of total U.S. exports. Not surprisingly, goods-producing firms account for the majority of exports (as measured by value). The table shows the distribution of exporting firms among 10 manufacturing industries ranked by their share of total manufacturing employment in 2002. Most manufacturing industries have some firms that export, but the share of those firms in each industry is relatively small, varying between 12 and 38 percent for the larger industries and between 12 and 25 percent for the smaller industries. Furthermore, across all industries, on average, a firm’s foreign shipments represent only a small proportion (never exceeding 21 percent) of total shipments. In manufacturing as a whole in 2002, only 18 percent of firms were exporters and only about 14 percent of total firm shipments were exports. Not only are exporting firms rare, they also stand out in several ways: Studies show that exporting firms are more productive in terms of value-added per worker and total factor productivity and that they ship a higher volume of products. They use more skilled workers, capital, and sophisticated technology than non-exporting firms. They also pay higher wages and are more innovative. (These differences persist even after accounting for firm size and industry type.) On the surface, exporting seems beneficial. So why don’t more firms export? One important distinction may offer a clue: Although the productivity level of exporting firms is higher than that of non-exporting firms, their productivity growth is not—which suggests that high productivity is a requirement for and not a consequence of engaging in international trade. High entry costs for exporting may be a barrier to all but the most efficient firms. At the same time, economists have also found that once firms begin exporting they experience faster growth than non-exporting firms in both employment and output (in both foreign and domestic shipments).1 Andrew Bernard and J. Bradford Jensen, along with coauthors, argue that the higher initial productivity of exporters, combined with higher output and employment growth after entry, suggests an important role for trade liberalization (a reduction of trade barriers) in improving the aggregate productivity of the economy.2 The reason is that a reduction in trade barriers would improve the profits of existing exporting firms and would reduce the initial productivity level necessary for additional firms to enter the export market. This additional entry would, in turn, generate an increased demand for labor and therefore higher wages. 
Low-productivity non-exporting firms would be forced to exit the industry, and both capital an","PeriodicalId":305484,"journal":{"name":"National Economic Trends","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114474410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Saving for a rainy day","authors":"T. Garrett","doi":"10.20955/ES.2004.9","DOIUrl":"https://doi.org/10.20955/ES.2004.9","url":null,"abstract":"","PeriodicalId":305484,"journal":{"name":"National Economic Trends","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116836475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cities as centers of innovation","authors":"Rubén Hernández-Murillo","doi":"10.20955/ES.2003.7","DOIUrl":"https://doi.org/10.20955/ES.2003.7","url":null,"abstract":"","PeriodicalId":305484,"journal":{"name":"National Economic Trends","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127721151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In 2010, the first of the Baby Boom generation will reach age 65. Many will choose to begin what they hope will be a long and financially secure retirement funded in part by Social Security. Social Security, however, has a looming fiscal problem. Social Security payments to current recipients are funded mainly by taxes levied on current workers. As more and more baby boomers retire, the number of persons receiving Social Security benefits will increase rapidly relative to the number of persons paying taxes to fund those benefits. According to the Trustees of the Social Security and Medicare Trust Funds, by 2017 Social Security benefit payments will exceed payroll tax revenues and by 2041 all trust fund assets likely will be exhausted.1 The Social Security System’s revenue shortfall mainly reflects a rising elderly dependency ratio: that is, the number of elderly persons (65+ years) relative to the number of working-age persons (20 to 64 years). As shown in the chart, in 1950 there were some 14 persons age 65 and older for every 100 persons between the ages of 20 and 64. By 2000, there were 20; and, as more of the baby boom generation reaches age 65, the ratio will rise to 35 by 2030. Although the coming stampede of baby boomers will cause the dependency ratio to increase sharply after 2010, rising adult life expectancy has been a major reason why the dependency ratio has risen and will continue to rise. The life expectancy of the typical 65-year-old man has risen over the years: In 1940, he could expect to live another 12.7 years; by 2005, another 17.1 years; and demographers project that, by 2030, it will be another 18.7 years. Although the age at which persons are eligible for full Social Security benefits—long fixed at 65—will gradually rise to 67 by 2025, this won’t prevent System revenues from falling short of payments. Declining fertility has also contributed to this rising dependency ratio. In 1950, the U.S. fertility rate was 3.0, meaning the average woman had three children in her lifetime. By 2002, the fertility rate had fallen to 2.0. This decline implies that fewer young persons will enter the labor force to support the growing elderly population. Clearly, the looming Social Security funding crisis largely reflects changing U.S. demographics. The aging baby boom generation, increased adult life expectancy, and declining fertility will rapidly increase the number of retired persons drawing benefits relative to persons paying taxes to fund those benefits. Possible solutions to the problem include policies to (i) slow the growth in the number of retired persons per worker, perhaps by larger and more rapid increases in the age at which persons become eligible for benefits; (ii) otherwise reduce promised benefits; (iii) encourage more immigration of young workers; and/or (iv) substantially raise taxes on current workers. A more radical proposal would replace all or part of the existing system with a syst
{"title":"Can Social Security survive the baby boomers","authors":"C. Aubuchon, David C. Wheelock","doi":"10.20955/ES.2007.22","DOIUrl":"https://doi.org/10.20955/ES.2007.22","url":null,"abstract":"In 2010, the first of the Baby Boom generation will reach age 65. Many will choose to begin what they hope will be a long and financially secure retirement funded in part by Social Security. Social Security, however, has a looming fiscal problem. Social Security payments to current recipients are funded mainly by taxes levied on current workers. As more and more baby boomers retire, the number of persons receiving Social Security benefits will increase rapidly relative to the number of persons paying taxes to fund those benefits. Accord ing to the Trustees of the Social Security and Medicare Trust Funds, by 2017 Social Security benefit payments will exceed payroll tax revenues and by 2041 all trust fund assets likely will be exhausted.1 The Social Security System’s revenue shortfall mainly reflects a rising elderly dependency ratio: that is, the number of elderly persons (65+ years) relative to the number of working-age persons (20 to 64 years). As shown in the chart, in 1950 there were some 14 persons age 65 and older for every 100 persons between the ages of 20 and 64. By 2000, there were 20; and, as more of the baby boom generation reaches age 65, the ratio will rise to 35 by 2030. Although the coming stampede of baby boomers will cause the dependency ratio to increase sharply after 2010, rising adult life expectancy has been a major reason why the dependency ratio has risen and will continue to rise. The life expectancy of the typical 65year-old man has risen over the years: In 1940, he could expect to live another 12.7 years; by 2005, he could expect to live another 17.1 years; and demographers expect that, by 2030, he could expect to live another 18.7 years. Although the age at which persons are eligible for full Social Security benefits—long fixed at 65—will gradually rise to 67 by 2025, this won’t prevent System revenues from falling short of payments. Declining fertility has also contributed to this rising dependency ratio. In 1950, the U.S. fertility rate was 3.0, meaning the average woman had three children in her lifetime. By 2002, the fertility rate had fallen to 2.0. This decline implies that fewer young persons will enter the labor force to support the growing elderly population. Clearly, the looming Social Security funding crisis largely reflects changing U.S. demographics. The aging baby boom generation, increased adult life expectancy, and declining fertility will rapidly increase the number of retired persons drawing benefits relative to persons paying taxes to fund those benefits. Possible solutions to the problem include policies to (i) slow the growth in the number of retired persons per worker, perhaps by larger and more rapid increases in the age at which persons become eligible for benefits; (ii) otherwise reduce promised benefits; (iii) encourage more immigration of young workers; and/or (iv) substantially raise taxes on current workers. 
A more radical proposal would replace all or part of the existing system with a syst","PeriodicalId":305484,"journal":{"name":"National Economic Trends","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125851517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
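The elderly dependency ratio discussed above is simply the number of persons age 65 and older per 100 persons of working age (20 to 64). The sketch below illustrates the calculation; the population counts are hypothetical round numbers, and only the resulting ratios (about 14, 20, and a projected 35) track the figures in the text.

```python
# Elderly dependency ratio: persons age 65+ per 100 persons age 20-64.
def elderly_dependency_ratio(pop_65_plus, pop_20_to_64):
    return 100.0 * pop_65_plus / pop_20_to_64

# Hypothetical populations in millions; only the ratios track the figures in the text.
for year, age_65_plus, age_20_to_64 in [(1950, 12.4, 88.5), (2000, 35.0, 174.0), (2030, 71.0, 203.0)]:
    print(year, round(elderly_dependency_ratio(age_65_plus, age_20_to_64)))
```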
Views expressed do not necessarily reflect official positions of the Federal Reserve System. On the first Friday of each month, the Bureau of Labor Statistics (BLS) releases its closely scrutinized monthly employment report. Data in this report are derived from two surveys: the Census Bureau’s Current Population Survey, also known as the household survey; and the Current Employment Statistics survey (CES), which is a survey of nonagricultural business establishments (including government offices).1 One of the limitations of the CES is that data for average weekly hours and average hourly earnings are reported for only a subset of workers: currently, production, construction, and non-supervisory workers. The production classification is used in the goods-producing sector, while the non-supervisory classification is used in the service-producing industries. Workers in these two categories account for about 80 percent of private nonagricultural employment. According to the BLS, this classification system has become increasingly archaic. Many employers do not classify workers by these two categories, which has led to relatively high nonresponse rates.2 Looking to the future, the BLS began publishing an experimental series in April 2006 that measures average hourly earnings and average weekly hours of all nonfarm private-sector employees. The BLS also began publishing an experimental gross monthly earnings series that includes both wages and salaries and benefits such as bonuses, stock options, and employer contributions to 401(k) plans. The existing BLS average hourly earnings series excludes these kinds of benefits. These experimental data are relatively new and so are not seasonally adjusted, as the existing data are. Moreover, these data are published with a two-month lag. (For example, if the official data are available for January 2008, the experimental series are available only through November 2007.) The experimental all-employee series for hours and earnings will become the official data in February 2010, with the release of the January 2010 data. At that time the BLS believes that it will have had enough time to reliably estimate monthly seasonal factors. These experimental hours and earnings series have potentially significant implications for measures of nonfarm business productivity and personal income. For example, the BLS uses the CES-based series of hours paid for production and nonsupervisory workers as the key input into its measure of hours worked (the denominator in output per hour).3 If the existing CES series is not reporting a complete picture of hours worked, then measures of productivity will be affected. The charts show the percentage difference between the existing and experimental series for average hourly earnings and average weekly hours (not seasonally adjusted): The experimental measure of hours is on average between 1 and 3 percent higher than the existing measure. The experimental measure of average hourly earnings is also consistently higher than the existing measure: as of November 2007, the difference was about 19.5 percent ($21.12 versus $17.63). These charts seem to suggest that workers who are not currently classified as production or nonsupervisory employees tend to work longer hours and earn more per hour. 1. For more details, see the BLS Handbook of Methods; www.bls.gov/opub/hom/home.htm.
{"title":"An expanded look at employment","authors":"Kevin L. Kliesen","doi":"10.20955/es.2008.7","DOIUrl":"https://doi.org/10.20955/es.2008.7","url":null,"abstract":"Views expressed do not necessarily reflect official positions of the Federal Reserve System. On the first Friday of each month, the Bureau of Labor Statistics (BLS) releases its closely scrutinized monthly employment report. Data in this report are derived from two surveys: the Census Bureau’s Current Population Survey, also known as the household survey; and the Current Employ ment Statistics survey (CES), which is a survey of nonagricultural business establishments (including government offices).1 One of the limitations of the CES is that data for average weekly hours and average hourly earnings are reported for only a subset of workers: currently, production, construction, and non-supervisory workers. The production classification is used in the goods-producing sector, while the non-supervisory classification is used in the service-producing industries. Workers in these two categories account for about 80 percent of private nonagricultural employment. According to the BLS, this classification system has become increasingly archaic. Many employers do not classify workers by these two categories, which has led to relatively high non response rates.2 Looking to the future, the BLS began publishing an experimental series in April 2006 that measures average hourly earnings and average weekly hours of all nonfarm privatesector employees. The BLS also began publishing an experimental gross monthly earnings series that includes both wages and salaries and benefits such as bonuses, stock options, and employer contributions to 401(k) plans. The existing BLS average hourly earnings series excludes these kinds of benefits. These experimental data are relatively new and so are not seasonally adjusted, as the existing data are. Moreover, these data are published with a two-month lag. (For example, if the official data are available for January 2008, the experimental series are available only through November 2007.) The experimental all-employee series for hours and earnings will become the official data in February 2010, with the release of the January 2010 data. At that time the BLS believes that it will have had enough time to reliably estimate monthly seasonal factors. These experimental hours and earnings series have potentially significant implications for measures of nonfarm business productivity and personal income. For example, the BLS uses the CES-based series of hours paid of production and nonsupervisory workers as the key input into its measure of hours worked (the denominator in output per hour).3 If the existing CES series is not reporting a complete picture of hours worked, then measures of productivity will be affected. The charts show the percentage difference between the existing and experimental series for average hourly earnings and average weekly hours (not seasonally adjusted): The experimental measure of hours is on average between 1 and 3 percent higher than the existing measure. 
The experimental measure of average hourly earnings is","PeriodicalId":305484,"journal":{"name":"National Economic Trends","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131885407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}