{"title":"有什么区别?","authors":"Lars Aagaard-Mogensen","doi":"10.1515/semi.1975.15.2.171","DOIUrl":null,"url":null,"abstract":"Each night on the news we hear the level of the Dow Jones Industrial Average along with the \"first difference,\" which is today's price-weighted average minus yesterday's. It is that series of first differences that excites or depresses us each night as it reflects whether stocks made or lost money that day. Furthermore, the differences form the data series that has the most addressable statistical features. In particular, the differences have the stationarity requirement, which justifies standard distributional results such as asymptotically normal distributions of parameter estimates. Differencing arises in many practical time series because they seem to have what are called \"unit roots,\" which mathematically indicate the need to take differences. In 1976, Dickey and Fuller developed the first well-known tests to decide whether differencing is needed. These tests are part of the by SAS® ARIMA procedure in SAS/ETS® in addition to many other time series analysis products. I'll review a little of what is was like to do the development and the required computing back then, say a little about why this is an important issue, and focus on examples. INTRODUCTION Most methodologies used in time series modelling and forecasting are either direct applications of autoregressive integrated moving average models (ARIMA) or are variations on or special cases of these. An example is exponential smoothing, a forecasting method in which the first differences of a time series, Yt -Yt-1, are modeled as a moving average et – θet-1, of independent error terms et. Most known theory involved in time series modelling is based on an assumption of second order stationarity. This concept is defined by the requirements that the expected value of Y is constant and the covariance between any two observations is a function only of their separation in time. This implies that the variance (time separation 0) is constant over time. In the specific case of ARIMA models, the roots of a polynomial constructed from the autoregressive coefficients determine whether the series is stationary. For example if Yt – 1.2Yt-1+0.2Yt-2=et, this so-called “characteristic polynomial” is m2-1.2m+.2 = (m-.2)(m-1) with roots m=0.2 and m=1 (a unit root). Unit roots imply that the series is not stationary but its first differences are as long as there is only one unit root and the rest are less than 1. Testing for unit roots has become a standard part of a time series analyst’s toolkit since the development of unit root tests, the first of which is the so-called Dickey-Fuller test named (by others) after Professor Wayne A. Fuller and myself. In this paper I will show some informative examples of situations in which unit root tests are applied and will reminisce a bit about the development of the test and the state of computing back in the mid 70’s when Professor Fuller and I were working on this. The intent of the paper is to show the reader how and when to use unit root tests and a little bit about how these differ from standard tests like regression t tests even though they use the same t test formulas. Results will only be reviewed. No mathematical theory or proofs are provided, only the results and how to use them along with a little bit of history. 
INTRODUCTORY EXAMPLES

The first example consists of the winning percentages of the San Francisco Giants, going all the way back to the team's start as the New York Gothams in 1883. Figure 1 graphs the series, with vertical lines marking the transitions from Gothams to New York Giants and then to San Francisco Giants. Do the data seem stationary? Visually, there does not seem to be any long-term trend in the data, and the variance appears reasonably constant over time. The guess might be that these are stationary data. Can we back that up with a formal statistical test?
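One way such a test could be requested is through the STATIONARITY= option of PROC ARIMA's IDENTIFY statement, which computes (augmented) Dickey-Fuller tests. A sketch, assuming the winning percentages sit in a data set GIANTS with variable PCT (these names are mine, not from the paper):

   proc arima data=giants;
      /* ADF=(0,1,2) requests augmented Dickey-Fuller tests
         with 0, 1, and 2 augmenting lags */
      identify var=pct stationarity=(adf=(0,1,2));
   run;

Under the unit root null hypothesis the test statistic does not follow the usual t distribution, so the procedure reports p-values from the Dickey-Fuller distributions rather than Student's t; a small p-value in the single-mean case would reject the unit root and support the visual impression that the winning percentages are stationary.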