An Analytical Evaluation of the Power of Tests for the Absence of Cointegration

This paper proposes a theoretical explanation for the common empirical finding that different tests for cointegration give different answers. Using a local-to-unity parametrization, I analytically compute the power of four tests for the null of no cointegration: the ADF test on the residuals of the cointegration regression, Johansen’s maximum eigenvalue test, the t-test on the error correction term, and Boswijk’s (1994) Wald test. The test statistics are shown to converge under a local alternative to random variables whose distributions are functions of Brownian motions and Ornstein-Uhlenbeck processes and of a single nuisance parameter. The nuisance parameter is determined by the correlation at frequency zero of the errors in the cointegration relation with the shocks to the right-hand-side variables. I show that, when this correlation is high, system approaches such as Johansen’s maximum eigenvalue test or tests based on the error correction model can exploit it and significantly outperform single-equation tests. Many of the conflicting results obtained from applying different tests can be attributed to different values of this nuisance parameter.

Residual-Based Tests for the Null of No Cointegration: An Analytical Comparison

This paper computes the asymptotic distribution of five residual-based tests for the null of no cointegration under a local alternative when the tests are computed using both OLS and GLS detrended variables. The local asymptotic power of the tests is shown to be a function of Brownian motions and Ornstein-Uhlenbeck processes, depending on a single nuisance parameter, which is determined by the correlation at frequency zero of the errors of the cointegration regression with the shocks to the right-hand-side variables. The tests are compared in terms of power in large and small samples. It is shown that, while no significant improvement can be achieved by using unit root tests other than the OLS-detrended t-test originally proposed by Engle and Granger (1987), the power of GLS residual tests can be higher than the power of system tests for some values of the nuisance parameter.

On the Failure of PPP for Bilateral Exchange Rates After 1973 (with Graham Elliott)

Point estimates suggest mean reversion after shocks in the real exchange rate; however, it remains uncomfortable that models without any mean reversion at all are often compatible with individual country-pair data from the floating period. Studies with data over longer periods find mean reversion, but at the cost of mixing in data from earlier exchange rate arrangements. Pooling the floating-period data for a number of countries also yields evidence of mean reversion, but at the expense of potentially mixing country pairs with and without mean reversion. We examine tests for mean reversion for individual country pairs in which greater power against close alternatives is gained by modeling other economic variables jointly with the real exchange rate. Taking into account monetary factors, for example, results in rejecting unit roots in the real exchange rates of European countries vis-à-vis the US dollar.

Optimal Power for Testing Potential Cointegrating Vectors with Known Parameters for Nonstationarity (with Graham Elliott and Michael Jansson)

Theory often specifies a particular cointegrating vector amongst integrated variables, and it is often required that one test for a unit root in the known cointegrating vector. Although it is common to simply employ a univariate unit root test for this purpose, it is known that this does not take into account all available information. We show that in such testing situations a family of tests with optimality properties exists. We use this result to characterize the extent of the loss in power from using popular methods, as well as to derive a test that works well in practice. We also characterize the extent of the losses from not imposing the cointegrating vector in the testing procedure. We apply various tests to the hypothesis that price forecasts from the Livingston data survey are cointegrated with prices, and find that although most tests fail to reject the presence of a unit root in forecast errors, the tests presented here strongly reject this (implausible) hypothesis.

The Decline in U.S. Output Volatility: Structural Changes and Inventory Investment (with Ana Maria Herrera)

Explanations for the decline in US output volatility since the mid-1980s include “better policy”, “good luck”, and technological change. Our multiple-break estimates suggest that reductions in volatility since the mid-1980s extend not only to manufacturing inventories but also to sales. This finding, along with the concentration of the reduction in inventory volatility in materials and supplies and the lack of a significant break in the inventory-sales covariance, implies that new inventory technology cannot account for the majority of the decline in output volatility.

Small Sample Confidence Intervals for Multivariate Impulse Response Functions at Long Horizons (with Barbara Rossi)

Existing methods for constructing confidence bands for multivariate impulse response functions may have poor coverage at long lead times when variables are highly persistent. The goal of this paper is to propose a simple method that is not pointwise and that is robust to the presence of highly persistent processes. We use approximations based on local-to-unity asymptotic theory, and allow the horizon to be a fixed fraction of the sample size. We show that our method has better coverage properties at long horizons than existing methods, and may provide different economic conclusions in empirical applications. We also propose a modification of this method which has good coverage properties at both short and long horizons.

Do Technology Shocks Drive Hours Up or Down? A Little Evidence From an Agnostic Procedure (with Barbara Rossi)

This paper analyzes the robustness of the estimated effect of a positive productivity shock on hours to the presence of a possible unit root in hours. Estimations in levels or in first differences provide opposite conclusions. We rely on an agnostic procedure in which the researcher does not have to choose between a specification in levels and one in first differences. We find that a positive productivity shock has a negative effect on hours, as in Francis and Ramey (2001), but the effect is much more short-lived and disappears after two quarters. The effect becomes positive at business cycle frequencies, as in Christiano et al. (2003).

“Impulse Responses Confidence Intervals for Persistent Data: What Have We Learned?” (with Barbara Rossi)

This paper provides a comprehensive comparison of existing methods for constructing confidence bands for univariate impulse response functions in the presence of high persistence. Monte Carlo results show that the methods proposed in Kilian (1999), Wright (2000), Gospodinov (2004) and Pesavento and Rossi (2005) have favorable coverage properties, although they differ in terms of robustness at various horizons, median unbiasedness, and reliability in the possible presence of a unit or mildly explosive root. On the other hand, methods like Runkle’s (1987) bootstrap, Andrews and Chen (1994), and regressions in levels or first differences (even when based on pre-tests) may not have accurate coverage properties. The paper makes recommendations as to the appropriateness of each method in empirical work.

“Oil Price Shocks, Systematic Monetary Policy and the ‘Great Moderation’” (with Ana Maria Herrera)

The U.S. economy has experienced a reduction in volatility since the mid-1980s. In this paper we investigate the changes in the response of the economy to an oil price shock and the role of the systematic monetary policy response in accounting for changes in the responses of output, prices, inventories, and sales, and for the overall decline in volatility. Our results suggest a smaller and more short-lived response of most macro variables during the Volcker-Greenspan period. It also appears that while the systematic monetary policy response dampened fluctuations in economic activity during the 1970s, it has had virtually no effect after the ‘Great Moderation’.

“The Comovement in Inventory Investments and in Sales: Higher and Higher” (with Ana Maria Herrera)

We re-examine changes in the cross-section correlation pattern of sales and inventories using Ng’s (2006) “uniform spacing” method, which permits the estimation of the number of correlated pairs and focuses on the conditional correlations. In contrast to the literature, we find that the correlation of shocks across industries increased after the ‘Great Moderation’.

“Sensitivity of Impulse Responses to Small Low Frequency Co-movements: Reconciling the Evidence on the Effects of Technology Shocks” (with Nikolay Gospodinov and Alex Maynard)

This paper clarifies the empirical source of the debate on the effect of technology shocks on hours worked. We find that the contrasting conclusions from levels and differenced VAR specifications can be explained by a small, but important, low frequency co-movement between hours worked and labour productivity growth, which is allowed for in the levels specification but is implicitly set to zero in the differenced VAR. Our theoretical analysis shows that, even when the root of hours is very close to one and the low frequency co-movement is quite small, assuming away or explicitly removing the low frequency component can have large implications for the long-run identifying restrictions, giving rise to biases large enough to account for the empirical difference between the two specifications.

“Testing the Null of No Cointegration When Covariates Are Known to Have a Unit Root” (with Graham Elliott)

A number of tests have been proposed for the null of no cointegration. Under this null, correlations are spurious in the sense of Granger and Newbold (1974) and Phillips (1986). We examine a set of models local to the null of no cointegration and derive tests with optimality properties in order to examine the efficiency of available tests. We find that, for a sufficiently tight weighting over potential cointegrating vectors, commonly employed full-system tests have power that can, in some situations, be quite far from the power bounds for the models examined.

“Near-Optimal Unit Root Test with Stationary Covariate with Better Finite Sample Size”

Numerous tests for integration and cointegration have been proposed in the literature. Since Elliott, Rothenberg and Stock (1996), the search for tests with better power has moved in the direction of finding tests with optimality properties in both univariate and multivariate models. Although the optimal tests constructed so far have asymptotic power that is indistinguishable from the power envelope, it is well known that they can have severe size distortions in finite samples. This paper proposes a simple and powerful test that can be used to test for a unit root or for no cointegration when the cointegrating vector is known. Although this test is not optimal in the sense of Elliott and Jansson (2003), it has better finite sample size properties while having asymptotic power curves that are indistinguishable from the power curves of optimal tests. Similarly to Hansen (1995), Elliott and Jansson (2003), Zivot (2000), and Elliott, Jansson and Pesavento (2005), the proposed test achieves higher power by using additional information contained in covariates correlated with the variable being tested. The test is constructed by applying Hansen’s test to variables that are detrended under the alternative in a regression augmented with leads and lags of the stationary covariates. Using a local-to-unity parametrization, the asymptotic distribution of the test under the null and the local alternative is computed analytically.
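The core of the covariate-augmented regression described above can be sketched as follows, in the spirit of Hansen (1995): regress the first difference of the tested variable on its lagged level and on leads and lags of a stationary covariate, and read off the t-statistic on the lagged level. The data-generating process, the correlation 0.7, and the single lead and lag are illustrative assumptions, and the detrending-under-the-alternative step of the proposed test is omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 250
w = rng.standard_normal(T + 2)                 # stationary covariate (hypothetical DGP)
e = 0.7 * w[1:T + 1] + rng.standard_normal(T)  # innovations correlated with the covariate
y = np.cumsum(e)                               # unit-root process: the null is true here

dy = np.diff(y)                                # Delta y_t
ylag = y[:-1]                                  # y_{t-1}
# one lead, the current value, and one lag of the covariate, aligned with Delta y_t
W = np.column_stack([w[3:T + 2], w[2:T + 1], w[1:T]])
X = np.column_stack([np.ones(T - 1), ylag, W])

# OLS and the t-statistic on the coefficient of y_{t-1}
b, *_ = np.linalg.lstsq(X, dy, rcond=None)
resid = dy - X @ b
s2 = resid @ resid / (len(dy) - X.shape[1])
se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
t_stat = b[1] / se                             # covariate-augmented DF-type statistic
print(round(t_stat, 2))
```

The gain over a plain Dickey-Fuller regression comes from the covariate soaking up part of the innovation variance, which tightens the standard error on the lagged level; the statistic is then compared with the appropriate non-standard critical values rather than normal ones.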
