
Regressions and Econometric Results

Autocorrelated error terms violate one of the assumptions of the classical linear regression model (CLRM). Indeed, one of the major assumptions of the CLRM is that the error terms are serially uncorrelated, i.e. E(ut ut-j) = 0 for j = 1, 2, ..., where u is the error term and the subscript t denotes time. The effect of autocorrelation on inference is that it biases the standard errors, and hence distorts the t-statistics and F-statistics. Most macroeconomic time series tend to exhibit positive autocorrelation, which tends to bias the standard errors downward. The t-statistics are then inflated, raising the possibility of rejecting a null hypothesis when it is true (a Type I error). In the following paper, we provide an example of a model that exhibits serially correlated error terms. We test for autocorrelation and attempt to correct for it. A Cochrane-Orcutt procedure with an autoregressive error process of order 2 was found to be the best of the models considered.
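As an illustration of the mechanism (on simulated data, not the paper's series; all names and parameter values here are invented for the example), a minimal pure-Python sketch shows how positively autocorrelated errors pull the Durbin-Watson statistic well below its no-autocorrelation benchmark of 2:

```python
import random

def durbin_watson(resid):
    """DW d-statistic: sum of squared first differences over sum of squares."""
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    den = sum(e ** 2 for e in resid)
    return num / den

random.seed(0)
n, rho = 50, 0.8  # illustrative sample size and AR(1) coefficient

# generate AR(1) errors: u_t = rho * u_{t-1} + eps_t
u = [0.0]
for _ in range(n - 1):
    u.append(rho * u[-1] + random.gauss(0, 1))

x = [float(t) for t in range(n)]
y = [1.0 + 0.5 * x[t] + u[t] for t in range(n)]

# simple OLS of y on x
xbar, ybar = sum(x) / n, sum(y) / n
beta = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
        / sum((xi - xbar) ** 2 for xi in x))
alpha = ybar - beta * xbar
resid = [y[t] - (alpha + beta * x[t]) for t in range(n)]

d = durbin_watson(resid)
# d near 0 indicates strong positive autocorrelation; d near 2 indicates none
print(round(d, 2))
```

With a strongly positive AR(1) coefficient the computed d falls far short of 2, which is exactly the pattern the formal test exploits.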

The objective of this research is to model the rate of change of consumption (proxied by its natural logarithm) in the UK over the period 1948 to 1997. A simple (specific-to-general) modelling approach was used, whereby three variables were initially used to model the logarithm of consumption. The three explanatory variables were the logarithm of income, the inflation rate and the logarithm of the interest rate. Ordinary Least Squares (OLS) is used initially. The original equation to be estimated is:
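The equation itself does not survive in the text; from the variables listed above, a plausible form (the coefficient labels are illustrative, not the paper's) would be

ln Ct = b0 + b1 ln Yt + b2 pt + b3 ln Rt + ut

where C is consumption, Y income, p the inflation rate, R the interest rate, and u the error term.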
The results reveal the problem of positive serial autocorrelation. Indeed, the Durbin-Watson d-statistic implies that the null hypothesis of no serial correlation is rejected in favour of positive serial correlation (ρ > 0): the calculated d (0.396) is less than the lower critical value (1.38) for n = 49 observations and k = 4 regressors. Hence we cannot adequately perform inference based on the t-statistics and F-statistics, since the standard errors of the coefficients are understated. Although at a glance the t-statistics and F-statistics offer reasonably good results, the possibility of making Type I errors exists. As a further informal check, the residuals are plotted.
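The size of the implied autocorrelation can be read off the d-statistic through the standard approximation d ≈ 2(1 − ρ̂). With d = 0.396 this gives ρ̂ ≈ 1 − 0.396/2 ≈ 0.80, i.e. strongly positive first-order autocorrelation in the residuals.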

In a nutshell, the above analysis has shown that the regression equation suffers from autocorrelation, so inference from the ordinary least squares model would yield invalid conclusions. We therefore attempted to remove the autocorrelation by considering three procedures, namely Cochrane-Orcutt with one lagged error term, Cochrane-Orcutt with two lagged error terms and, finally, adding a lagged dependent variable to the model. We found that the previously positive autocorrelation among the error terms was eliminated only in the Cochrane-Orcutt model with an AR(2) error process. It is worth noting, however, that all three models improve on the conventional Ordinary Least Squares results. The statistics used to detect autocorrelation were the Durbin-Watson d statistic in the first three models and Durbin's h statistic in the model with the lagged dependent variable.
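The Cochrane-Orcutt procedure with one lagged error term can be sketched as follows. This is a minimal pure-Python illustration on simulated data (variable names and values are invented for the example), not the paper's estimation:

```python
import random

def ols(x, y):
    """Bivariate OLS; returns intercept, slope and residuals."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    a = ybar - b * xbar
    return a, b, [yi - (a + b * xi) for xi, yi in zip(x, y)]

def cochrane_orcutt(x, y, iterations=10):
    """Iterative Cochrane-Orcutt assuming an AR(1) error process."""
    a, b, e = ols(x, y)
    rho = 0.0
    for _ in range(iterations):
        # step 1: estimate rho by regressing residuals on their own lag
        rho = (sum(e[t] * e[t - 1] for t in range(1, len(e)))
               / sum(ei ** 2 for ei in e[:-1]))
        # step 2: quasi-difference the data and re-run OLS
        xs = [x[t] - rho * x[t - 1] for t in range(1, len(x))]
        ys = [y[t] - rho * y[t - 1] for t in range(1, len(y))]
        a_star, b, _ = ols(xs, ys)
        a = a_star / (1 - rho)  # recover the original intercept
        # step 3: recompute residuals of the original (undifferenced) equation
        e = [y[t] - (a + b * x[t]) for t in range(len(x))]
    return a, b, rho

# simulated data with AR(1) errors (rho = 0.7); purely illustrative
random.seed(1)
n = 60
u = [0.0]
for _ in range(n - 1):
    u.append(0.7 * u[-1] + random.gauss(0, 1))
x = [float(t) for t in range(n)]
y = [2.0 + 0.5 * x[t] + u[t] for t in range(n)]

a_hat, b_hat, rho_hat = cochrane_orcutt(x, y)
```

Iterating steps 1-3 until ρ̂ stabilises gives the iterative variant; extending the quasi-differencing in step 2 to two lagged terms yields the AR(2) version preferred above.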

However, we note that many issues other than autocorrelation are worth taking into account. Given the small sample size of the regression model, reliable inference is limited and the tests for autocorrelation may lack power. A possible remedy would be to use quarterly observations, which would raise the power of the autocorrelation tests. Moreover, it is worth mentioning that the conventional goodness-of-fit measures might be reduced once non-stationarity of the variables is taken into account.
