
Distribution of the Durbin–Watson Statistic in Near Integrated Processes

Empirical Economic and Financial Research

Part of the book series: Advanced Studies in Theoretical and Applied Econometrics ((ASTA,volume 48))


Abstract

This paper analyzes the Durbin–Watson (DW) statistic for near-integrated processes. Using the Fredholm approach, we derive the limiting characteristic function of DW, focusing in particular on the effect of a “large initial condition” that grows with the sample size. Random and deterministic initial conditions are distinguished. We document the asymptotic local power of DW when testing for integration.


Notes

  1. Further applications of this approach in a similar context are due to Nabeya (2000) for seasonal unit roots, and to Kurozumi (2002) and Presno and López (2003) for stationarity testing.

  2. Nabeya (2000, 2001) extends the Fredholm approach to cover a class of discontinuous kernels.

  3. For a more extensive discussion of integral equations of the Fredholm type, we refer to Hochstadt (1973) and Kondo (1991).

  4. Nabeya and Tanaka (1988) use a similar method to find the Fredholm determinant (FD) of kernels of the general form \(K\left (s,t\right ) =\min \left (s,t\right ) +\sum \nolimits _{ k=1}^{n}\phi _{k}\left (s\right )\psi _{k}\left (t\right )\). See page 148 of Tanaka (1996) for some examples.

  5. The inversion formula (14) is derived in Gurland (1948) and Gil-Pelaez (1951).
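Formula (14) itself is not reproduced in this excerpt, but the Gil-Pelaez (1951) inversion it builds on can be sketched numerically. The sketch below assumes the standard form \(F(x) = 1/2 - \pi^{-1}\int _{0}^{\infty }\mathrm{Im}\left [e^{-itx}\varphi (t)\right ]/t\,dt\); the function name and the truncation of the integral are ours, chosen for illustration only:

```python
import numpy as np

def gil_pelaez_cdf(x, cf, upper=40.0, n=200_001):
    """CDF at x recovered from a characteristic function cf via the
    Gil-Pelaez (1951) inversion formula:
        F(x) = 1/2 - (1/pi) * int_0^inf Im[exp(-i*t*x) * cf(t)] / t dt
    The integral is truncated at `upper` and evaluated by the trapezoidal rule."""
    t = np.linspace(1e-8, upper, n)
    integrand = np.imag(np.exp(-1j * t * x) * cf(t)) / t
    integral = np.sum((integrand[:-1] + integrand[1:]) * np.diff(t)) / 2.0
    return 0.5 - integral / np.pi

# sanity check against the standard normal, whose cf is exp(-t^2 / 2)
cf_normal = lambda t: np.exp(-0.5 * t ** 2)
print(gil_pelaez_cdf(0.0, cf_normal))   # -> 0.5 (integrand vanishes for real cf)
print(gil_pelaez_cdf(1.96, cf_normal))  # -> approx 0.975
```

In the paper's setting the same routine would be fed the limiting characteristic function of DW in place of `cf_normal`.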

References

  • Bhargava, A. (1986). On the theory of testing for unit roots in observed time series. Review of Economic Studies, 53, 369–384.

  • Durbin, J., & Watson, G. S. (1950). Testing for serial correlation in least squares regression. I. Biometrika, 37, 409–428.

  • Durbin, J., & Watson, G. S. (1951). Testing for serial correlation in least squares regression. II. Biometrika, 38, 159–178.

  • Durbin, J., & Watson, G. S. (1971). Testing for serial correlation in least squares regression. III. Biometrika, 58, 1–19.

  • Elliott, G., & Müller, U. K. (2006). Minimizing the impact of the initial condition on testing for unit roots. Journal of Econometrics, 135, 285–310.

  • Gil-Pelaez, J. (1951). Note on the inversion theorem. Biometrika, 38, 481–482.

  • Girsanov, I. (1960). On transforming a certain class of stochastic processes by absolutely continuous substitution of measures. Theory of Probability and Its Applications, 5, 285–301.

  • Gurland, J. (1948). Inversion formulae for the distribution of ratios. Annals of Mathematical Statistics, 19, 228–237.

  • Hamilton, J. D. (1994). Time series analysis. Princeton: Princeton University Press.

  • Harvey, D. I., Leybourne, S. J., & Taylor, A. M. R. (2009). Unit root testing in practice: Dealing with uncertainty over the trend and initial condition. Econometric Theory, 25, 587–636.

  • Hisamatsu, H., & Maekawa, K. (1994). The distribution of the Durbin–Watson statistic in integrated and near-integrated models. Journal of Econometrics, 61, 367–382.

  • Hochstadt, H. (1973). Integral equations. New York: Wiley.

  • Imhof, J. P. (1961). Computing the distribution of quadratic forms in normal variables. Biometrika, 48, 419–426.

  • Kondo, J. (1991). Integral equations. Oxford: Clarendon Press.

  • Kurozumi, E. (2002). Testing for stationarity with a break. Journal of Econometrics, 108, 63–99.

  • Müller, U. K., & Elliott, G. (2003). Tests for unit roots and the initial condition. Econometrica, 71, 1269–1286.

  • Nabeya, S. (2000). Asymptotic distributions for unit root test statistics in nearly integrated seasonal autoregressive models. Econometric Theory, 16, 200–230.

  • Nabeya, S. (2001). Approximation to the limiting distribution of t- and F-statistics in testing for seasonal unit roots. Econometric Theory, 17, 711–737.

  • Nabeya, S., & Tanaka, K. (1988). Asymptotic theory of a test for the constancy of regression coefficients against the random walk alternative. Annals of Statistics, 16, 218–235.

  • Nabeya, S., & Tanaka, K. (1990a). A general approach to the limiting distribution for estimators in time series regression with nonstable autoregressive errors. Econometrica, 58, 145–163.

  • Nabeya, S., & Tanaka, K. (1990b). Limiting power of unit-root tests in time-series regression. Journal of Econometrics, 46, 247–271.

  • Phillips, P. C. B. (1987). Towards a unified asymptotic theory for autoregressions. Biometrika, 74, 535–547.

  • Phillips, P. C. B., & Solo, V. (1992). Asymptotics for linear processes. The Annals of Statistics, 20, 971–1001.

  • Presno, M. J., & López, A. J. (2003). Testing for stationarity in series with a shift in the mean. A Fredholm approach. Test, 12, 195–213.

  • Sargan, J. D., & Bhargava, A. (1983). Testing residuals from least squares regression for being generated by the Gaussian random walk. Econometrica, 51, 153–174.

  • Tanaka, K. (1990). The Fredholm approach to asymptotic inference on nonstationary and noninvertible time series models. Econometric Theory, 6, 411–432.

  • Tanaka, K. (1996). Time series analysis: Nonstationary and noninvertible distribution theory. New York: Wiley.

  • White, J. S. (1958). The limiting distribution of the serial correlation coefficient in the explosive case. Annals of Mathematical Statistics, 29, 1188–1197.


Author information


Corresponding author

Correspondence to Uwe Hassler.


Appendix

First we present a preliminary result. Lemma 2 contains the required limiting distributions in terms of Riemann integrals.

Lemma 2

Let \(\left \{y_{t}\right \}\) be generated according to (1) and satisfy Assumptions 1 and 2. Then, for the test statistics from (2), it holds asymptotically that

$$\displaystyle{ \mathit{DW }_{j,T}\mathop{ \rightarrow }\limits^{ D}\mathit{DW }_{j} = \left (\int \left \{w_{c}^{j}\left (r\right )\right \}^{2}dr\right )^{-1}\text{, }j =\mu,\ \tau \text{,} }$$

where under Assumption  3 b)

$$\displaystyle\begin{array}{rcl} w_{c}^{\mu }\left (r\right )& =& w_{ c}\left (r\right ) -\int w_{c}\left (s\right )\mathit{ds}, {}\\ w_{c}^{\tau }\left (r\right )& =& w_{ c}^{\mu }\left (r\right ) - 12\left (r -\frac{1} {2}\right )\int \left (s -\frac{1} {2}\right )w_{c}\left (s\right )\mathit{ds}, {}\\ \end{array}$$

with \(w_{c}\left (r\right ) = w\left (r\right )\)  for c = 0 and

$$\displaystyle{ w_{c}\left (r\right ) =\delta \, \left (e^{-cr} - 1\right )\left (2c\right )^{-1/2} + J_{ c}\left (r\right )\ \mbox{ for }c > 0 }$$

 and the standard Ornstein–Uhlenbeck process \(J_{c}\left (r\right ) =\int _{ 0}^{r}e^{-c\left (r-s\right )}\mathit{dw}\left (s\right )\) .

Proof

The proof is standard, using arguments similar to those in Phillips (1987) and Müller and Elliott (2003).
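Lemma 2 can be illustrated by simulation. Since model (1) and statistic (2) are not reproduced in this excerpt, the sketch below assumes the standard local-to-unity AR(1), \(y_{t} = (1 - c/T)y_{t-1} +\varepsilon _{t}\), with the demeaned Durbin–Watson statistic scaled by \(T\), and a zero initial condition (\(\delta = 0\)); all function names and tuning constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def dw_mu_scaled(c, T):
    """T-scaled Durbin-Watson statistic from demeaned residuals of a
    near-integrated AR(1) sample y_t = (1 - c/T) y_{t-1} + eps_t, y_0 = 0."""
    rho = 1.0 - c / T
    eps = rng.standard_normal(T)
    y = np.empty(T)
    y[0] = eps[0]
    for t in range(1, T):
        y[t] = rho * y[t - 1] + eps[t]
    e = y - y.mean()                 # residuals from regressing on a constant
    return T * np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

def dw_mu_limit(c, n=1000):
    """One draw from the limit DW_mu = (int {w_c^mu(r)}^2 dr)^(-1),
    using an Euler discretization of the Ornstein-Uhlenbeck process J_c."""
    dr = 1.0 / n
    j = np.zeros(n)
    shocks = rng.standard_normal(n) * np.sqrt(dr)
    for t in range(1, n):
        j[t] = (1.0 - c * dr) * j[t - 1] + shocks[t]
    jm = j - j.mean()                # demeaned OU path
    return 1.0 / (np.sum(jm ** 2) * dr)

draws_T = np.array([dw_mu_scaled(c=5.0, T=500) for _ in range(1000)])
draws_lim = np.array([dw_mu_limit(c=5.0) for _ in range(1000)])
print(np.median(draws_T), np.median(draws_lim))  # medians should be close
```

For \(c = 5\) and \(T = 500\) the medians of the finite-sample and limit draws agree up to Monte Carlo and discretization error, in line with the convergence stated in Lemma 2.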

1.1 Proof of Proposition 1

We set \(\upsilon = \sqrt{\lambda -c^{2}}\). For DW μ we have \(w_{c}^{\mu }\left (r\right ) = w_{c}\left (r\right ) -\int w_{c}\left (s\right )\mathit{ds}\), thus

$$\displaystyle\begin{array}{rcl} \mathit{Cov}\left [w_{c}^{\mu }\left (s\right ),w_{c}^{\mu }\left (t\right )\right ]& =& \mathit{Cov}\left [J_{c}\left (s\right ) -\int J_{c}\left (u\right )\mathit{du},J_{c}\left (t\right ) -\int J_{c}\left (u\right )\mathit{du}\right ] {}\\ & =& K_{1}\left (s,t\right ) -\int \mathit{Cov}\left [J_{c}\left (s\right ),J_{c}\left (t\right )\right ]\mathit{ds} -\int \mathit{Cov}\left [J_{c}\left (s\right ),J_{c}\left (t\right )\right ]\mathit{dt} {}\\ & & +\int \int \mathit{Cov}\left [J_{c}\left (s\right ),J_{c}\left (t\right )\right ]\mathit{ds}\,\mathit{dt} {}\\ & =& K_{1}\left (s,t\right ) - g\left (t\right ) - g\left (s\right ) +\omega _{0}\text{,} {}\\ \end{array}$$

where

$$\displaystyle\begin{array}{rcl} g\left (t\right )& =& \frac{e^{-c\left (1+t\right )}\left (1 - e^{\mathit{ct}}\right )\left (1 - 2e^{c} + e^{\mathit{ct}}\right )} {2c^{2}}, {}\\ \omega _{0}& =& -\frac{3 - 2c + e^{-2c} - 4e^{-c}} {2c^{3}}. {}\\ \end{array}$$
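The closed form for \(\omega _{0}\) can be checked numerically. Writing \(\mathit{Cov}\left [J_{c}\left (s\right ),J_{c}\left (t\right )\right ] = \left (e^{-c\left \vert s-t\right \vert } - e^{-c\left (s+t\right )}\right )/\left (2c\right )\) (the standard Ornstein–Uhlenbeck covariance), \(\omega _{0}\) is the double integral of this kernel over the unit square. A minimal sketch, with an illustrative grid size:

```python
import numpy as np

def cov_jc(s, t, c):
    """Covariance kernel of the standard Ornstein-Uhlenbeck process J_c:
    Cov[J_c(s), J_c(t)] = (exp(-c|s-t|) - exp(-c(s+t))) / (2c)."""
    return (np.exp(-c * np.abs(s - t)) - np.exp(-c * (s + t))) / (2.0 * c)

c = 2.0
n = 1000
grid = (np.arange(n) + 0.5) / n            # midpoint rule on [0, 1]
S, T = np.meshgrid(grid, grid)
omega0_numeric = cov_jc(S, T, c).mean()    # double integral over the unit square
omega0_closed = -(3 - 2 * c + np.exp(-2 * c) - 4 * np.exp(-c)) / (2 * c ** 3)
print(omega0_numeric, omega0_closed)       # the two agree closely
```

The same kind of check applies to \(g\left (t\right )\), which is the single integral \(\int \mathit{Cov}\left [J_{c}\left (s\right ),J_{c}\left (t\right )\right ]\mathit{ds}\).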

For DW τ we have \(w_{c}^{\tau }\left (s\right ) = w_{c}^{\mu }\left (s\right ) - 12\left (s -\frac{1} {2}\right )\int \left (u -\frac{1} {2}\right )w_{c}\left (u\right )\mathit{du}\), thus

$$\displaystyle\begin{array}{rcl} \mathit{Cov}\left [w_{c}^{\tau }\left (s\right ),w_{ c}^{\tau }\left (t\right )\right ]& =& \mathit{Cov}\left [w_{ c}^{\mu }\left (s\right ),w_{ c}^{\mu }\left (t\right )\right ] - 12\left (t -\frac{1} {2}\right )\int \left (u -\frac{1} {2}\right ) {}\\ & & \mathit{Cov}\left [J_{c}\left (s\right ) -\int J_{c}\left (v\right )dv,J_{c}\left (u\right )\right ]\mathit{du} {}\\ & & -12\left (s -\frac{1} {2}\right )\int \left (u -\frac{1} {2}\right )\mathit{Cov}\left [J_{c}\left (u\right ),J_{c}\left (t\right )\right. {}\\ & & \left.-\int J_{c}\left (v\right )dv\right ]\mathit{du} {}\\ & & +144\left (s -\frac{1} {2}\right )\left (t -\frac{1} {2}\right )\int \int \left (u -\frac{1} {2}\right )\left (v -\frac{1} {2}\right ) {}\\ & & \times \mathit{Cov}\left [J_{c}\left (u\right ),J_{c}\left (v\right )\right ]\mathit{dudv} {}\\ & =& \mathit{Cov}\left [w_{c}^{\mu }\left (s\right ),w_{ c}^{\mu }\left (t\right )\right ] {}\\ & & -12\left (t -\frac{1} {2}\right )\int \left (u -\frac{1} {2}\right )\mathit{Cov}\left [J_{c}\left (s\right ),J_{c}\left (u\right )\right ]\mathit{du} {}\\ & & +12\left (t -\frac{1} {2}\right )\int \int \left (u -\frac{1} {2}\right )\mathit{Cov}\left [J_{c}\left (v\right ),J_{c}\left (u\right )\right ]\mathit{dvdu} {}\\ & & -12\left (s -\frac{1} {2}\right )\int \left (u -\frac{1} {2}\right )\mathit{Cov}\left [J_{c}\left (u\right ),J_{c}\left (t\right )\right ]\mathit{du} {}\\ & & +12\left (s -\frac{1} {2}\right )\int \int \left (u -\frac{1} {2}\right )\mathit{Cov}\left [J_{c}\left (u\right ),J_{c}\left (v\right )\right ]\mathit{dvdu} {}\\ & & +144\left (s -\frac{1} {2}\right )\left (t -\frac{1} {2}\right )\int \int \left (u -\frac{1} {2}\right )\left (v -\frac{1} {2}\right ) {}\\ & & \times \mathit{Cov}\left [J_{c}\left (u\right ),J_{c}\left (v\right )\right ]\mathit{dudv} {}\\ \end{array}$$

With some calculus the desired result is obtained. In particular we have with \(\phi _{1}\left (s\right ) = -1\), \(\phi _{2}\left (s\right ) = -g\left (s\right )\), \(\phi _{3}\left (s\right ) = -3f_{1}\left (s\right )\), \(\phi _{4}\left (s\right ) = -3\left (s - 1/2\right )\), \(\phi _{5}\left (s\right ) = 3\omega _{1}\), \(\phi _{6}\left (s\right ) = 3\omega _{1}\left (s - 1/2\right )\), \(\phi _{7}\left (s\right ) = 6\omega _{2}\left (s - 1/2\right )\), \(\phi _{8}\left (s\right ) =\omega _{0}\), \(\psi _{1}\left (t\right ) = g\left (t\right )\), \(\psi _{2}\left (t\right ) =\psi _{6}\left (t\right ) =\psi _{8}\left (t\right ) = 1\), \(\psi _{3}\left (t\right ) =\psi _{5}\left (t\right ) =\psi _{7}\left (t\right ) = t - 1/2\) and \(\psi _{4}\left (t\right ) = f_{1}\left (t\right )\) while

$$\displaystyle{ f_{1}\left (s\right ) = \frac{e^{-c\left (1+s\right )}} {c^{3}} \times \left [2 + c + 2ce^{c} -\left (2 + c\right )e^{2cs} + 2ce^{c+cs}\left (2s - 1\right )\right ], }$$
$$\displaystyle\begin{array}{rcl} \omega _{1}& =& \frac{e^{-2c}\left (e^{c} - 1\right )} {c^{4}} \times \left [2 + c + \left (c - 2\right )e^{c}\right ]\text{,} {}\\ \omega _{2}& =& \frac{e^{-2c}} {c^{5}} \times \left [-3\left (c + 2\right )^{2} - 12c\left (2 + c\right )e^{c} + \left (12c - 9c^{2} + 2c^{3} + 12\right )e^{2c}\right ]. {}\\ \end{array}$$

This completes the proof.

1.2 Proof of Proposition 2

Let \(\mathcal{L}(X) = \mathcal{L}(Y )\) stand for equality in distribution of X and Y and set \(A =\delta \left (2c\right )^{-1/2}\). To begin with, we carry out the proofs conditional on \(\delta \). Consider first DW μ . To shorten the proof for DW μ we work with the following representation of a demeaned Ornstein–Uhlenbeck process, given under Theorem 3 of Nabeya and Tanaka (1990b) for their \(R_{1}^{\left (2\right )}\) test statistic, i.e. we write

$$\displaystyle{ \mathcal{L}\left (\int \left \{J_{c}\left (r\right ) -\int J_{c}\left (s\right )\mathit{ds}\right \}^{2}dr\right ) = \mathcal{L}\left (\int \int K_{ 0}\left (s,t\right )\mathit{dw}\left (t\right )\mathit{dw}\left (s\right )\right ), }$$

where \(K_{0}\left (s,t\right ) = \frac{1} {2c}\left [e^{-c\left \vert s-t\right \vert } - e^{-c\left (2-s-t\right )}\right ] - \frac{1} {c^{2}} p\left (s\right )p\left (t\right )\) with \(p\left (t\right ) = 1 - e^{-c\left (1-t\right )}\). Using Lemma 2, we find that

$$\displaystyle{ n_{\mu }\left (t\right ) = A\left (e^{-ct} - 1\right ) - A\int \left (e^{-cs} - 1\right )\mathit{ds}. }$$

For DW μ we look for \(h_{\mu }\left (t\right )\) solving

$$\displaystyle{ h_{\mu }\left (t\right ) = m_{\mu }\left (t\right ) +\lambda \int K_{0}\left (s,t\right )h_{\mu }\left (s\right )\mathit{ds}, }$$
(15)

where \(m_{\mu }\left (t\right ) =\int K_{0}\left (s,t\right )n_{\mu }\left (s\right )\mathit{ds}\). Equation (15) is equivalent to the following differential equation with boundary conditions

$$\displaystyle{ h_{\mu }^{{\prime\prime}}\left (t\right ) +\upsilon ^{2}h_{\mu }\left (t\right ) = m_{\mu }^{{\prime\prime}}\left (t\right ) - c^{2}m_{\mu }\left (t\right ) +\lambda b_{ 1}\text{,} }$$
(16)

with

$$\displaystyle\begin{array}{rcl} h_{\mu }\left (1\right )& =& m_{\mu }\left (1\right ) - \frac{1} {c^{2}}\lambda b_{1}p\left (1\right ),{}\end{array}$$
(17)
$$\displaystyle\begin{array}{rcl} h_{\mu }^{{\prime}}\left (1\right )& =& m_{\mu }^{{\prime}}\left (1\right ) -\lambda e^{-c}b_{ 2} - \frac{1} {c^{2}}\lambda p^{{\prime}}\left (1\right )b_{ 1},{}\end{array}$$
(18)

where \(b_{1} =\int p\left (s\right )h_{\mu }\left (s\right )\mathit{ds}\) and \(b_{2} =\int e^{\mathit{cs}}h_{\mu }\left (s\right )\mathit{ds}\). Thus we have

$$\displaystyle{ h_{\mu }\left (t\right ) = c_{1}^{\mu }\cos \upsilon t + c_{ 2}^{\mu }\sin \upsilon t + g_{\mu }\left (t\right ) + b_{ 1}g_{1}\left (t\right ), }$$

where \(g_{\mu }\left (t\right )\) is a special solution to \(g_{\mu }^{{\prime\prime}}\left (t\right ) +\upsilon ^{2}g_{\mu }\left (t\right ) = m_{\mu }^{{\prime\prime}}\left (t\right ) - c^{2}m_{\mu }\left (t\right )\) and \(g_{1}\left (t\right )\) is a special solution to \(g_{1}^{{\prime\prime}}\left (t\right ) +\upsilon ^{2}g_{1}\left (t\right ) =\lambda\). Boundary conditions (17) and (18) together with \(h_{\mu }\left (t\right )\) imply

$$\displaystyle\begin{array}{rcl} c_{1}^{\mu }\cos \upsilon + c_{2}^{\mu }\sin \upsilon + \left [g_{1}\left (1\right ) + \frac{1} {c^{2}}\lambda p\left (1\right )\right ]b_{1}& =& m_{\mu }\left (1\right ) - g_{\mu }\left (1\right ), {}\\ -c_{1}^{\mu }\upsilon \sin \upsilon + c_{2}^{\mu }\upsilon \cos \upsilon + \left [ \frac{1} {c^{2}}\lambda p^{{\prime}}\left (1\right ) + g_{1}^{{\prime}}\left (1\right )\right ]b_{1} +\lambda e^{-c}b_{2}& =& m_{\mu }^{{\prime}}\left (1\right ) - g_{\mu }^{{\prime}}\left (1\right ), {}\\ \end{array}$$

while expressions for b 1 and b 2 imply that

$$\displaystyle\begin{array}{rcl} & & c_{1}^{\mu }\int p\left (s\right )\cos \upsilon s\,\mathit{ds} + c_{2}^{\mu }\int p\left (s\right )\sin \upsilon s\,\mathit{ds} + b_{1}\left (\int p\left (s\right )g_{1}\left (s\right )\mathit{ds} - 1\right ) {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad = -\int p\left (s\right )g_{\mu }\left (s\right )\mathit{ds}, {}\\ & & c_{1}^{\mu }\int e^{\mathit{cs}}\cos \upsilon s\,\mathit{ds} + c_{2}^{\mu }\int e^{\mathit{cs}}\sin \upsilon s\,\mathit{ds} + b_{1}\int e^{\mathit{cs}}g_{1}\left (s\right )\mathit{ds} - b_{2} {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad = -\int e^{\mathit{cs}}g_{\mu }\left (s\right )\mathit{ds}. {}\\ \end{array}$$

These form a system of linear equations in c 1 μ, c 2 μ, b 1, and b 2, which in turn identifies them. With some calculus we write

$$\displaystyle\begin{array}{rcl} \int n_{\mu }\left (t\right )h_{\mu }\left (t\right )\mathit{dt}& =& \frac{Ae^{-2c}} {2c^{2}\lambda \upsilon } \times {}\\ & &\left [-A\left (-1 + e^{c}\right )\left (2 + c + (-2 + c)e^{c}\right )\right. {}\\ & & +2e^{c}\left (cc_{ 2}^{\mu }\lambda - ce^{c}\left (c^{2}c_{ 2}^{\mu } - c^{2}c_{ 1}^{\mu } - (-1 + c)c_{ 2}^{\mu }\upsilon ^{2}\right )\right ) {}\\ & & +2ce^{c}\left (c^{3}c_{ 2}^{\mu } - cc_{ 2}^{\mu }\lambda + c_{ 2}^{\mu }\left (-1 + e^{c}\right )\lambda - c^{2}c_{ 1}^{\mu }\upsilon \right )\cos \upsilon {}\\ & & \left.-2ce^{c}\left (c^{3}c_{ 1}^{\mu } - cc_{ 1}^{\mu }\lambda + c_{ 1}^{\mu }\left (-1 + e^{c}\right )\lambda + c^{2}c_{ 2}^{\mu }\upsilon \right )\sin \upsilon \right ]. {}\\ \end{array}$$

Solving for c 1 μ and c 2 μ we find that they are both a multiple of A, hence

$$\displaystyle{ \varPsi _{\mu }\left (\theta;c\right ) = \frac{1} {A^{2}}\int n_{\mu }\left (t\right )h_{\mu }\left (t\right )\mathit{dt}\text{,} }$$

is free of A. Now with \(\varTheta _{\mu } =\int n_{\mu }^{2}\left (t\right )\mathit{dt}\), an application of Lemma 1 results in

$$\displaystyle{ E\left [e^{i\theta \int \left \{w_{c}^{\mu }\left (r\right )\right \}^{2}dr }\vert \delta \right ] = \left [D_{\mu }\left (2i\theta \right )\right ]^{-1/2}\exp \left [i\theta A^{2}\varTheta _{ \mu } - 2\theta ^{2}A^{2}\varPsi _{ \mu }\left (\theta;c\right )\right ]. }$$

As \(\sqrt{2c}A =\delta \sim N\left (\mu _{\delta },\sigma _{\delta }^{2}\right )\), standard manipulations complete the proof for j = μ.
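The “standard manipulations” integrate the conditional characteristic function against the \(N\left (\mu _{\delta },\sigma _{\delta }^{2}\right )\) density of \(\delta \). Since \(A^{2} =\delta ^{2}/(2c)\), the exponent above is quadratic in \(\delta \), so the step reduces to the following standard identity (stated here for completeness; it is assumed that the relevant branch condition on \(1 - 2z\sigma _{\delta }^{2}\) holds):

$$\displaystyle{ E\left [e^{z\delta ^{2} }\right ] = \left (1 - 2z\sigma _{\delta }^{2}\right )^{-1/2}\exp \left ( \frac{z\mu _{\delta }^{2}} {1 - 2z\sigma _{\delta }^{2}}\right )\text{, }\delta \sim N\left (\mu _{\delta },\sigma _{\delta }^{2}\right ), }$$

applied with \(z = \left (i\theta \varTheta _{\mu } - 2\theta ^{2}\varPsi _{\mu }\left (\theta;c\right )\right )/\left (2c\right )\), so that \(z\delta ^{2} = i\theta A^{2}\varTheta _{\mu } - 2\theta ^{2}A^{2}\varPsi _{\mu }\left (\theta;c\right )\).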

Next we turn to DW τ . Using Lemma 2 we find that

$$\displaystyle{ n_{\tau }\left (t\right ) = A\left [\left (e^{-\mathit{ct}} - 1\right ) -\int \left (e^{-\mathit{cs}} - 1\right )\mathit{ds} - 12\left (t - 1/2\right )\int \left (s - 1/2\right )\left (e^{-\mathit{cs}} - 1\right )\mathit{ds}\right ]. }$$

Here we look for \(h_{\tau }\left (t\right )\) solving

$$\displaystyle{ h_{\tau }\left (t\right ) = m_{\tau }\left (t\right ) +\lambda \int K_{\tau }\left (s,t\right )h_{\tau }\left (s\right )\mathit{ds}, }$$
(19)

where \(m_{\tau }\left (t\right ) =\int K_{\tau }\left (s,t\right )n_{\tau }\left (s\right )\mathit{ds}\) and \(K_{\tau }\left (s,t\right )\) is from Proposition 1. Equation (19) can be written as

$$\displaystyle{ h_{\tau }^{{\prime\prime}}\left (t\right ) +\upsilon ^{2}h_{\tau }\left (t\right ) = m_{\tau }^{{\prime\prime}}\left (t\right ) - c^{2}m_{\tau }\left (t\right ) +\lambda \sum \nolimits _{ k=1}^{8}b_{ k}\left [\psi _{k}^{{\prime\prime}}\left (t\right ) - c^{2}\psi _{ k}\left (t\right )\right ], }$$
(20)

with the following boundary conditions

$$\displaystyle\begin{array}{rcl} h_{\tau }\left (0\right )& =& m_{\tau }\left (0\right ) +\lambda \sum \nolimits _{ k=1}^{8}b_{ k}\psi _{k}\left (0\right ),{}\end{array}$$
(21)
$$\displaystyle\begin{array}{rcl} h_{\tau }^{{\prime}}\left (0\right )& =& m_{\tau }^{{\prime}}\left (0\right ) +\lambda \sum \nolimits _{ k=1}^{8}b_{ k}\psi _{k}^{{\prime}}\left (0\right ) +\lambda b_{ 9},{}\end{array}$$
(22)

where

$$\displaystyle{ b_{k} =\int \phi _{k}\left (s\right )h_{\tau }\left (s\right )\mathit{ds}\text{, }k = 1,\ldots,8\text{, and }b_{9} =\int e^{-cs}h_{\tau }\left (s\right )\mathit{ds}. }$$
(23)

The solution to (20) is

$$\displaystyle{ h_{\tau }\left (t\right ) = c_{1}^{\tau }\cos \upsilon t + c_{ 2}^{\tau }\sin \upsilon t + g_{\tau }\left (t\right ) +\sum \nolimits _{ k=1}^{8}b_{ k}g_{k}\left (t\right ) }$$
(24)

where \(g_{k}\left (t\right )\), \(k = 1,2,\ldots,8\), are special solutions to the following differential equations

$$\displaystyle{ g_{k}^{{\prime\prime}}\left (t\right ) +\upsilon ^{2}g_{ k}\left (t\right ) =\lambda \left [\psi _{k}^{{\prime\prime}}\left (t\right ) - c^{2}\psi _{ k}\left (t\right )\right ]\text{, }k = 1,2,\ldots,8, }$$

and \(g_{\tau }\left (t\right )\) is a special solution of \(g_{\tau }^{{\prime\prime}}\left (t\right ) +\upsilon ^{2}g_{\tau }\left (t\right ) = m_{\tau }^{{\prime\prime}}\left (t\right ) - c^{2}m_{\tau }\left (t\right )\). The solution given in (24) can be written as

$$\displaystyle\begin{array}{rcl} h_{\tau }\left (t\right )& =& c_{1}^{\tau }\cos \upsilon t + c_{ 2}^{\tau }\sin \upsilon t + g_{\tau }\left (t\right ) - b_{ 1} \frac{\lambda } {\upsilon ^{2}} -\frac{\lambda c^{2}} {\upsilon ^{2}} (b_{2} + b_{6} + b_{8}) \\ & +& \left (b_{3} + b_{5} + b_{7}\right )\lambda c^{2}\frac{1 - 2t} {2\upsilon ^{2}} + b_{4}\lambda \frac{1 - 2t} {2\upsilon ^{2}}. {}\end{array}$$
(25)

The boundary conditions in (21) and (22) imply

$$\displaystyle\begin{array}{rcl} m_{\tau }\left (0\right ) - g_{\tau }\left (0\right )& =& c_{1}^{\tau } + \frac{\lambda } {2\upsilon ^{2}}\left (-2b_{1} + b_{4}\right ) {}\\ & & + \frac{\lambda } {2}\left (\frac{c^{2}} {\upsilon ^{2}} + 1\right )\left (-2b_{2} + b_{3} + b_{5} - 2b_{6} + b_{7} - 2b_{8}\right ) {}\\ \end{array}$$
$$\displaystyle\begin{array}{rcl} m_{\tau }^{{\prime}}\left (0\right ) - g_{\tau }^{{\prime}}\left (0\right )& =& c_{ 2}^{\tau }\upsilon -\lambda b_{ 1}g^{{\prime}}\left (0\right ) -\lambda \left (\frac{c^{2}} {\upsilon ^{2}} + 1\right )\left (b_{3} + b_{5} + b_{7}\right ) {}\\ & & -\left (\frac{1} {\upsilon ^{2}} + f^{{\prime}}\left (0\right )\right )\lambda b_{ 4} -\lambda b_{5}, {}\\ \end{array}$$

while the expressions given under (23) characterize nine more equations. Together these form a system of linear equations in the unknowns \(c_{1}^{\tau },\ c_{2}^{\tau },\ b_{1},\ \ldots,\ b_{9}\), which can be solved to fully identify (25). Let \(\varTheta _{\tau } =\int n_{\tau }\left (t\right )^{2}\mathit{dt}\). As in the constant case, we set

$$\displaystyle{ \varPsi _{\tau }\left (\theta;c\right ) = \frac{1} {A^{2}}\int n_{\tau }\left (t\right )h_{\tau }\left (t\right )\mathit{dt}\text{,} }$$

whose expression is lengthy and therefore not reported here. Solving this integral shows that \(\varPsi _{\tau }\left (\theta;c\right )\) is free of A. As before, we apply Lemma 1 to establish

$$\displaystyle{ E\left [e^{i\theta \int \left \{w_{c}^{\tau }\left (r\right )\right \}^{2}dr }\vert \delta \right ] = \left [D_{\tau }\left (2i\theta \right )\right ]^{-1/2}\exp \left [i\theta A^{2}\varTheta _{ \tau } - 2\theta ^{2}A^{2}\varPsi _{ \tau }\left (\theta;c\right )\right ]. }$$

Now, using \(E\left [e^{i\theta \int \left \{w_{c}^{\tau }\left (r\right )\right \}^{2}dr }\right ] = EE\left [e^{i\theta \int \left \{w_{c}^{\tau }\left (r\right )\right \}^{2}dr }\vert \delta \right ]\), standard manipulations complete the proof.


Copyright information

© 2015 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Hassler, U., Hosseinkouchack, M. (2015). Distribution of the Durbin–Watson Statistic in Near Integrated Processes. In: Beran, J., Feng, Y., Hebbel, H. (eds) Empirical Economic and Financial Research. Advanced Studies in Theoretical and Applied Econometrics, vol 48. Springer, Cham. https://doi.org/10.1007/978-3-319-03122-4_26
