Eigen Decomposition

A chapter in Advanced Statistics for the Behavioral Sciences

Abstract

In Chap. 2, we learned how to decompose a rectangular matrix into an orthonormal basis Q and an upper triangular matrix R, and in Chap. 3 we applied the decomposition to a linear regression model. In this chapter you will learn a related decomposition that can create an orthonormal basis from a square, symmetric matrix. The decomposition is known as the eigen decomposition, and it has applications across a range of problems in math, science, and engineering.
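As a concrete illustration of the idea (a NumPy sketch; the book's own code examples are written in R), a square, symmetric matrix A factors into an orthonormal basis P and a diagonal matrix of eigenvalues L, with A = PLP′:

```python
import numpy as np

# A square, symmetric matrix (values chosen only for illustration)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

vals, P = np.linalg.eigh(A)   # eigh handles symmetric matrices
L = np.diag(vals)

# P is orthonormal (P'P = I) and A is reconstructed exactly
print(np.allclose(P.T @ P, np.eye(2)))   # True
print(np.allclose(P @ L @ P.T, A))       # True
```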


Notes

  1. The eigen decomposition is sometimes called the spectral decomposition.

  2. All of the eigenvalues of a projection matrix are 1 or 0.
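This claim is easy to check numerically; a small sketch (the design matrix X below is made up for illustration):

```python
import numpy as np

# A 3 x 2 design matrix of full column rank (illustrative values)
X = np.array([[1.0, 2.0],
              [1.0, 0.0],
              [1.0, 1.0]])

# Projection matrix onto the column space of X
P = X @ np.linalg.inv(X.T @ X) @ X.T

# P is symmetric, so eigvalsh applies; eigenvalues are (numerically) 0, 1, 1
vals = np.sort(np.linalg.eigvalsh(P))
print(np.allclose(vals, [0.0, 1.0, 1.0]))   # True
```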

  3. By convention, the largest eigenvalue is listed first, with each successive eigenvalue decreasing in magnitude.

  4. We can eliminate the denominator in Eq. (4.7) if our eigenvector has unit length.
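Eq. (4.7) is not reproduced in this preview; it is presumably the Rayleigh quotient, λ = (x′Ax)/(x′x). When x has unit length, the denominator x′x = 1 and can be dropped:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eigh(A)
x = vecs[:, 1]                    # a unit-length eigenvector of A

full    = (x @ A @ x) / (x @ x)   # Rayleigh quotient with denominator
reduced =  x @ A @ x              # denominator dropped, since x'x = 1
print(np.isclose(full, reduced))  # True
print(np.isclose(full, vals[1]))  # True: both equal the eigenvalue
```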

  5. The R code that accompanies this section outputs an orthonormal matrix, P, such that P′P = I and A = PHP′.

  6. The version we will learn, known as the single-shift Francis algorithm, is appropriate for matrices with real eigenvalues. A double-shift version is used for matrices with complex eigenvalues. Details can be found in Golub and Van Loan (2013).
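A minimal sketch of the single-shift idea for a symmetric matrix with real eigenvalues (this is not the book's implementation; a practical Francis algorithm also reduces the matrix to Hessenberg or tridiagonal form and deflates as it converges):

```python
import numpy as np

def shifted_qr_eigenvalues(A, iters=100):
    """Shifted QR iteration: factor A - mu*I = QR, then set A = RQ + mu*I."""
    A = A.copy()
    n = A.shape[0]
    for _ in range(iters):
        mu = A[n - 1, n - 1]                     # shift: bottom-right entry
        Q, R = np.linalg.qr(A - mu * np.eye(n))
        A = R @ Q + mu * np.eye(n)               # similarity transform of A
    return np.sort(np.diag(A))                   # eigenvalues on the diagonal

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
print(shifted_qr_eigenvalues(A))   # approximates the sorted eigenvalues of A
```

Each step is a similarity transform, so the eigenvalues are preserved while the off-diagonal entries shrink toward zero.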

  7. Using the LU decomposition from Chap. 1, the R code that accompanies this section can be set to find the smallest eigen pair as well.
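The idea behind this note is inverse iteration: running the power method on A⁻¹ converges to the eigen pair whose eigenvalue is smallest in magnitude. (The book reuses an LU factorization to apply A⁻¹; in this sketch, np.linalg.solve stands in for that solve step.)

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

x = np.ones(A.shape[0])
for _ in range(50):
    x = np.linalg.solve(A, x)    # x <- A^{-1} x (power method on A inverse)
    x = x / np.linalg.norm(x)    # renormalize to prevent under/overflow

smallest = x @ A @ x             # Rayleigh quotient of the limit vector
print(round(smallest, 4))        # -> 2.382, i.e. (7 - sqrt(5)) / 2
```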

  8. This example is commonly used to illustrate the predator-prey model.

  9. The actual algorithm that Google uses is a bit more complicated than the one presented here. For example, it includes a damping parameter to model the likelihood that the surfer will simply stay on the current page or exit her browser. The true value is proprietary, but the suspected value is .85.
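The damped power iteration described here can be sketched as follows (the link matrix is made up, and the damping value .85 is the suspected value mentioned above, not a confirmed parameter):

```python
import numpy as np

d = 0.85   # damping parameter (suspected value; the true value is proprietary)

# Column-stochastic link matrix: column j lists where page j links to
M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
n = M.shape[0]

# Damped transition matrix: follow a link with probability d,
# otherwise jump to a uniformly random page
G = d * M + (1 - d) * np.ones((n, n)) / n

r = np.ones(n) / n
for _ in range(100):
    r = G @ r                 # power iteration toward the stationary ranks

print(np.round(r, 4))         # page ranks; they sum to 1
```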

  10. Because more than one eigenvector can be chosen to begin the decomposition, the Schur decomposition does not produce a unique solution.

  11. With some modifications, the Schur decomposition can also be performed using the QR algorithm presented in Section 4.2.5 or the Francis algorithm presented in Section 4.2.7.
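For reference, a Schur decomposition A = QTQ′ (Q orthogonal, T upper triangular with the eigenvalues of A on its diagonal) can be computed directly; a sketch using SciPy's schur routine, with an illustrative matrix:

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

T, Q = schur(A)   # T upper triangular, Q orthogonal

print(np.allclose(Q @ T @ Q.T, A))               # True: A is recovered
print(np.allclose(np.sort(np.diag(T)), [2, 3]))  # True: eigenvalues on diag(T)
```

As the note above says, the factors are not unique: a different choice of starting eigenvector (equivalently, a different eigenvalue ordering along the diagonal of T) yields a different but equally valid Q and T.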

Reference

  • Golub, G. H., & Van Loan, C. F. (2013). Matrix computations (4th ed.). Baltimore: Johns Hopkins University Press.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature


Cite this chapter

Brown, J.D. (2018). Eigen Decomposition. In: Advanced Statistics for the Behavioral Sciences. Springer, Cham. https://doi.org/10.1007/978-3-319-93549-2_4
