
Sequential and Square-Root Algorithms


Abstract

It is now clear that the only time-consuming operation in the Kalman filtering process is the computation of the Kalman gain matrices.
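
For reference, the gain referred to here is computed at each step from the predicted error covariance, the measurement matrix, and the measurement noise covariance, and the matrix inversion in that formula is what dominates the cost of an iteration. The following NumPy sketch shows one such evaluation; the symbols P, C, and R follow common Kalman filtering usage and are an assumption here, not notation quoted from the chapter.

```python
import numpy as np

def kalman_gain(P_pred, C, R):
    """Gain G = P_pred C^T (C P_pred C^T + R)^{-1} for one filtering step."""
    S = C @ P_pred @ C.T + R                      # innovation covariance
    # Solve S^T G^T = (P_pred C^T)^T rather than forming S^{-1} explicitly.
    return np.linalg.solve(S.T, (P_pred @ C.T).T).T

# Tiny example: a 2-state model observed through a scalar measurement (assumed values).
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])     # predicted error covariance
C = np.array([[1.0, 0.0]])     # measurement matrix
R = np.array([[0.1]])          # measurement noise covariance
print(kalman_gain(P, C, R))    # 2x1 gain matrix
```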


Author information

Corresponding author

Correspondence to Charles K. Chui.

Exercises

7.1. Give a proof of Lemma 7.1.

7.2. Find the lower triangular matrix L that satisfies (see the numerical sketch following these exercises):

    (a) \(LL^{\top }=\left[\begin{array}{ccc} 1 & 2 & 3\\ 2 & 8 & 2\\ 3 & 2 & 14 \end{array}\right].\)

    (b) \(LL^{\top }=\left[\begin{array}{ccc} 1 & 1 & 1\\ 1 & 3 & 2\\ 1 & 2 & 4 \end{array}\right].\)

7.3.

    (a) Derive a formula to find the inverse of the matrix
        $$ L=\left[\begin{array}{ccc} \ell_{11} & 0 & 0\\ \ell_{21} & \ell_{22} & 0\\ \ell_{31} & \ell_{32} & \ell_{33} \end{array}\right], $$
        where \(\ell_{11}\), \(\ell_{22}\), and \(\ell_{33}\) are nonzero.

    (b) Formulate the inverse of
        $$ L=\left[\begin{array}{ccccc} \ell_{11} & 0 & 0 & \cdots & 0\\ \ell_{21} & \ell_{22} & 0 & \cdots & 0\\ \vdots & \vdots & \ddots & \ddots & \vdots\\ \vdots & \vdots & & \ddots & 0\\ \ell_{n1} & \ell_{n2} & \cdots & \cdots & \ell_{nn} \end{array}\right], $$
        where \(\ell_{11}, \ldots, \ell_{nn}\) are nonzero.

7.4. Consider the following computer simulation of the Kalman filtering process. Let \(\epsilon \ll 1\) be a small positive number such that
    $$ 1-\epsilon \not\simeq 1, \qquad 1-\epsilon^{2}\simeq 1, $$
    where “\(\simeq\)” denotes equality after rounding in the computer. Suppose that we have
    $$ P_{k,k}=\left[\begin{array}{cc} \dfrac{\epsilon^{2}}{1+\epsilon^{2}} & 0\\ 0 & 1 \end{array}\right]. $$
    Compare the standard Kalman filter with the square-root filter for this example (see the rounding demonstration following these exercises). Note that this example illustrates the improved numerical characteristics of the square-root filter.

7.5. Prove that for any symmetric positive definite matrix A, there is a unique upper triangular matrix \(A^{u}\) such that \(A=A^{u}(A^{u})^{\top }\).

7.6. Using the upper triangular decompositions instead of the lower triangular ones, derive a new square-root Kalman filter.

7.7. Combine the sequential algorithm and the square-root scheme with upper triangular decompositions to derive a new filtering algorithm.
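
Exercises 7.2 and 7.3 can be checked numerically. The sketch below is ours, not part of the text: it computes a lower triangular factor L with \(LL^{\top}=A\) by the standard Cholesky recursion and inverts a lower triangular matrix by forward substitution. The expected factor for 7.2(a) is noted in a comment; 7.2(b) can be run the same way.

```python
import numpy as np

def cholesky_lower(A):
    """Return lower triangular L with L @ L.T == A (A symmetric positive definite)."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        # Diagonal entry: square root of what remains of A[j, j].
        L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

def lower_triangular_inverse(L):
    """Invert a lower triangular matrix column by column via forward substitution."""
    n = L.shape[0]
    inv = np.zeros_like(L, dtype=float)
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        for i in range(n):
            inv[i, j] = (e[i] - L[i, :i] @ inv[:i, j]) / L[i, i]
    return inv

# Exercise 7.2(a): factor the given matrix.
A = np.array([[1.0, 2.0, 3.0], [2.0, 8.0, 2.0], [3.0, 2.0, 14.0]])
L = cholesky_lower(A)
print(L)                                                         # expected: [[1,0,0],[2,2,0],[3,-2,1]]
print(np.allclose(L @ lower_triangular_inverse(L), np.eye(3)))   # True
```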
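The premise of Exercise 7.4, that quantities of size \(\epsilon\) survive rounding while quantities of size \(\epsilon^{2}\) do not, is easy to reproduce in double precision. The value \(\epsilon = 10^{-10}\) below is an illustrative choice of ours, not a value given in the text.

```python
# Illustrative choice: roughly any eps with machine epsilon < eps < sqrt(machine epsilon)
# satisfies the two rounding conditions assumed in Exercise 7.4.
eps = 1.0e-10

print(1.0 - eps == 1.0)        # False: corrections of size eps are retained
print(1.0 - eps ** 2 == 1.0)   # True:  corrections of size eps**2 are rounded away

# The conventional filter propagates covariance entries (size eps**2 in this example),
# while the square-root filter propagates their factors (size eps); this is why the
# square-root form can retain information that the standard update loses to rounding.
```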

Copyright information

© 2017 Springer International Publishing AG

About this chapter

Cite this chapter

Chui, C.K., Chen, G. (2017). Sequential and Square-Root Algorithms. In: Kalman Filtering. Springer, Cham. https://doi.org/10.1007/978-3-319-47612-4_7
