
Abstract

Let \(X\) be an \(n \times u\) matrix of given coefficients with full column rank, i.e. \(\operatorname{rank} X = u\), \(\beta\) a \(u \times 1\) random vector of unknown parameters, \(y\) an \(n \times 1\) random vector of observations, \(D(y|\sigma^2) = \sigma^2 P^{-1}\) the \(n \times n\) covariance matrix of \(y\), \(\sigma^2\) the unknown random variable called the variance factor or variance of unit weight, and \(P\) the known positive definite weight matrix of the observations. Then

\( X\beta = E(y|\beta) \quad \text{with} \quad D(y|\sigma^2) = \sigma^2 P^{-1} \)

is called a linear model.
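The model in the abstract can be illustrated numerically. The sketch below, a minimal example and not taken from the chapter itself, simulates observations \(y\) with covariance \(\sigma^2 P^{-1}\) and recovers \(\beta\) with the standard weighted least-squares estimator \(\hat{\beta} = (X^{\mathsf T} P X)^{-1} X^{\mathsf T} P y\); the symbols \(X\), \(y\), \(P\), \(\sigma^2\) follow the text, while the specific numbers are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

n, u = 8, 3                              # n observations, u unknown parameters
X = rng.standard_normal((n, u))          # coefficient matrix, full column rank u (a.s.)
beta_true = np.array([1.0, -2.0, 0.5])   # arbitrary "true" parameter vector
P = np.diag(rng.uniform(0.5, 2.0, n))    # known positive definite weight matrix
sigma2 = 0.01                            # variance factor (variance of unit weight)

# Draw y with E(y|beta) = X beta and D(y|sigma^2) = sigma^2 * P^{-1}
cov = sigma2 * np.linalg.inv(P)
y = X @ beta_true + rng.multivariate_normal(np.zeros(n), cov)

# Weighted least-squares estimate: solve (X^T P X) beta_hat = X^T P y
beta_hat = np.linalg.solve(X.T @ P @ X, X.T @ P @ y)
print(beta_hat)  # close to beta_true when sigma2 is small
```

Solving the normal equations with `np.linalg.solve` avoids forming the explicit inverse of \(X^{\mathsf T} P X\), which is both faster and numerically safer.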



Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

(2007). Linear Model. In: Introduction to Bayesian Statistics. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72726-2_4
