Least Squares Estimators¶
This section provides a practical introduction to least squares estimation in a linear algebra framework that is generic enough to cover a wide range of applications.
Fitting a straight line¶
The classical example of a least squares estimator is fitting a straight line \(f(x)=p_1 + p_2 x\) to a set of \(N\) measurements \(\{x_i, f_i\}\) for \(i=1,\ldots,N\). If all points lie exactly on the straight line we have:

\[
f_i = p_1 + p_2 x_i
\]
for all \(i=1,\ldots,N\). For data obtained through measurements there will be some unavoidable deviation from this ideal situation; instead we will have:

\[
f_i = p_1 + p_2 x_i + e_i
\]
where \(e_i\) is some (hopefully) small deviation (error) from the model. The goal of the least squares (LSQ) estimation procedure is to find the values of the parameters \(p_1\) and \(p_2\) such that the sum of the squared errors is minimal. The total squared error is:

\[
\epsilon(p_1,p_2) = \sum_{i=1}^{N} e_i^2 = \sum_{i=1}^{N} \left(f_i - p_1 - p_2 x_i\right)^2
\]
We are looking for the values of \(p_1\) and \(p_2\) that make this total squared error minimal. A necessary condition for a function \(\epsilon\) of two parameters to have an extreme value (either a local maximum or a local minimum) at \((p_1,p_2)\) is that the partial derivatives of \(\epsilon\) with respect to both parameters are zero:

\[
\frac{\partial \epsilon}{\partial p_1} = 0, \qquad \frac{\partial \epsilon}{\partial p_2} = 0
\]
Or equivalently:

\[
-2\sum_{i=1}^{N}\left(f_i - p_1 - p_2 x_i\right) = 0, \qquad
-2\sum_{i=1}^{N} x_i \left(f_i - p_1 - p_2 x_i\right) = 0
\]
This can be rewritten as:

\[
N\,p_1 + p_2\sum_{i=1}^{N} x_i = \sum_{i=1}^{N} f_i, \qquad
p_1\sum_{i=1}^{N} x_i + p_2\sum_{i=1}^{N} x_i^2 = \sum_{i=1}^{N} x_i f_i
\]
In a matrix vector notation we can write:

\[
\begin{pmatrix} N & \sum_i x_i \\ \sum_i x_i & \sum_i x_i^2 \end{pmatrix}
\begin{pmatrix} p_1 \\ p_2 \end{pmatrix}
=
\begin{pmatrix} \sum_i f_i \\ \sum_i x_i f_i \end{pmatrix}
\]
The optimal parameters are thus found as:

\[
\begin{pmatrix} p_1 \\ p_2 \end{pmatrix}
=
\begin{pmatrix} N & \sum_i x_i \\ \sum_i x_i & \sum_i x_i^2 \end{pmatrix}^{-1}
\begin{pmatrix} \sum_i f_i \\ \sum_i x_i f_i \end{pmatrix}
\]
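As a quick sanity check, here is a minimal sketch in Python with NumPy (the notes do not prescribe a language) that forms the \(2\times2\) system above and solves it for noisy samples of a known line; the function name `fit_line` and the test data are our own.

```python
import numpy as np

def fit_line(x, f):
    """Fit f(x) = p1 + p2*x by solving the 2x2 normal equations."""
    x = np.asarray(x, dtype=float)
    f = np.asarray(f, dtype=float)
    N = len(x)
    # Normal equations: [[N, sum x], [sum x, sum x^2]] [p1, p2]^T = [sum f, sum x f]^T
    A = np.array([[N, x.sum()],
                  [x.sum(), (x**2).sum()]])
    b = np.array([f.sum(), (x * f).sum()])
    return np.linalg.solve(A, b)

# Noisy samples of the line f(x) = 1 + 2x
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
f = 1 + 2 * x + 0.05 * rng.standard_normal(x.size)
p1, p2 = fit_line(x, f)
print(p1, p2)   # should be close to 1 and 2
```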
Now consider the problem of estimating the second order polynomial \(f(x)=p_1 + p_2 x + p_3 x^2\) given a set of observations \(\{x_i, f_i\}\). It is not too difficult to go through the entire process again: calculating the total squared error, differentiating with respect to the parameters, setting all derivatives equal to zero and solving the resulting set of equations. It is, however, a tedious and error-prone exercise that we would like to generalize to all sorts of least squares estimation problems. That is what we will do in the next section.
Least Squares Estimators¶
Let us look again at the problem of fitting a straight line. For each of the observations \(\{x_i, f_i\}\) the deviation from the model equals

\[
e_i = f_i - (p_1 + p_2 x_i)
\]
or equivalently:

\[
f_i = p_1 + p_2 x_i + e_i
\]
In a matrix vector equation we write:

\[
f_i = \begin{pmatrix} 1 & x_i \end{pmatrix} \begin{pmatrix} p_1 \\ p_2 \end{pmatrix} + e_i
\]
Note that we have such an equation for each \(i=1,\ldots,N\) and that we may combine all these observations in one matrix vector equation:

\[
\begin{pmatrix} f_1 \\ f_2 \\ \vdots \\ f_N \end{pmatrix}
=
\begin{pmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_N \end{pmatrix}
\begin{pmatrix} p_1 \\ p_2 \end{pmatrix}
+
\begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_N \end{pmatrix}
\]
Let us denote the matrices and vectors in the above equation with \(\v X\), \(\v p\), \(\v f\) and \(\v e\) respectively:

\[
\v f = \v X \v p + \v e
\]
or equivalently:

\[
\v e = \v f - \v X \v p
\]
The sum of the squared errors is equal to:

\[
\epsilon(\v p) = \v e\T \v e = (\v f - \v X\v p)\T(\v f - \v X\v p)
\]
Again we can calculate the parameter values, collected in the vector \(\v p = (p_1\;p_2\;\cdots\;p_n)\T\), by differentiating \(\epsilon(\v p)\) with respect to all \(p_i\) and solving for the parameter values that make all derivatives equal to zero. It is of course possible to expand the vectorial error equation in its elements and perform the derivatives explicitly (see vectorial differentiation).
It is easier to make use of the following vectorial derivatives. First let \(f\) be a scalar function of \(N\) parameters \(\v v[1],\ldots,\v v[N]\); then we define \(\D_{\v v} f\) as the vector of all partial derivatives:

\[
\D_{\v v} f = \begin{pmatrix} \dfrac{\partial f}{\partial \v v[1]} \\ \vdots \\ \dfrac{\partial f}{\partial \v v[N]} \end{pmatrix}
\]
The following 'vector derivative' expressions are easy to prove

\[
\D_{\v v}\left(\v a\T \v v\right) = \v a, \qquad
\D_{\v v}\left(\v v\T \v A \v v\right) = (\v A + \v A\T)\,\v v
\]
by calculating the derivatives separately for all components. With these derivatives (and the fact that \(\v X\T\v X\) is symmetric) it is straightforward to show that:

\[
\D_{\v p}\,\epsilon(\v p) = \D_{\v p}\left((\v f - \v X\v p)\T(\v f - \v X\v p)\right) = 2\,\v X\T\v X\,\v p - 2\,\v X\T\v f
\]
Solving \(\D_{\v p} \epsilon = \v 0\) we arrive at:

\[
\v X\T\v X\,\v p = \v X\T\v f
\]
If \(\v X\T \v X\) is non-singular we get:

\[
\v p = \left(\v X\T\v X\right)^{-1}\v X\T\v f
\]
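As a numerical check (a minimal sketch in Python with NumPy, which these notes do not prescribe), the code below applies this formula literally to a straight-line design matrix built from made-up data and verifies that the gradient \(2\v X\T\v X\v p - 2\v X\T\v f\) indeed vanishes at the estimate.

```python
import numpy as np

# Straight-line design matrix with rows (1, x_i) and the literal formula
# p = (X^T X)^{-1} X^T f (see the caveat below about explicit inverses).
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 30)
f = 0.5 - 1.5 * x + 0.1 * rng.standard_normal(x.size)

X = np.column_stack([np.ones_like(x), x])
p = np.linalg.inv(X.T @ X) @ X.T @ f

# At the optimum the gradient 2 X^T X p - 2 X^T f is (numerically) zero.
print(p)
print(2 * X.T @ X @ p - 2 * X.T @ f)
```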
If you read the notes from your linear algebra class again, you will undoubtedly find a remark somewhere that it is in general a bad idea to compute an explicit matrix inverse when the goal is to solve a system of linear equations. But that is exactly what we are doing here… Your linear algebra teacher was right nevertheless: in practice you should use a routine that solves the system of linear equations directly, rather than computing the matrix inverse explicitly.
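By way of illustration, the sketch below (same made-up data as above) computes the identical estimate without forming an explicit inverse, either by solving the normal equations or by calling a least squares routine directly; `np.linalg.solve` and `np.linalg.lstsq` are standard NumPy functions, the rest is our own example.

```python
import numpy as np

# Same straight-line fit without forming an explicit inverse: either solve
# the normal equations X^T X p = X^T f, or call a least squares routine
# that minimizes ||f - X p|| directly.
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 30)
f = 0.5 - 1.5 * x + 0.1 * rng.standard_normal(x.size)
X = np.column_stack([np.ones_like(x), x])

p_solve = np.linalg.solve(X.T @ X, X.T @ f)
p_lstsq, residuals, rank, sv = np.linalg.lstsq(X, f, rcond=None)
print(p_solve)
print(p_lstsq)   # identical up to rounding
```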
Examples of least squares estimators¶
In this section we use the expressions from the previous section to tackle some least squares estimation problems.
2nd-order polynomial¶
Consider the observations \(\{x_i,f_i\}\) where the assumed functional relation between \(x\) and \(f(x)\) is given by a second order polynomial:

\[
f(x) = p_1 + p_2 x + p_3 x^2
\]
The error vector in this case is:

\[
\v e = \v f - \v X\v p =
\begin{pmatrix} f_1 \\ f_2 \\ \vdots \\ f_N \end{pmatrix}
-
\begin{pmatrix} 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \\ \vdots & \vdots & \vdots \\ 1 & x_N & x_N^2 \end{pmatrix}
\begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix}
\]
The above scheme for a second order polynomial can be extended to work for a polynomial function of any order.
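As a small illustration (again Python/NumPy, with coefficients and noise made up for the example), the sketch below fits the second order polynomial through a design matrix with rows \((1, x_i, x_i^2)\); extending it to any order amounts to adding columns.

```python
import numpy as np

# Second order polynomial fit: design matrix rows are (1, x_i, x_i^2).
# The true coefficients below are made up for the illustration.
rng = np.random.default_rng(2)
x = np.linspace(-2.0, 2.0, 50)
f = 1.0 + 0.5 * x - 0.25 * x**2 + 0.1 * rng.standard_normal(x.size)

X = np.column_stack([np.ones_like(x), x, x**2])
p, *_ = np.linalg.lstsq(X, f, rcond=None)
print(p)   # close to (1.0, 0.5, -0.25)
```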
Multivariate polynomials¶
The facet model used in image processing and computer vision is an example of a least squares estimator of a multivariate polynomial function.
Consider a discrete image \(F\), known only through its samples \(F(i,j)\) for \(i=1,\ldots,M\) and \(j=1,\ldots,N\). In many situations we need to estimate the image function values in between the sample points, say at a point \((x_0,y_0)\). The usual approach is to first construct a function \(f(x,y)\) defined on the continuous plane that is consistent with \(F\), and then take \(f(x_0,y_0)\) as the value we are looking for. The function \(f\) can either be defined as a function that interpolates \(F\), i.e. \(f(i,j)=F(i,j)\) for all \(i\) and \(j\), or as a function that approximates the samples. Such an approximation is what we are after here.
Consider the point \((i_0,j_0)\) and define \(F_0(i,j)=F(i_0+i,j_0+j)\), i.e. we shift the origin to the point of interest. The task we now set ourselves is to approximate \(F_0\) with a low order polynomial function, here a second order polynomial in \(x\) and \(y\):

\[
f(x,y) = p_1 + p_2 x + p_3 y + p_4 x^2 + p_5 x y + p_6 y^2
\]
Such a simple function evidently cannot hope to approximate the image data over the entire domain. We therefore restrict the region in which to approximate \(F_0\) to a small neighborhood of the origin (say a \(3\times3\) neighborhood). For one point \((i,j)\) with image value \(F_0(i,j)\) we have:

\[
F_0(i,j) = \begin{pmatrix} 1 & i & j & i^2 & i j & j^2 \end{pmatrix}
\begin{pmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \\ p_5 \\ p_6 \end{pmatrix} + e(i,j)
\]
If we have \(K\) sample points in the considered neighborhood we can stack up the rows and obtain an \(\v X\) matrix. Similarly we obtain the image value vector \(\v F_0\), error vector \(\v e\) and parameter vector \(\v p\), leading to:

\[
\v F_0 = \v X\v p + \v e
\]
and the least squares estimator:

\[
\v p = \left(\v X\T\v X\right)^{-1}\v X\T\v F_0
\]
For the 9 points \((i,j)\) in the \(3\times3\) neighborhood centered around the origin we obtain:
leading to
The first row of the above matrix gives the weights to calculate the zero order coefficient of the 2nd order polynomial that approximates the image function:
The weights can be arranged in a spatial layout as:

\[
\frac{1}{9}\begin{pmatrix} -1 & 2 & -1 \\ 2 & 5 & 2 \\ -1 & 2 & -1 \end{pmatrix}
\]
The above template or kernel can be interpreted as follows: multiply the image values at the corresponding positions with the kernel values and sum the results to obtain the value of \(p_1\).
The same analysis can be made for the other coefficients \(p_i\) in the polynomial approximation of the image function.
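To make the above concrete, the sketch below carries out the facet model computation numerically for the \(3\times3\) neighborhood. It assumes the basis ordering \((1, x, y, x^2, xy, y^2)\) and a particular enumeration of the nine points (both are our choices, not fixed by the notes); a different ordering merely permutes rows and columns of the matrices but yields the same kernels.

```python
import numpy as np

# Facet model on a 3x3 neighborhood: basis (1, x, y, x^2, x*y, y^2),
# points enumerated row by row from (-1,-1) to (1,1).
points = [(i, j) for j in (-1, 0, 1) for i in (-1, 0, 1)]
X = np.array([[1, i, j, i * i, i * j, j * j] for (i, j) in points], dtype=float)

# Estimation weights: p = W F0 with W = (X^T X)^{-1} X^T (a 6x9 matrix).
W = np.linalg.inv(X.T @ X) @ X.T

# The first row of W, arranged spatially, is the kernel for p1;
# it is proportional to [[-1, 2, -1], [2, 5, 2], [-1, 2, -1]] / 9.
kernel_p1 = W[0].reshape(3, 3)
print(np.round(9 * kernel_p1))

# Applying the kernel to a 3x3 block of image values gives the least
# squares estimate p1 of the (smoothed) value at the centre pixel.
F0 = np.array([[1.0, 2.0, 1.0],
               [2.0, 5.0, 2.0],
               [1.0, 2.0, 1.0]])
print((kernel_p1 * F0).sum())
```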