5.1.5. Linear Regression from Basic Principles
5.1.5.1. A Statistical View on Regression
We consider a system with scalar input \(x\) and scalar output \(y\). We hypothesize that in the absence of ‘noise’ we have \(y = m(x)\). The exact analytical form of the function \(m\) is not known; all we have are examples \((x\ls i, y\ls i)\). Unfortunately we know that \(y\ls i \not= m(x\ls i)\) due to noise. We assume that the noise is iid for all \(x\) and normally distributed. This makes our observation a stochastic variable:

\[Y = m(x) + R\]
where \(R\) is the noise random variable. We assume \(R\sim\Normal(\mu,\sigma^2)\); without loss of generality we take \(\mu=0\), since a nonzero mean could be absorbed into \(m\). Thus

\[Y \sim \Normal(m(x), \sigma^2)\]
The goal of linear regression is to come up with an expression for the function \(m\). To make this mathematically tractable we approximate \(m\) with a parameterized hypothesis function \(h_{\v\theta}\), arriving at:

\[Y = h_{\v\theta}(x) + R\]

For scalar input a common choice is the linear hypothesis \(h_{\v\theta}(x) = \theta_0 + \theta_1 x\).
With our learning set \((x\ls i,y\ls i)\) for \(i=1,\ldots,m\) the regression task is to find the parameter vector \(\v\theta\) that best ‘fits’ the data.
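As a concrete illustration, the sketch below simulates such a learning set. All specifics (the ‘true’ function \(m(x) = 2x + 1\), the noise level \(\sigma = 0.5\), the sample size) are hypothetical choices for this example only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 'true' system m(x) = 2x + 1 (unknown in practice).
def m_true(x):
    return 2.0 * x + 1.0

sigma = 0.5                                    # assumed noise standard deviation
x = rng.uniform(0.0, 5.0, size=50)             # the x^(i) values
y = m_true(x) + rng.normal(0.0, sigma, 50)     # y^(i) = m(x^(i)) + noise
```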
5.1.5.2. Maximum Likelihood Estimator
We are looking for the parameter vector \(\v\theta\) that makes the observed data most plausible. This can be cast in a form that is called a maximum likelihood estimator. Let \(f\) be the joint probability density function of all observations \(y\ls i\) given the \(x\ls i\) values and the parameter vector \(\v\theta\). The MLE then can be written as:

\[\hat{\v\theta} = \arg\max_{\v\theta}\; f(y\ls 1,\ldots,y\ls m \mid x\ls 1,\ldots,x\ls m;\, \v\theta)\]
As we have assumed that the noise in the observations is iid and normally distributed, we can write:

\[f(y\ls 1,\ldots,y\ls m \mid x\ls 1,\ldots,x\ls m;\, \v\theta) = \prod_{i=1}^{m} \frac{1}{\sqrt{2\pi}\,\sigma}\, \exp\!\left(-\frac{\bigl(y\ls i - h_{\v\theta}(x\ls i)\bigr)^2}{2\sigma^2}\right)\]
Our goal is to maximize the probability density; because the logarithm is monotonically increasing, we may equivalently maximize its logarithm. That function of \(\v\theta\) is called the log likelihood:

\[\ell(\v\theta) = m \log\frac{1}{\sqrt{2\pi}\,\sigma} \;-\; \frac{1}{2\sigma^2}\sum_{i=1}^{m} \bigl(y\ls i - h_{\v\theta}(x\ls i)\bigr)^2\]
Our goal is to maximize \(\ell\). As the first term and the factor \(1/(2\sigma^2)\) do not depend on \(\v\theta\), this is equivalent to minimizing the sum of squared errors:

\[\hat{\v\theta} = \arg\min_{\v\theta} \sum_{i=1}^{m} \bigl(y\ls i - h_{\v\theta}(x\ls i)\bigr)^2\]
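A minimal numerical check of this equivalence, reusing the simulated data from the earlier sketch (all names and constants are again hypothetical): the least-squares solution for the linear hypothesis \(h_{\v\theta}(x)=\theta_0+\theta_1 x\), computed here via the design matrix, also maximizes the log likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 5.0, size=50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=50)   # simulated data, as before
sigma = 0.5

def log_likelihood(theta, x, y, sigma):
    """ell(theta) for h_theta(x) = theta[0] + theta[1]*x under iid N(0, sigma^2) noise."""
    r = y - (theta[0] + theta[1] * x)
    return len(x) * np.log(1.0 / (np.sqrt(2 * np.pi) * sigma)) - np.sum(r**2) / (2 * sigma**2)

# Least-squares fit: design matrix X = [1  x], minimize ||X theta - y||^2.
X = np.column_stack([np.ones_like(x), x])
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Perturbing theta_hat in any direction lowers the log likelihood.
for delta in ([0.1, 0.0], [0.0, 0.1], [-0.1, 0.1]):
    assert (log_likelihood(theta_hat, x, y, sigma)
            > log_likelihood(theta_hat + np.asarray(delta), x, y, sigma))

print(theta_hat)   # close to the true parameters (1, 2)
```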
This is exactly the expression we started with when introducing linear regression. In this section we have seen that linear regression is optimal, in the maximum likelihood sense, in case the noise is normally distributed. In practice that is not very often the case (especially in the machine learning context), but even then linear regression is often used, mostly with the addition of a regularization term.
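As a final sketch (not part of the derivation above), ridge regression is one such regularized variant: it adds a penalty \(\lambda\lVert\v\theta\rVert^2\) to the sum of squared errors, and the closed form below follows from setting the gradient of that penalized objective to zero. The value of \(\lambda\) is an arbitrary choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 5.0, size=50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=50)

# Ridge regression: minimize sum of squared errors + lam * ||theta||^2.
# Closed form: theta = (X^T X + lam I)^{-1} X^T y.  lam = 1.0 is arbitrary;
# in practice the intercept theta_0 is often left unpenalized.
X = np.column_stack([np.ones_like(x), x])
lam = 1.0
theta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

print(theta_ridge)   # shrunk towards zero compared to the unregularized fit
```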