2.10. Estimators for Distribution Parameters
Many statistical and machine learning methods are based on learning a probability distribution from data. Assume that the random variable \(X\) is distributed according to the probability mass function \(p_X(x;\theta)\) or probability density function \(f_X(x;\theta)\), where \(\theta\) denotes the parameters characterizing the distribution.
The question then is: can we give an estimate of the parameters \(\theta\) given a sample of \(m\) values drawn from this distribution? In machine learning such a sample would be called the learning set.
A very common way to construct such an estimator is maximum likelihood estimation.
2.10.1. Maximum Likelihood Estimator
Consider a discrete distribution with parameters \(\theta\). The pmf is then written as \(\P(X=x) = p_X(x;\theta)\). We assume the values in the sample \(x\ls 1,\ldots,x\ls m\) are realizations of \(m\) independent and identically distributed random variables \(X_1,\ldots,X_m\). Therefore:
\[\P(X_1=x\ls 1,\ldots,X_m=x\ls m) = p_{X_1 X_2\cdots X_m}(x\ls 1,\ldots,x\ls m;\theta)\]
where \(p_{X_1 X_2\cdots X_m}\) is the joint probability mass function.
Assuming the random variables are iid we have:
\[p_{X_1 X_2\cdots X_m}(x\ls 1,\ldots,x\ls m;\theta) = \prod_{i=1}^{m} p_X(x\ls i;\theta)\]
This probability can also be interpreted as a function of \(\theta\); it then gives the likelihood for the parameter \(\theta\) to explain the data:
\[\ell(\theta) = \prod_{i=1}^{m} p_X(x\ls i;\theta)\]
The maximum likelihood estimator then is:
\[\hat\theta = \arg\max_{\theta}\, \ell(\theta)\]
In practice we often look at the log likelihood:
\[\hat\theta = \arg\max_{\theta}\, \log\ell(\theta) = \arg\max_{\theta}\, \sum_{i=1}^{m} \log p_X(x\ls i;\theta)\]
In the following sections we will look at a few distributions and their MLEs. Evidently, the final expression for the estimator depends on the probability mass function \(p_X\) and the way it depends on \(\theta\).
In case \(X\) is a continuous random variable the probability density function is used:
\[\hat\theta = \arg\max_{\theta}\, \sum_{i=1}^{m} \log f_X(x\ls i;\theta)\]
Also note that the base of the logarithm is not important for our purpose.
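As an aside, the maximization can also be done numerically. The sketch below is a minimal illustration in Python, assuming NumPy and SciPy are available and using a small made-up 0/1 sample; it maximizes the Bernoulli log likelihood (derived in the next section) and compares the result with the closed-form estimator.

import numpy as np
from scipy.optimize import minimize_scalar

# made-up sample of Bernoulli outcomes (0/1)
x = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])

def neg_log_likelihood(p):
    # negative log likelihood of an iid Bernoulli(p) sample
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

# minimize the negative log likelihood over p in (0, 1)
result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(result.x)    # numerical MLE, approximately 0.7
print(x.mean())    # closed-form MLE m_1 / m, derived in the next section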
2.10.2. Bernoulli Distribution
The one parameter that needs to be estimated for the Bernoulli distribution is the probability \(p = \P(X=1)\). Using the sample \(x\ls 1,\ldots,x\ls m\), an obvious way to calculate this is:
\[\hat p = \frac{1}{m}\sum_{i=1}^{m} x\ls i\]
i.e. we just calculate the relative frequency of the value \(X=1\) in our sample.
This estimator is the maximum likelihood estimator (MLE) for \(p\), as we will show. The values in the sample are the outcomes of \(m\) iid random variables \(X_1,\ldots,X_m\). Assuming \(X_i\sim\Bernoulli(p)\) we have:
\[\P(X_1=x\ls 1,\ldots,X_m=x\ls m) = \prod_{i=1}^{m} p^{x\ls i}\,(1-p)^{1-x\ls i}\]
This probability, when viewed as a function of \(p\) for a fixed sample \(x\ls 1,\ldots,x\ls m\), is called the likelihood function:
\[\ell(p) = \prod_{i=1}^{m} p^{x\ls i}\,(1-p)^{1-x\ls i}\]
The maximum likelihood estimator for \(p\) now finds the value of \(p\) that maximizes \(\ell(p)\). Instead of maximizing \(\ell\) we maximize the logarithm of \(\ell\), leading to the maximum likelihood estimator:
\[\hat p = \arg\max_{p}\, \log\ell(p)\]
where
\[\log\ell(p) = \sum_{i=1}^{m}\left( x\ls i \log p + (1-x\ls i)\log(1-p) \right) = m_1 \log p + m_0 \log(1-p)\]
where \(m_1\) is the sum of the \(x\ls i\), i.e. the number of values 1 in the sample, and \(m_0\) is the number of values 0 in the sample.
We can find the maximal value by calculating the derivative of \(\log\ell\) with respect to \(p\) and solving \((\log\ell)'(p)=0\). We have:
\[\frac{d}{dp}\log\ell(p) = \frac{m_1}{p} - \frac{m_0}{1-p}\]
Setting this equal to 0 and solving for \(p\) leads to:
\[\hat p = \frac{m_1}{m_1+m_0} = \frac{m_1}{m}\]
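As a small worked example (with a made-up sample): for the sample \((1, 1, 0, 1)\) we have \(m_1=3\) and \(m_0=1\), so
\[\log\ell(p) = 3\log p + \log(1-p), \qquad \frac{d}{dp}\log\ell(p) = \frac{3}{p} - \frac{1}{1-p} = 0 \quad\Longrightarrow\quad \hat p = \frac{3}{4}.\]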
With relatively small sample sizes you might run into the situation where \(m_1=0\) or \(m_0=0\). This leads to either \(\hat p=0\) or \(\hat p=1\), a situation that you would like to avoid in many machine learning applications. In such cases add 1 Laplace smoothing is often used. We then use:
\[\hat p = \frac{m_1 + 1}{m + 2}\]
Note that in case we have no data at all this estimator gives \(\hat p=\tfrac{1}{2}\), i.e. a uniform distribution.
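The sketch below, assuming NumPy is available and using a small made-up sample, computes both the plain MLE and the add 1 Laplace smoothed estimate:

import numpy as np

def bernoulli_mle(x):
    # hat p = m_1 / m: the relative frequency of the value 1 in the sample
    return np.mean(x)

def bernoulli_laplace(x):
    # add 1 Laplace smoothing: hat p = (m_1 + 1) / (m + 2)
    m1 = np.sum(x)
    return (m1 + 1) / (len(x) + 2)

x = np.array([1, 1, 0, 1])
print(bernoulli_mle(x))      # 0.75
print(bernoulli_laplace(x))  # (3 + 1) / (4 + 2) = 0.666...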
2.10.3. Categorical Distribution
For the categorical distribution there are \(K\) probabilities \(p_1,\ldots,p_K\) to be estimated from the sample \(x\ls 1,\ldots,x\ls m\). The possible values range from \(1\) to \(K\). Let’s denote the number of outcomes equal to \(i\) with \(m_i\), i.e.:
\[m_i = \sum_{j=1}^{m} \left[\, x\ls j = i \,\right]\]
where \(\left[ B \right]\) is the Iverson bracket:
\[\left[ B \right] = \begin{cases} 1 & \text{if } B \text{ is true}\\ 0 & \text{otherwise} \end{cases}\]
The MLE for \(p_i\) is:
\[\hat p_i = \frac{m_i}{m}\]
The estimator using add 1 Laplace smoothing is given by:
\[\hat p_i = \frac{m_i + 1}{m + K}\]
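A minimal Python sketch, again with NumPy and a made-up sample of values in \(\{1,\ldots,K\}\), computing both estimators:

import numpy as np

def categorical_mle(x, K):
    # counts m_i for the values 1..K (minlength=K+1 because values start at 1)
    counts = np.bincount(x, minlength=K + 1)[1:]
    return counts / len(x)

def categorical_laplace(x, K):
    # add 1 Laplace smoothing: hat p_i = (m_i + 1) / (m + K)
    counts = np.bincount(x, minlength=K + 1)[1:]
    return (counts + 1) / (len(x) + K)

x = np.array([1, 3, 3, 2, 3, 1])
print(categorical_mle(x, K=3))      # [2/6, 1/6, 3/6]
print(categorical_laplace(x, K=3))  # [3/9, 2/9, 4/9]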
2.10.4. Normal Distribution
Let \(X\sim\Normal(\mu,\sigma^2)\) and let \(x\ls 1,\ldots,x\ls m\) be a sample from \(X\).
The maximum likelihood estimators for \(\mu\) and \(\sigma^2\) are:
\[\hat\mu = \frac{1}{m}\sum_{i=1}^{m} x\ls i\]
and
\[\hat\sigma^2 = \frac{1}{m}\sum_{i=1}^{m} \left( x\ls i - \hat\mu \right)^2\]
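A minimal Python sketch (the sample values are made up) computing both estimators. Note that the MLE of \(\sigma^2\) divides by \(m\), not \(m-1\), which is what np.var does by default (ddof=0):

import numpy as np

x = np.array([2.1, 1.9, 2.5, 2.3, 1.8])

mu_hat = np.mean(x)                      # MLE of mu: the sample mean
sigma2_hat = np.mean((x - mu_hat) ** 2)  # MLE of sigma^2: divide by m, not m - 1

print(mu_hat, sigma2_hat)
print(np.var(x))                         # same as sigma2_hat (np.var uses ddof=0 by default)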