One of the goals of inferential statistics is to estimate unknown population parameters. This estimation is performed by constructing confidence intervals from statistical samples. One question is, “How good an estimator do we have?” In other words, “How accurate is our statistical process, in the long run, at estimating the population parameter?” One way to judge the value of an estimator is to consider whether it is unbiased. This analysis requires us to find the expected value of our statistic.

### Parameters and Statistics

We start by considering parameters and statistics. We consider random variables from a known type of distribution, but with an unknown parameter in this distribution. This parameter may be part of a population, or it could be part of a probability density function. We also have a function of our random variables, and this is called a statistic. The statistic *U(X_{1}, X_{2}, . . . , X_{n})* estimates the parameter T, and so we call it an estimator of T.
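As a concrete sketch of this setup, the snippet below draws a sample from a distribution whose parameter we pretend not to know, and computes a statistic from that sample. The particular choices here (a normal distribution, mean 5.0, the sample mean as the statistic) are illustrative assumptions, not part of the discussion above.

```python
import random

# Hypothetical setup: a normal population whose mean MU is the unknown
# parameter (known distribution family, unknown parameter, as above).
random.seed(0)
MU = 5.0  # the parameter we pretend not to know

# Draw a random sample X_1, ..., X_n from the distribution.
sample = [random.gauss(MU, 2.0) for _ in range(100)]

# A statistic is simply a function of the sample; here, the sample mean.
def sample_mean(xs):
    return sum(xs) / len(xs)

print(sample_mean(sample))  # one realization of the estimator
```

Each new sample would give a different value of the statistic; the question of bias concerns the long-run average of these values.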

### Unbiased and Biased Estimators

We now define unbiased and biased estimators. We want our estimator to match our parameter in the long run. In more precise language, we want the expected value of our statistic to equal the parameter. If this is the case, then we say that our statistic is an unbiased estimator of the parameter.
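The definition can be checked exactly on a small example. For a fair six-sided die the population mean is 3.5, and because a sample of size 2 has only 36 equally likely outcomes, we can enumerate all of them and compute the expected value of the sample mean with no simulation error (the die and sample size are my own illustrative choices):

```python
from itertools import product
from fractions import Fraction

# Population: a fair six-sided die, so the population mean is 7/2.
faces = range(1, 7)
population_mean = Fraction(sum(faces), 6)

# All 36 equally likely samples of size 2.
samples = list(product(faces, repeat=2))

# Expected value of the statistic (the sample mean) over all samples.
expected_value = Fraction(sum(a + b for a, b in samples), 2 * len(samples))

print(expected_value == population_mean)  # True: the statistic is unbiased
```

Exact arithmetic with `Fraction` makes the equality a true identity rather than a floating-point approximation.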

If an estimator is not an unbiased estimator, then it is a biased estimator. Although a biased estimator's expected value does not align with its parameter, there are many practical instances in which a biased estimator can be useful. One such case is the plus-four estimate, which is used to construct a confidence interval for a population proportion.
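Two sketches of biased estimators follow. The first is the "divide by n" sample variance, whose expected value is (n − 1)/n times σ² rather than σ²; a simulation makes the gap visible. The second is the plus-four proportion estimate mentioned above: add two successes and two failures before forming the proportion. The function names and the particular numbers (σ² = 4, n = 5, 9 successes out of 10) are my own illustrative choices.

```python
import random

random.seed(1)

# 1) Sample variance with divisor n: a biased estimator of sigma^2,
#    since its expected value is (n - 1)/n * sigma^2.
def variance_divide_by_n(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

SIGMA2, N, TRIALS = 4.0, 5, 50_000

# Approximate the expected value by averaging over many samples.
avg = sum(
    variance_divide_by_n([random.gauss(0.0, 2.0) for _ in range(N)])
    for _ in range(TRIALS)
) / TRIALS
print(avg)  # near (5 - 1)/5 * 4.0 = 3.2, noticeably below 4.0

# 2) Plus-four proportion estimate: biased, but it produces
#    better-behaved confidence intervals for small samples.
def plus_four(successes, n):
    return (successes + 2) / (n + 4)

print(plus_four(9, 10))  # 11/14, pulled toward 1/2 from the raw 0.9
```

The plus-four estimate deliberately trades a little bias for intervals whose coverage is closer to the nominal level when n is small.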

### Example for Means

To see how this idea works, we will examine an example that pertains to the mean. The statistic

*(X_{1} + X_{2} + . . . + X_{n})/n*

is known as the sample mean. We suppose that the random variables are a random sample from the same distribution with mean μ. This means that the expected value of each random variable is μ.

When we calculate the expected value of our statistic, we see the following:

*E[(X_{1} + X_{2} + . . . + X_{n})/n] = (E[X_{1}] + E[X_{2}] + . . . + E[X_{n}])/n = (nE[X_{1}])/n = E[X_{1}] =* μ.

Since the expected value of the statistic matches the parameter that it estimates, this means that the sample mean is an unbiased estimator of the population mean.
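The calculation above can also be checked numerically: average the sample mean over many independent samples, and the grand average should settle near μ. The choice of μ = 10.0, the exponential distribution, and the sample size are arbitrary assumptions for this sketch.

```python
import random

random.seed(42)
MU, N, TRIALS = 10.0, 8, 100_000

# Average the sample mean over many independent samples of size N.
grand_total = 0.0
for _ in range(TRIALS):
    sample = [random.expovariate(1.0 / MU) for _ in range(N)]  # mean MU
    grand_total += sum(sample) / N  # the sample mean of this sample

print(grand_total / TRIALS)  # settles near MU = 10.0
```

Note that unbiasedness of the sample mean did not require normality; the derivation only used linearity of expectation, which is why an exponential distribution works just as well here.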