Point Estimates and Confidence Intervals
Introduction
Point estimates are statistics calculated from sample information that are used to estimate
population parameters. Example: Best Buy Inc. wanted to estimate the average age of its
television buyers. It selects a sample of the last 50 buyers, determines the age of each buyer,
and calculates the average age of the buyers in the sample. This sample mean is a point estimate
of the population mean. The sample mean x̄ is a point estimate of the population mean μ; p
(the sample proportion) is a point estimate of π (the population proportion); and s (the sample
standard deviation) is a point estimate of σ (the population standard deviation). In general, a
population parameter will be symbolized θ (pronounced theta), so θ could stand for the mean μ,
the standard deviation σ, or the proportion π.
Point estimates only tell part of the story. A point estimate is a single value derived from a
sample and used to estimate a population value. We expect the point estimate to be as close to
the population parameter as possible, so we would like to measure how close it actually is.
Confidence intervals provide such a measure.
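As a minimal sketch of the three point estimates named above, the snippet below computes x̄, s, and a sample proportion p from a hypothetical sample of 50 buyer ages (the data are simulated, not the Best Buy figures; the under-35 cutoff for the proportion is an assumption for illustration):

```python
# Hypothetical sample of 50 buyer ages (simulated, not real Best Buy data).
import random
import statistics

random.seed(1)
ages = [random.gauss(40, 8) for _ in range(50)]

x_bar = statistics.mean(ages)          # sample mean: point estimate of mu
s = statistics.stdev(ages)             # sample std dev: point estimate of sigma
p = sum(a < 35 for a in ages) / 50     # sample proportion: point estimate of pi

print(round(x_bar, 1), round(s, 1), round(p, 2))
```

Each printed value is a single number, which is exactly why a point estimate "only tells part of the story": it carries no information about its own precision.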
Confidence Interval
A Confidence Interval is a range of values, created from sample data, within which the
population parameter is likely to occur with a specific probability. This specific probability
is called the level of confidence. Suppose we select a sample of 50 young executives and
record how many hours each of them worked last week. We calculate the mean of this sample
of 50 people and use the sample mean as a point estimate of the unknown population mean. That
estimate is a single value. A more informative approach is to give a range of values within
which we expect the population parameter to occur, i.e. what we call a confidence interval.
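The work-hours example above can be sketched as follows; the data are simulated under assumed population values (mean 45 hours, standard deviation 6), and z = 1.96 is the standard value for a 95% level of confidence:

```python
# Sketch: 95% confidence interval for the mean weekly hours of 50 young
# executives. The hours below are simulated, not real survey data.
import math
import random
import statistics

random.seed(2)
hours = [random.gauss(45, 6) for _ in range(50)]

n = len(hours)
x_bar = statistics.mean(hours)     # point estimate of the population mean
s = statistics.stdev(hours)        # sample standard deviation
z = 1.96                           # z-value for a 95% level of confidence

margin = z * s / math.sqrt(n)      # margin of error
lower, upper = x_bar - margin, x_bar + margin
print(f"95% CI: ({lower:.1f}, {upper:.1f})")
```

Note the design choice: with n = 50 the normal z-value is a common classroom approximation; for small samples the t-distribution would be used instead.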
Estimator Requirements
1. Consistent
An estimator is said to be consistent if it concentrates ever more tightly on its target as
the sample grows: if its variance tends to 0 and it is unbiased (at least asymptotically),
the estimator converges to the parameter as the sample size is enlarged to infinity
(n → ∞). In other words, the larger the sample size, the closer the statistical estimator
is to the population parameter.
2. Efficient
An estimator is said to be efficient if, among the unbiased estimators of a parameter, it
has the smallest variance.
3. Unbiased
An estimator is said to be unbiased if the expected value of the sample statistic equals
the population parameter.
Unbiased Estimator
An unbiased estimator is one whose sampling distribution is centered precisely on the target
parameter. A biased estimator, by contrast, is not on target: it systematically misses.
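The classic illustration of bias is the sample variance: dividing the sum of squared deviations by n gives a biased estimator of σ², while dividing by n − 1 gives an unbiased one. The simulation below assumes made-up population values (σ² = 4, samples of size 5) and averages each formula over many samples:

```python
# Simulation: divisor n underestimates sigma^2 on average; divisor n - 1
# centers on the target. Population values are assumed for illustration.
import random

random.seed(3)
mu, sigma2 = 0.0, 4.0          # assumed population mean and variance (sigma = 2)
n, trials = 5, 20000

biased_sum = unbiased_sum = 0.0
for _ in range(trials):
    sample = [random.gauss(mu, 2.0) for _ in range(n)]
    m = sum(sample) / n
    ss = sum((x - m) ** 2 for x in sample)
    biased_sum += ss / n          # divides by n: biased low
    unbiased_sum += ss / (n - 1)  # divides by n - 1: unbiased

print(biased_sum / trials, unbiased_sum / trials)  # roughly 3.2 vs 4.0
```

The biased average lands near σ²(n − 1)/n = 3.2 rather than 4.0, i.e. it misses the target systematically, not just by sampling noise.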
Efficient Estimator
The figure above shows three estimators θ̂1, θ̂2, and θ̂3 obtained from three samples, where
the distribution of sample 1 has variance σ1², sample 2 has variance σ2², and sample 3 has
variance σ3². Since sample 1 has the smallest variance, θ̂1 is said to be the efficient
estimator.
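Efficiency can be seen by comparing two estimators of the same target. A standard example (my choice for illustration, not taken from the figure): for a normal population both the sample mean and the sample median estimate μ without bias, but the mean has the smaller sampling variance and is therefore the more efficient estimator:

```python
# Sketch: compare the sampling variance of the mean and the median as
# estimators of a normal population mean. Population values are assumed.
import random
import statistics

random.seed(4)
mu, n, trials = 10.0, 25, 5000

means, medians = [], []
for _ in range(trials):
    sample = [random.gauss(mu, 3.0) for _ in range(n)]
    means.append(statistics.mean(sample))
    medians.append(statistics.median(sample))

var_mean = statistics.pvariance(means)      # approx sigma^2 / n
var_median = statistics.pvariance(medians)  # larger for normal data
print(var_mean < var_median)
```

For normal data the median's sampling variance is roughly π/2 ≈ 1.57 times that of the mean, which is why the mean is the preferred (efficient) estimator here.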
Consistent Estimator
The figure above shows that sample size 1, n1, is smaller than sample size 2, n2, which in
turn is smaller than sample size 3, n3. It can be seen that the larger the sample size, the
closer the estimated statistic is to the population parameter: the sampling distribution grows
narrower and shifts steadily toward the parameter (to the left in the figure).
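The narrowing described above can be simulated directly: the spread of the sample mean across repeated samples shrinks roughly like σ/√n as n grows. The population values below are assumed for illustration, and `spread_of_sample_mean` is a helper name of my own:

```python
# Simulation of consistency: the sample mean concentrates around the
# population mean as n grows. Population mean/sd are assumed values.
import random
import statistics

random.seed(5)
mu = 50.0

def spread_of_sample_mean(n, trials=2000):
    """Std. dev. of the sample mean across repeated samples of size n."""
    means = [statistics.mean(random.gauss(mu, 10.0) for _ in range(n))
             for _ in range(trials)]
    return statistics.pstdev(means)

for n in (10, 100, 1000):
    print(n, round(spread_of_sample_mean(n), 2))
```

Each tenfold increase in n cuts the spread by about √10, matching the σ/√n rate that makes the sample mean a consistent estimator of μ.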