
The Law of Large Numbers and the Central Limit Theorem Study Guide

Updated on Oct 5, 2011

Introduction: The Law of Large Numbers and the Central Limit Theorem

If a random sample of size n is taken from a population having a normal distribution with mean μ and standard deviation σ, then the sampling distribution of the sample mean is also normal, with mean μ and standard deviation σ/√n. What happens if the population distribution is not normal? Do we really get a better estimate of the population mean, or of other parameters for that matter, if we take a larger sample? The Law of Large Numbers and the Central Limit Theorem will help us answer these questions.

Law of Large Numbers

We learned in the last lesson that if a random sample is taken from a normal distribution, then the sampling distribution of the sample mean is also normal. What happens when the population distribution is not normal? If a random sample of size n is selected from any distribution with mean μ and standard deviation σ, the sampling distribution of the sample mean x̄ has mean μ and standard deviation σ/√n. This is true no matter what the form of the population distribution.

The standard deviation of the sampling distribution of x̄ decreases as the sample size increases, and the sampling distribution of x̄ is centered on the population mean. Thus, we know that, at least in some sense, the sample mean is getting closer to the population mean as the sample size increases. However, the Law of Large Numbers tells us even more. The Law of Large Numbers states that, provided the sample size is large enough, the sample mean will be "close" to the population mean μ with a specified level of probability, regardless of how small a difference "close" is defined to be. Unless we observe every member of the population, we can never be sure that the sample mean equals the population mean. However, in practice, we can be assured that, for any specific population, x̄ tends to get closer to μ as the sample size increases. In the previous lesson, we talked about the sample mean being a "better" estimate of μ as n increases. Here, we are saying that, as the sample size increases, we are doing better because the sample mean has an increased probability of being close to the population mean. Though we will focus on the mean, the Law of Large Numbers applies to other estimators, such as the variance, as well.
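As a rough illustration (not part of the original lesson), the short simulation below draws increasingly large samples from a hypothetical population — uniform on [0, 10], so μ = 5 — and shows the sample mean settling toward μ as n grows:

```python
import random

random.seed(1)

# Hypothetical population: uniform on [0, 10], so mu = 5.
def sample_mean(n):
    """Mean of a random sample of size n from the population."""
    return sum(random.uniform(0, 10) for _ in range(n)) / n

# Larger samples tend to land closer to mu = 5.
for n in (10, 100, 10_000):
    print(f"n = {n:>6}: sample mean = {sample_mean(n):.3f}")
```

The exact printed values depend on the seed; the point is only that the spread around 5 shrinks as n increases, as the Law of Large Numbers predicts.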

The Law of Large Numbers

Provided the sample size is large enough, the sample mean will be "close" to the population mean μ with a specified level of probability, regardless of how small a difference "close" is defined to be.

Central Limit Theorem

We now know that the sampling distribution of the sample mean has a mean of μ and a standard deviation of σ/√n no matter what form the population distribution has, as long as the population has a finite mean and standard deviation. However, this alone is not enough for us to know what the shape of the sampling distribution is. Surprisingly, if the sample size is sufficiently large, the sampling distribution of x̄ is approximately normal! This follows from a basic statistical result, the Central Limit Theorem. The Central Limit Theorem states that, if a sufficiently large random sample of size n is selected from a population with finite mean μ and finite standard deviation σ, the sampling distribution of the sample mean is approximately normal with mean μ and standard deviation σ/√n. That is, as long as the sample is large enough, the normal distribution serves as a good model for the sampling distribution of x̄, and it does not matter whether the population is normal or nonnormal, or even discrete or continuous. We will illustrate this with a poll.

Suppose a poll is conducted to determine what proportion of the registered voters in a large city think that the sales tax should be increased so that more recreational facilities, such as public parks and swimming pools, can be developed. One hundred registered voters in the community could be surveyed. A response supporting the increased sales tax is recorded as X = 1, and a response against the increased sales tax is recorded as X = 0. Then the sample mean is the sample proportion, which we will denote as p̂. That is, p̂ (or equivalently x̄) is the sample mean that estimates the population proportion p, the proportion of the registered voters in the large city who favor the increase in sales tax. Because each registered voter polled either agrees or disagrees (only two possible responses), the random selection of a single registered voter whose response is recorded is equivalent to a Bernoulli trial. Suppose that 68% of the registered voters support the increased sales tax. (Of course, we do not really know what proportion favors the sales tax. If we did, there would be no need for the survey.) A graph of the probabilities associated with the responses is shown in Figure 13.1. The population is discrete and not symmetric; it is certainly not normal!

Figure 13.1

Samples of size 100 were drawn from a Bernoulli distribution with p = 0.68, and the sample mean x̄ = p̂ was calculated for each sample. Assuming samples of size 100 are large enough for the Central Limit Theorem to apply, we expect the sampling distribution of p̂ to be approximately normal. Further, because the mean and variance of a Bernoulli random variable are p and p(1 – p), respectively, we anticipate the mean and standard deviation of the sampling distribution of p̂ to be p = 0.68 and

√(p(1 – p)/n) = √(0.68 × 0.32/100) ≈ 0.0466, respectively.

The Central Limit Theorem

If a sufficiently large random sample of size n is selected from a population with finite mean μ and finite standard deviation σ, the sampling distribution of the sample mean is approximately normal with mean μ and standard deviation σ/√n.

A histogram of the simulated sampling distribution based on 10,000 samples of size 100 from a Bernoulli distribution with p = 0.68 is shown in Figure 13.2. The bell-shaped appearance suggests that the distribution is approximately normal, as predicted by the Central Limit Theorem. The average of the 10,000 values is 0.6807, and the sample standard deviation of the 10,000 values is 0.04668. These are very close to the values of p and √(p(1 – p)/n) that we anticipated based on the properties of the sampling distribution of p̂. (Remember, as long as the sample is randomly selected, these properties hold regardless of the shape of the population distribution, provided it has a finite mean and a finite variance.)
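A simulation of this kind can be reproduced in a few lines. The sketch below follows the same recipe — 10,000 samples of size 100 from a Bernoulli distribution with p = 0.68 — though the seed here is our own, so the summary values will be close to, not identical to, those quoted above:

```python
import random
import statistics

random.seed(0)
p, n, reps = 0.68, 100, 10_000

# Each replicate: the sample proportion (= sample mean) of n Bernoulli(p) draws.
means = [sum(random.random() < p for _ in range(n)) / n for _ in range(reps)]

print(round(statistics.mean(means), 4))   # close to p = 0.68
print(round(statistics.stdev(means), 4))  # close to sqrt(p*(1-p)/n) ≈ 0.0466
```

Plotting a histogram of `means` would show the same bell shape as Figure 13.2.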

Figure 13.2

From the previous polling example, we can conclude that when p = 0.68, it is reasonable to assume that the sampling distribution of the sample proportion is approximately normal when the sample size is 100. Would a sample size of 50 have been large enough? What about 10? Does it depend on p? Such questions have been explored, and guidelines have been developed. For proportions, the guidelines apply to two cases: (1) a population in which a fixed proportion has a certain trait, opinion, etc., or (2) a repeatable experiment in which a certain outcome occurs with a constant probability p. Suppose we take a random sample of size n in the first case and repeat the experiment n times in the second case. If np ≥ 10 and n(1 – p) ≥ 10, then the sample size n is large enough for the Central Limit Theorem to apply. Of course, p is usually unknown, so in practice we check that these conditions hold using p̂.
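A minimal sketch of this check, using the common np ≥ 10 and n(1 – p) ≥ 10 form of the guideline (the function name is our own):

```python
def clt_applies(n, p):
    """Sample-size guideline for proportions: expect at least
    10 successes and 10 failures on average."""
    return n * p >= 10 and n * (1 - p) >= 10

# The sample sizes asked about in the text, with p = 0.68.
for n in (10, 50, 100):
    print(n, clt_applies(n, 0.68))
```

By this rule, with p = 0.68 a sample of 50 would still have been large enough (34 expected successes, 16 expected failures), but a sample of 10 would not (only 6.8 expected successes).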

The population distribution could be some other discrete or continuous distribution. Suppose we take a random sample of size n from one of these. The Central Limit Theorem tells us that if n is large enough, the sampling distribution of x̄ is approximately normal. An arbitrary rule is often given that the Central Limit Theorem applies for n ≥ 30. Although we will tend to use this rule, if the population distribution is highly skewed or if there are extreme outliers, a larger sample would be better.
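To see the n ≥ 30 rule at work on a skewed population, the sketch below (our own illustration) samples from an exponential distribution with mean 1, which is strongly right-skewed, and checks that the simulated sampling distribution of x̄ still centers on μ = 1 with spread near σ/√n = 1/√30 ≈ 0.183:

```python
import random
import statistics

random.seed(2)

# Hypothetical skewed population: exponential with rate 1 (mean 1, sd 1).
def mean_of_sample(n):
    """Sample mean of n draws from the exponential population."""
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

means = [mean_of_sample(30) for _ in range(5_000)]
print(round(statistics.mean(means), 3))   # near mu = 1
print(round(statistics.stdev(means), 3))  # near 1/sqrt(30) ≈ 0.183
```

A histogram of `means` would look roughly bell-shaped despite the skewed population, though for heavier skew than this a sample larger than 30 would be needed.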
