**When to be Skeptical**

In some situations, the availability of statistical data can affect the very event the data is designed to analyze or predict. Cancer and hurricanes don't care about polls, but people do!

If you hear, for example, that there is a "95% chance that Dr. J will beat Mr. H in the next local mayoral race," take it with a dose of skepticism. There are inherent problems with this type of analysis, because people's reactions to the publication of predictive statistics can affect the actual event. If broadcast extensively by the local media, a statement suggesting that Dr. J has the election "already won" could cause overconfident supporters of Dr. J to stay home on election day, while those who favor Mr. H go to the polls in greater numbers than they would have if the data had not been broadcast. Or it might have the opposite effect, causing supporters of Mr. H to stay home because they believe they'll be wasting their time going to an election their candidate is almost certain to lose.

**99.7% Confidence Interval**

The empirical rule also states that in a normal distribution, 99.7% of the elements in a sample have a parameter value that falls within three standard deviations of the mean for that parameter. From this fact we can develop an estimate of the 99.7% *confidence interval*.

In our situation, 99.7% of the bulbs can be expected to draw current that falls in a range equal to the estimate of the mean plus or minus three standard deviations (μ* ± 3σ*). In Fig. 5-10, this range is 2.910 amperes to 4.290 amperes.
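The stated range implies an estimated mean of 3.600 amperes and an estimated standard deviation of 0.230 amperes (these specific values are inferred from the interval above, not stated directly here). A minimal sketch of the calculation, under that assumption:

```python
# 99.7% confidence interval via the empirical rule: mu* +/- 3 sigma*.
# Values inferred from the stated range (2.910 A to 4.290 A); treat them
# as assumptions for illustration.
mu = 3.600     # estimated mean current draw, amperes
sigma = 0.230  # estimated standard deviation, amperes

lower = mu - 3 * sigma
upper = mu + 3 * sigma
print(f"99.7% confidence interval: {lower:.3f} A to {upper:.3f} A")
# -> 99.7% confidence interval: 2.910 A to 4.290 A
```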

**c% Confidence Interval**

We can obtain any confidence interval we want, within reason, from a distribution when we have good estimates of the mean and standard deviation (Fig. 5-11). The width of the confidence interval, specified as a percentage *c*, is related to the number of standard deviations *x* either side of the mean in a normal distribution. This relationship takes the form of a function of *x* versus *c*.

When graphed for values of *c* ranging upward from 50%, the function of *x* versus *c* for a normal distribution looks like the curve shown in Fig. 5-12. The curve "blows up" at *c* = 100%, because no finite number of standard deviations can capture every element of a normal distribution.
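For a normal distribution, the fraction of values within *x* standard deviations of the mean has a standard closed form: *c* = 100 · erf(*x*/√2). A short sketch of this function (the erf relationship is standard; the specific sample points are chosen for illustration):

```python
import math

def coverage_percent(x: float) -> float:
    """Percentage of a normal distribution lying within x standard
    deviations of the mean: c = 100 * erf(x / sqrt(2))."""
    return 100.0 * math.erf(x / math.sqrt(2.0))

# As x grows, c creeps toward (but never reaches) 100% -- the curve
# "blows up" at c = 100%.
for x in (1.0, 2.0, 3.0, 4.0):
    print(f"x = {x}: c = {coverage_percent(x):.4f}%")
```

Note that `coverage_percent(3.0)` returns about 99.73%, matching the empirical rule's three-standard-deviation figure.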

**Inexactness and Impossibility**

The foregoing calculations are never exact. There are two reasons for this.

First, unless the population is small enough that we can test every single element, we can only get estimates of the mean and standard deviation, never the actual values. This problem can be minimized, though never eliminated, by using good experimental practice when we choose our sample frame and/or samples.

Second, when the estimate of the standard deviation σ* is a sizable fraction of the estimate of the mean μ*, we get into trouble if we stray too many multiples of σ* to either side of μ*. This is especially true as the parameter decreases. If we wander too far to the left (below μ*), we get close to zero and might even stumble into negative territory – for example, "predicting" that we could end up with a light bulb that draws less than no current! Because of this, confidence interval calculations work only when the span of values is a small fraction of the estimate of the mean. This is true in the cases represented above and by Figs. 5-8, 5-9, and 5-10. If the distribution were much flatter, or if we wanted a much greater degree of certainty, we would not be able to specify such large confidence intervals without modifying the formulas.
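The negative-territory problem can be illustrated with hypothetical numbers (these values are invented for this sketch, not taken from the figures): if σ* is a large fraction of μ*, the three-standard-deviation interval dips below zero.

```python
# Hypothetical example: sigma* is a large fraction of mu*, so the
# mu* +/- 3 sigma* interval extends below zero -- a physically
# impossible "negative current" for a light bulb.
mu = 0.500     # hypothetical estimated mean current, amperes
sigma = 0.200  # hypothetical estimated standard deviation, amperes

lower = mu - 3 * sigma  # -0.100 A: nonsensical for current draw
upper = mu + 3 * sigma  #  1.100 A
print(f"99.7% interval: {lower:.3f} A to {upper:.3f} A")
if lower < 0:
    print("Lower bound is negative -- the interval is not physically meaningful.")
```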
