Assumptions and Testing Help

By McGraw-Hill Professional
Updated on Aug 26, 2011


In the "USA hurricane scenario," H0 is almost certain to be rejected after the experiment has taken place. Even though Fig. 6-3 represents the mean path for Emma as determined by a computer program, the probability is low that Emma will actually follow right along this path. If you find this confusing, you can think of it in terms of a double negative. The computers are not saying that Emma is almost sure to follow the path shown in Fig. 6-3. They are telling us that the path shown is the least unlikely individual path according to their particular set of data and parameters. (When we talk about the probability that something will or will not occur, we mean the degree to which the forecasters believe it, based on historical and computer data. By clarifying this, we keep ourselves from committing the probability fallacy.)

Assumptions and Testing

Similarly, in the "Canada ice-cream scenario," the probability is low that the proportion of vanilla lovers among Canadian ice-cream connoisseurs is exactly 25%. Even if we make this claim, we must be willing to accept that the experiment will almost surely produce results that differ a little from 25%, either above or below it. When we assert H0 in this case, we are claiming that every other exact proportion figure is less likely than 25%. (When we talk about the probability that something does or does not reflect reality, we mean the degree to which we believe it, based on experience, intuition, or plain guesswork. Again, we don't want to be guilty of the probability fallacy.)

Whenever someone makes a prediction or claim, someone else will refute it. In part, this is human nature. But logic also plays a role. Computer programs for hurricane forecasting get better with each passing year. Methods of conducting statistical surveys about all subjects, including people's ice-cream flavor preferences, also improve.

If a group of meteorologists comes up with a new computer program that says Hurricane Emma will pass over New York City instead of Wilmington, then the output of that program constitutes evidence against H0 in the "USA hurricane scenario." If someone produces the results of a survey showing that only 17% of British ice-cream lovers prefer plain vanilla flavor and only 12% of USA ice-cream lovers prefer it, this might be considered evidence against H0 in the "Canada ice-cream scenario." The gathering and presentation of data supporting or refuting a null hypothesis, and the conducting of experiments to figure out the true situation, are called statistical testing or hypothesis testing.

Species of Error

There are two major ways to make an error when formulating hypotheses. One is to reject a null hypothesis, and then have the experiment demonstrate that the null hypothesis is true after all. This is sometimes called a type-1 error. The other species of error is the exact converse: to accept the null hypothesis, and then have the experiment show that it is false. This is called a type-2 error.
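A quick simulation can make the two species of error concrete. The sketch below, in Python, assumes the "Canada ice-cream scenario" with a hypothetical sample of 400 respondents and an illustrative decision rule (not a formal test statistic) that rejects H0 whenever the sample proportion strays more than 5 percentage points from 25%. The sample size, the margin, and the alternative proportion of 32% are all made-up numbers for the example.

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

def experiment(n, p):
    """Count 'vanilla' answers among n simulated respondents,
    each choosing vanilla with probability p."""
    return sum(1 for _ in range(n) if random.random() < p)

def reject_h0(successes, n, p0, margin=0.05):
    """Illustrative rule: reject H0 if the sample proportion differs
    from p0 by more than `margin` (a toy rule, not a standard test)."""
    return abs(successes / n - p0) > margin

trials = 10_000

# Type-1 error: H0 (p = 0.25) is actually true, yet we reject it.
false_rejections = sum(
    reject_h0(experiment(400, 0.25), 400, 0.25) for _ in range(trials)
)
type1_rate = false_rejections / trials

# Type-2 error: H0 is false (true p = 0.32), yet we fail to reject it.
missed = sum(
    not reject_h0(experiment(400, 0.32), 400, 0.25) for _ in range(trials)
)
type2_rate = missed / trials

print(f"estimated type-1 error rate: {type1_rate:.3f}")
print(f"estimated type-2 error rate: {type2_rate:.3f}")
```

Notice that the two rates move in opposite directions: widening the margin makes false rejections rarer but makes it easier for a false H0 to slip through.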

How likely is either type of error in the "USA hurricane scenario" or the "Canada ice-cream scenario"? These questions can be difficult to answer. It is hard enough to come up with good null hypotheses in the first place. Nevertheless, the chance for error is a good thing to know, because it tells us how seriously we ought to take a null hypothesis. The level of significance, symbolized by the lowercase Greek letter alpha (α), is the probability of rejecting H0 when H0 is in fact true. This figure can be expressed as a ratio, in which case it is a number between 0 and 1, or as a percentage, in which case it is between 0% and 100%.
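Because α is just a probability under H0, it can be computed exactly for a count-based decision rule. The sketch below assumes a hypothetical version of the ice-cream survey with 100 tasters and an illustrative rule that accepts H0 (p = 0.25) whenever between 17 and 33 of them say vanilla; both the sample size and the acceptance interval are invented for the example.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials,
    each succeeding with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def alpha_for_rule(n, p0, lo, hi):
    """Significance level of the rule 'accept H0 if the observed count
    lies in [lo, hi]': the chance, when H0 (true proportion p0) holds,
    that the count falls outside the interval and H0 is wrongly rejected."""
    return 1 - sum(binom_pmf(k, n, p0) for k in range(lo, hi + 1))

# Hypothetical rule: with n = 100 tasters, reject H0 (p = 0.25)
# unless between 17 and 33 of them choose vanilla.
alpha = alpha_for_rule(100, 0.25, 17, 33)
print(f"alpha = {alpha:.4f}")
```

Multiplying α by 100 gives the same figure as a percentage, matching the two ways of expressing it described above.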

Practice problems for these concepts can be found at:

Hypotheses, Prediction, and Regression Practice Test
