Discrete Probability Distributions Study Guide

Updated on Oct 5, 2011

Introduction to Discrete Probability Distributions

Because some probability distributions occur frequently in practice, they have been given specific names. In this lesson, we will discuss three discrete probability distributions: the Bernoulli, the binomial, and the geometric distributions.

Bernoulli Distribution

Suppose we flip a fair coin and observe the upper face. The sample space may be represented as S = {Head, Tail}. Suppose we define X = 1 if a head is on the upper face and X = 0 if a tail is on the upper face. X is an example of a random variable. We have used the term random variable somewhat loosely in earlier lessons. Formally, a random variable X assigns a numerical result to each possible outcome of a random experiment. If X can assume a finite or countably infinite number of values, then X is a discrete random variable; otherwise, X is a continuous random variable. In this lesson, we will consider the distributions of some discrete random variables.

The probability function P(X = x), or p(x), assigns a probability to each possible value of X. Because these are probabilities, 0 ≤ p(x) ≤ 1 for each value x. Further, if we sum over all possible values of X, we must get one (i.e., Σ p(x) = 1, where the sum is taken over all possible values x).

Bernoulli Trial

A Bernoulli trial is any random experiment that has only two possible outcomes.

A discrete probability function is any function that satisfies the following two conditions: (1) the probabilities are between 0 and 1, and (2) the probabilities sum to one. As an illustration, let X = –1, 0, or 1 if the stock market goes down, up, or stays the same, respectively, on a given day. The probabilities associated with the particular outcomes of X change from day to day. However, suppose for a given day, they are as follows:

Table 10.1 Probability of stock market change

Each probability is between 0 and 1, and the sum of the probabilities is one. Thus, this is a valid probability function. The graph of the distribution is given in Figure 10.1.
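These two conditions are easy to check mechanically. Below is a minimal Python sketch; the dictionary values are illustrative stand-ins chosen for this example (the actual figures from Table 10.1 are not reproduced here), and `is_valid_pmf` is a hypothetical helper name.

```python
def is_valid_pmf(pmf):
    """Check the two conditions for a discrete probability function."""
    # Condition 1: every probability lies between 0 and 1.
    in_range = all(0 <= prob <= 1 for prob in pmf.values())
    # Condition 2: the probabilities sum to one (within floating-point tolerance).
    sums_to_one = abs(sum(pmf.values()) - 1.0) < 1e-9
    return in_range and sums_to_one

# Hypothetical stand-in probabilities, keyed by the outcomes in the text:
# X = -1 (market goes down), X = 0 (market goes up), X = 1 (market stays the same)
market = {-1: 0.3, 0: 0.5, 1: 0.2}
```

With these stand-in values, `is_valid_pmf(market)` returns `True`, while a dictionary whose values sum to more than one, or contain a negative entry, fails the check.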

Figure 10.1

For the moment, we are going to focus on studies in which each observation may result in one of two possible outcomes. Flipping the coin is one such study, as each flip will result in either a head or a tail. In an orchard, each piece of fruit either has or has not been damaged by insects. The television set tested at the factory either works or it does not. A person either has a job or does not have a job. In each case, there are only two possible outcomes; one outcome may be labeled a success and the other a failure. For a Bernoulli trial, let X be a random variable defined as follows: X = 1 if the trial results in a success, and X = 0 if the trial results in a failure.

The choice of which outcome is considered a success and which is considered a failure is arbitrary. It is only important to clearly state for which outcome X =1 and for which X = 0. The probability of success is denoted by p where 0 < p < 1. Because there are only two outcomes, the probability of a success and the probability of a failure must sum to 1. Thus, the probability of a failure is 1 – p. We can present the probability distribution of the Bernoulli random variable as shown in Table 10.2.

Table 10.2 Probability distribution of the Bernoulli random variable

x      | 0     | 1
p(x)   | 1 – p | p
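This distribution can be sketched in a few lines of Python. The helper names `bernoulli_pmf` and `bernoulli_trial` are illustrative, and the success probability p = 0.3 is an arbitrary choice for the demonstration.

```python
import random

def bernoulli_pmf(x, p):
    """p(x) for the Bernoulli distribution: p for a success (x = 1), 1 - p for a failure (x = 0)."""
    return p if x == 1 else 1 - p

def bernoulli_trial(p):
    """Simulate one Bernoulli trial: return 1 with probability p, else 0."""
    return 1 if random.random() < p else 0

random.seed(0)  # fixed seed so the sketch is reproducible
p = 0.3         # arbitrary success probability for illustration
draws = [bernoulli_trial(p) for _ in range(100_000)]
success_rate = sum(draws) / len(draws)  # should be close to p
```

Over many simulated trials, the observed proportion of successes settles near p, which is one informal way to see that the probability of a success is indeed p.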
