Probability review
This section is a review of basic probability concepts. All of the review material has been adapted from the CS229 Probability Notes.
1. Elements of probability
In order to define a probability on a set, we need a few basic elements:
Sample space $\Omega$: The set of all the outcomes of a random experiment. Here, each outcome $\omega \in \Omega$ can be thought of as a complete description of the state of the real world at the end of the experiment.
Set of events (or event space) $\mathcal{F}$: A set whose elements $A \in \mathcal{F}$ (called events) are subsets of $\Omega$ (i.e., $A \subseteq \Omega$ is a collection of possible outcomes of an experiment).
Probability measure: A function $P : \mathcal{F} \to \mathbb{R}$ that satisfies the following properties:
- $P(A) \geq 0$, for all $A \in \mathcal{F}$.
- If $A_1, A_2, \ldots$ are disjoint events (i.e., $A_i \cap A_j = \emptyset$ whenever $i \neq j$), then $P(\cup_i A_i) = \sum_i P(A_i)$.
- $P(\Omega) = 1$.
These three properties are called the Axioms of Probability.
Example: Consider the event of tossing a six-sided die. The sample space is $\Omega = \{1, 2, 3, 4, 5, 6\}$. We can define different event spaces on this sample space. For example, the simplest event space is the trivial event space $\mathcal{F} = \{\emptyset, \Omega\}$. Another event space is the set of all subsets of $\Omega$. For the first event space, the unique probability measure satisfying the requirements above is given by $P(\emptyset) = 0$, $P(\Omega) = 1$. For the second event space, one valid probability measure is to assign the probability of each set in the event space to be $\frac{i}{6}$ where $i$ is the number of elements of that set; for example, $P(\{1, 2, 3, 4\}) = \frac{4}{6}$ and $P(\{1, 2, 3\}) = \frac{3}{6}$.
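As a quick sanity check (not part of the original notes), here is a short Python sketch that enumerates the second event space above and verifies the three axioms for the measure $P(A) = |A|/6$:

```python
# Verify the probability axioms for the die example, where the event
# space is all subsets of {1,...,6} and P(A) = |A| / 6.
from itertools import combinations

omega = {1, 2, 3, 4, 5, 6}

def P(A):
    """Probability measure assigning |A|/6 to each event A."""
    return len(A) / 6

# All 2^6 = 64 events in the event space
events = [set(c) for r in range(7) for c in combinations(omega, r)]

# Axiom 1: nonnegativity
assert all(P(A) >= 0 for A in events)

# Axiom 2: additivity over disjoint events (checked pairwise)
for A in events:
    for B in events:
        if A.isdisjoint(B):
            assert abs(P(A | B) - (P(A) + P(B))) < 1e-12

# Axiom 3: the whole sample space has probability 1
assert P(omega) == 1.0

print(P({1, 2, 3, 4}), P({1, 2, 3}))  # 0.666..., 0.5
```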
Properties:
- If $A \subseteq B$, then $P(A) \leq P(B)$.
- $P(A \cap B) \leq \min(P(A), P(B))$.
- (Union Bound) $P(A \cup B) \leq P(A) + P(B)$.
- $P(\Omega \setminus A) = 1 - P(A)$.
- (Law of Total Probability) If $A_1, \ldots, A_k$ are a set of disjoint events such that $\cup_{i=1}^{k} A_i = \Omega$, then $\sum_{i=1}^{k} P(A_i) = 1$.
1.1 Conditional probability
Let $B$ be an event with non-zero probability. The conditional probability of any event $A$ given $B$ is defined as
$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}.$$
In other words, $P(A \mid B)$ is the probability measure of the event $A$ after observing the occurrence of event $B$.
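For intuition, here is a small Monte Carlo sketch (an illustration, not from the notes) that estimates a conditional probability by restricting attention to outcomes where the conditioning event occurs. For a fair six-sided die, $P(\text{even} \mid \text{roll} > 3) = 2/3$:

```python
# Estimate P(A | B) = P(A and B) / P(B) by simulation,
# with A = "roll is even" and B = "roll > 3" for a fair die.
import random

random.seed(0)
n = 100_000
rolls = [random.randint(1, 6) for _ in range(n)]

count_B = sum(1 for r in rolls if r > 3)                   # B occurred
count_AB = sum(1 for r in rolls if r > 3 and r % 2 == 0)   # A and B occurred

print(count_AB / count_B)  # approximately 2/3
```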
1.2 Chain Rule
Let $S_1, \ldots, S_k$ be events, $P(S_i) \neq 0$. Then the chain rule states that:
$$P(S_1 \cap S_2 \cap \cdots \cap S_k) = P(S_1) \, P(S_2 \mid S_1) \, P(S_3 \mid S_2 \cap S_1) \cdots P(S_k \mid S_1 \cap S_2 \cap \cdots \cap S_{k-1}).$$
Note that for $k = 2$ events, this is just the definition of conditional probability:
$$P(S_1 \cap S_2) = P(S_1) \, P(S_2 \mid S_1).$$
In general, the chain rule is derived by applying the definition of conditional probability multiple times, as in the following example:
$$\begin{aligned}
P(S_1 \cap S_2 \cap S_3 \cap S_4) &= P(S_1 \cap S_2 \cap S_3) \, P(S_4 \mid S_1 \cap S_2 \cap S_3) \\
&= P(S_1 \cap S_2) \, P(S_3 \mid S_1 \cap S_2) \, P(S_4 \mid S_1 \cap S_2 \cap S_3) \\
&= P(S_1) \, P(S_2 \mid S_1) \, P(S_3 \mid S_1 \cap S_2) \, P(S_4 \mid S_1 \cap S_2 \cap S_3).
\end{aligned}$$
1.3 Independence
Two events $A$ and $B$ are called independent if and only if $P(A \cap B) = P(A) P(B)$ (or equivalently, $P(A \mid B) = P(A)$). Therefore, independence is equivalent to saying that observing $B$ does not have any effect on the probability of $A$.
2. Random variables
Consider an experiment in which we flip 10 coins, and we want to know the number of coins that come up heads. Here, the elements of the sample space $\Omega$ are length-10 sequences of heads and tails. For example, we might have $\omega_0 = \langle H, H, T, H, T, T, T, H, T, T \rangle \in \Omega$. However, in practice, we usually do not care about the probability of obtaining any particular sequence of heads and tails. Instead we usually care about real-valued functions of outcomes, such as the number of heads that appear among our 10 tosses, or the length of the longest run of tails. These functions, under some technical conditions, are known as random variables.
More formally, a random variable $X$ is a function $X : \Omega \to \mathbb{R}$. Typically, we will denote random variables using upper case letters $X(\omega)$ or more simply $X$ (where the dependence on the random outcome $\omega$ is implied). We will denote the value that a random variable may take on using lower case letters $x$. Thus, $X = x$ means that we are assigning the value $x \in \mathbb{R}$ to the random variable $X$.
Example: In our experiment above, suppose that $X(\omega)$ is the number of heads which occur in the sequence of tosses $\omega$. Given that only 10 coins are tossed, $X(\omega)$ can take only a finite number of values, so it is known as a discrete random variable. Here, the probability of the set associated with a random variable $X$ taking on some specific value $k$ is $P(X = k) := P(\{\omega : X(\omega) = k\})$.
Example: Suppose that $X(\omega)$ is a random variable indicating the amount of time it takes for a radioactive particle to decay. In this case, $X(\omega)$ takes on an infinite number of possible values, so it is called a continuous random variable. We denote the probability that $X$ takes on a value between two real constants $a$ and $b$ (where $a < b$) as $P(a \leq X \leq b) := P(\{\omega : a \leq X(\omega) \leq b\})$.
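The following Python sketch (illustrative; it assumes a fair coin) makes the "random variable as a function of the outcome" viewpoint concrete for the ten-flip experiment:

```python
# A random variable is a function X(omega) of the outcome omega.
# Here omega is a length-10 tuple of coin flips and X counts heads.
import itertools

def X(omega):
    """Number of heads in the outcome sequence."""
    return sum(1 for flip in omega if flip == "H")

# Enumerate the full sample space of 2^10 equally likely outcomes
sample_space = list(itertools.product("HT", repeat=10))

# P(X = k) = |{omega : X(omega) = k}| / |Omega| for a fair coin
k = 4
p = sum(1 for omega in sample_space if X(omega) == k) / len(sample_space)
print(p)  # C(10, 4) / 2^10 = 210 / 1024 ≈ 0.205
```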
2.1 Cumulative distribution functions
In order to specify the probability measures used when dealing with random variables, it is often convenient to specify alternative functions (CDFs, PDFs, and PMFs) from which the probability measure governing an experiment immediately follows. In this section and the next two sections, we describe each of these types of functions in turn. A cumulative distribution function (CDF) is a function $F_X : \mathbb{R} \to [0, 1]$ which specifies a probability measure as
$$F_X(x) = P(X \leq x).$$
By using this function one can calculate the probability of any event.
Properties:
- $0 \leq F_X(x) \leq 1$.
- $\lim_{x \to -\infty} F_X(x) = 0$.
- $\lim_{x \to \infty} F_X(x) = 1$.
- $x \leq y \implies F_X(x) \leq F_X(y)$.
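As a hedged illustration, the following sketch uses `scipy.stats` to check these CDF properties numerically for a Binomial(10, 0.5) random variable; any distribution would do:

```python
# Numerical check of the CDF properties for a Binomial(10, 0.5) variable.
from scipy.stats import binom

F = lambda x: binom.cdf(x, n=10, p=0.5)

assert 0.0 <= F(3) <= 1.0    # values lie in [0, 1]
assert F(-1e9) == 0.0        # limit as x -> -infinity
assert F(1e9) == 1.0         # limit as x -> +infinity
assert F(3) <= F(7)          # monotonically nondecreasing

# The CDF gives interval probabilities: P(3 < X <= 7) = F(7) - F(3)
print(F(7) - F(3))
```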
2.2 Probability mass functions
When a random variable $X$ takes on a finite set of possible values (i.e., $X$ is a discrete random variable), a simpler way to represent the probability measure associated with a random variable is to directly specify the probability of each value that the random variable can assume. In particular, a probability mass function (PMF) is a function $p_X : \Omega \to \mathbb{R}$ such that $p_X(x) = P(X = x)$.
In the case of discrete random variables, we use the notation $Val(X)$ for the set of possible values that the random variable $X$ may assume. For example, if $X(\omega)$ is a random variable indicating the number of heads out of ten tosses of a coin, then $Val(X) = \{0, 1, 2, \ldots, 10\}$.
Properties:
- $0 \leq p_X(x) \leq 1$.
- $\sum_{x \in Val(X)} p_X(x) = 1$.
- $\sum_{x \in A} p_X(x) = P(X \in A)$.
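A minimal sketch (assuming the ten-flips-of-a-fair-coin example above) that builds the PMF explicitly and checks these properties:

```python
# PMF of X = number of heads in 10 fair coin flips (Binomial(10, 0.5)).
from math import comb

p_X = {x: comb(10, x) * 0.5**10 for x in range(11)}  # Val(X) = {0, ..., 10}

assert all(0.0 <= p <= 1.0 for p in p_X.values())    # 0 <= p_X(x) <= 1
assert abs(sum(p_X.values()) - 1.0) < 1e-12          # sums to 1 over Val(X)

# P(X in A) = sum of p_X(x) over x in A
A = {0, 1, 2}
print(sum(p_X[x] for x in A))  # P(at most two heads)
```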
2.3 Probability density functions
For some continuous random variables, the cumulative distribution function $F_X(x)$ is differentiable everywhere. In these cases, we define the Probability Density Function or PDF as the derivative of the CDF, i.e.,
$$f_X(x) = \frac{dF_X(x)}{dx}.$$
Note here, that the PDF for a continuous random variable may not always exist (i.e., if $F_X(x)$ is not differentiable everywhere).
According to the properties of differentiation, for very small $\Delta x$,
$$P(x \leq X \leq x + \Delta x) \approx f_X(x) \, \Delta x.$$
Both CDFs and PDFs (when they exist!) can be used for calculating the probabilities of different events. But it should be emphasized that the value of the PDF at any given point $x$ is not the probability of that event, i.e., $f_X(x) \neq P(X = x)$. For example, $f_X(x)$ can take on values larger than one (but the integral of $f_X(x)$ over any subset of $\mathbb{R}$ will be at most one).
Properties:
- $f_X(x) \geq 0$.
- $\int_{-\infty}^{\infty} f_X(x) \, dx = 1$.
- $\int_{x \in A} f_X(x) \, dx = P(X \in A)$.
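Here is a short numerical sketch, assuming an Exponential(1) random variable, of the relationship between the PDF and the CDF and of the property that the density integrates to one:

```python
# The PDF as the derivative of the CDF, for X ~ Exponential(1).
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x)         # PDF on [0, inf)
F = lambda x: 1.0 - np.exp(-x)   # CDF

# f_X(x) * dx approximates P(x <= X <= x + dx) for small dx
x, dx = 1.0, 1e-6
print((F(x + dx) - F(x)) / dx, f(x))   # both ≈ 0.3679

total, _ = quad(f, 0, np.inf)          # integral of the density
print(total)                           # ≈ 1.0
```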
2.4 Expectation
Suppose that $X$ is a discrete random variable with PMF $p_X(x)$ and $g : \mathbb{R} \to \mathbb{R}$ is an arbitrary function. In this case, $g(X)$ can be considered a random variable, and we define the expectation or expected value of $g(X)$ as
$$E[g(X)] = \sum_{x \in Val(X)} g(x) \, p_X(x).$$
If $X$ is a continuous random variable with PDF $f_X(x)$, then the expected value of $g(X)$ is defined as
$$E[g(X)] = \int_{-\infty}^{\infty} g(x) \, f_X(x) \, dx.$$
Intuitively, the expectation of $g(X)$ can be thought of as a “weighted average” of the values that $g(x)$ can take on for different values of $x$, where the weights are given by $p_X(x)$ or $f_X(x)$. As a special case of the above, note that the expectation, $E[X]$, of a random variable itself is found by letting $g(x) = x$; this is also known as the mean of the random variable $X$. A short numerical sketch follows the list of properties below.
Properties:
- $E[a] = a$ for any constant $a \in \mathbb{R}$.
- $E[a f(X)] = a E[f(X)]$ for any constant $a \in \mathbb{R}$.
- (Linearity of Expectation) $E[f(X) + g(X)] = E[f(X)] + E[g(X)]$.
- For a discrete random variable $X$, $E[\mathbf{1}\{X = k\}] = P(X = k)$.
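A minimal sketch of the expectation of $g(X)$ for a discrete random variable, here a fair six-sided die (an illustration, not from the notes):

```python
# E[g(X)] = sum over x of g(x) * p_X(x), for X a fair six-sided die.
p_X = {x: 1 / 6 for x in range(1, 7)}

def E(g, pmf):
    """Expected value of g(X) for a discrete random variable with PMF pmf."""
    return sum(g(x) * p for x, p in pmf.items())

print(E(lambda x: x, p_X))      # g(x) = x gives the mean: 3.5

# Linearity of expectation: E[f(X) + g(X)] = E[f(X)] + E[g(X)]
lhs = E(lambda x: x**2 + 2 * x, p_X)
rhs = E(lambda x: x**2, p_X) + E(lambda x: 2 * x, p_X)
print(abs(lhs - rhs) < 1e-12)   # True
```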
2.5 Variance
The variance of a random variable $X$ is a measure of how concentrated the distribution of a random variable is around its mean. Formally, the variance of a random variable $X$ is defined as $Var[X] = E[(X - E[X])^2]$.
Using the properties in the previous section, we can derive an alternate expression for the variance:
$$E[(X - E[X])^2] = E[X^2 - 2 E[X] X + E[X]^2] = E[X^2] - 2 E[X] E[X] + E[X]^2 = E[X^2] - E[X]^2,$$
where the second equality follows from linearity of expectations and the fact that $E[X]$ is actually a constant with respect to the outer expectation. Therefore, $Var[X] = E[X^2] - E[X]^2$.
Properties:
- $Var[a] = 0$ for any constant $a \in \mathbb{R}$.
- $Var[a f(X)] = a^2 \, Var[f(X)]$ for any constant $a \in \mathbb{R}$.
Example: Calculate the mean and the variance of the uniform random variable $X$ with PDF $f_X(x) = 1$, $\forall x \in [0, 1]$, and $0$ elsewhere.
$$E[X] = \int_{0}^{1} x \, dx = \frac{1}{2}, \qquad E[X^2] = \int_{0}^{1} x^2 \, dx = \frac{1}{3}, \qquad Var[X] = E[X^2] - E[X]^2 = \frac{1}{3} - \frac{1}{4} = \frac{1}{12}.$$
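A quick Monte Carlo cross-check of this example (a sketch using NumPy):

```python
# Check E[X] = 1/2 and Var[X] = 1/12 for X ~ Uniform[0, 1] by sampling.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=1_000_000)

print(samples.mean())   # ≈ 0.5
print(samples.var())    # ≈ 0.08333 = 1/12
```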
Example: Suppose that $g(x) = \mathbf{1}\{x \in A\}$ for some subset $A \subseteq \Omega$. What is $E[g(X)]$?
Discrete case:
$$E[g(X)] = \sum_{x \in Val(X)} \mathbf{1}\{x \in A\} \, p_X(x) = \sum_{x \in A} p_X(x) = P(X \in A).$$
Continuous case:
$$E[g(X)] = \int_{-\infty}^{\infty} \mathbf{1}\{x \in A\} \, f_X(x) \, dx = \int_{x \in A} f_X(x) \, dx = P(X \in A).$$
2.6 Some common random variables
Discrete random variables
- $X \sim \text{Bernoulli}(p)$ (where $0 \leq p \leq 1$): one if a coin with heads probability $p$ comes up heads, zero otherwise.
- $X \sim \text{Binomial}(n, p)$ (where $0 \leq p \leq 1$): the number of heads in $n$ independent flips of a coin with heads probability $p$.
- $X \sim \text{Geometric}(p)$ (where $p > 0$): the number of flips of a coin with heads probability $p$ until the first heads.
- $X \sim \text{Poisson}(\lambda)$ (where $\lambda > 0$): a probability distribution over the nonnegative integers used for modeling the frequency of rare events.
Continuous random variables
- $X \sim \text{Uniform}(a, b)$ (where $a < b$): equal probability density to every value between $a$ and $b$ on the real line.
- $X \sim \text{Exponential}(\lambda)$ (where $\lambda > 0$): decaying probability density over the nonnegative reals.
- $X \sim \text{Normal}(\mu, \sigma^2)$: also known as the Gaussian distribution.
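For reference, here is how these distributions can be instantiated with `scipy.stats` (a sketch; note that scipy's parameter conventions differ slightly, e.g. the Exponential is parameterized by `scale` $= 1/\lambda$):

```python
# The common distributions above as scipy.stats frozen distributions.
from scipy import stats

bernoulli = stats.bernoulli(p=0.3)
binomial  = stats.binom(n=10, p=0.5)
geometric = stats.geom(p=0.2)               # flips until first heads
poisson   = stats.poisson(mu=4.0)           # scipy calls lambda "mu"
uniform   = stats.uniform(loc=0, scale=1)   # Uniform(0, 1)
expon     = stats.expon(scale=1 / 2.0)      # Exponential(lambda = 2)
normal    = stats.norm(loc=0, scale=1)      # Normal(mu = 0, sigma^2 = 1)

print(binomial.pmf(4))    # P(X = 4) for Binomial(10, 0.5), ≈ 0.205
print(normal.cdf(1.96))   # ≈ 0.975
```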
3. Two random variables
Thus far, we have considered single random variables. In many situations, however, there may be more than one quantity that we are interested in knowing during a random experiment. For instance, in an experiment where we flip a coin ten times, we may care about both the number of heads that come up as well as the length of the longest run of consecutive heads. In this section, we consider the setting of two random variables.
3.1 Joint and marginal distributions
Suppose that we have two random variables $X$ and $Y$. One way to work with these two random variables is to consider each of them separately. If we do that, we will only need $F_X(x)$ and $F_Y(y)$. But if we want to know about the values that $X$ and $Y$ assume simultaneously during outcomes of a random experiment, we require a more complicated structure known as the joint cumulative distribution function of $X$ and $Y$, defined by
$$F_{XY}(x, y) = P(X \leq x, Y \leq y).$$
It can be shown that by knowing the joint cumulative distribution function, the probability of any event involving $X$ and $Y$ can be calculated.
The joint CDF $F_{XY}(x, y)$ and the cumulative distribution functions $F_X(x)$ and $F_Y(y)$ of each variable separately are related by
$$F_X(x) = \lim_{y \to \infty} F_{XY}(x, y), \qquad F_Y(y) = \lim_{x \to \infty} F_{XY}(x, y).$$
Here, we call $F_X(x)$ and $F_Y(y)$ the marginal cumulative distribution functions of $F_{XY}(x, y)$.
Properties:
- $0 \leq F_{XY}(x, y) \leq 1$.
- $\lim_{x, y \to \infty} F_{XY}(x, y) = 1$.
- $\lim_{x, y \to -\infty} F_{XY}(x, y) = 0$.
- $F_X(x) = \lim_{y \to \infty} F_{XY}(x, y)$.
3.2 Joint and marginal probability mass functions
If $X$ and $Y$ are discrete random variables, then the joint probability mass function $p_{XY} : \mathbb{R} \times \mathbb{R} \to [0, 1]$ is defined by
$$p_{XY}(x, y) = P(X = x, Y = y).$$
Here, $0 \leq p_{XY}(x, y) \leq 1$ for all $x$, $y$, and $\sum_{x \in Val(X)} \sum_{y \in Val(Y)} p_{XY}(x, y) = 1$.
How does the joint PMF over two variables relate to the probability mass function for each variable separately? It turns out that
$$p_X(x) = \sum_{y} p_{XY}(x, y),$$
and similarly for $p_Y(y)$. In this case, we refer to $p_X(x)$ as the marginal probability mass function of $X$. In statistics, the process of forming the marginal distribution with respect to one variable by summing out the other variable is often known as “marginalization.”
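A tiny sketch of marginalization over a made-up joint PMF table (the numbers are hypothetical, chosen only for illustration):

```python
# Marginalize a joint PMF: p_X(x) = sum over y of p_XY(x, y).
joint = {
    (0, 0): 0.1, (0, 1): 0.2,
    (1, 0): 0.3, (1, 1): 0.4,
}

p_X = {}
for (x, y), p in joint.items():
    p_X[x] = p_X.get(x, 0.0) + p   # sum out y

print(p_X)  # {0: 0.3, 1: 0.7}
```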
3.3 Joint and marginal probability density functions
Let $X$ and $Y$ be two continuous random variables with joint distribution function $F_{XY}$. In the case that $F_{XY}(x, y)$ is everywhere differentiable in both $x$ and $y$, we can define the joint probability density function,
$$f_{XY}(x, y) = \frac{\partial^2 F_{XY}(x, y)}{\partial x \, \partial y}.$$
Like in the single-dimensional case, $f_{XY}(x, y) \neq P(X = x, Y = y)$, but rather
$$\iint_{(x, y) \in A} f_{XY}(x, y) \, dx \, dy = P((X, Y) \in A).$$
Note that the values of the probability density function are always nonnegative, but they may be greater than 1. Nonetheless, it must be the case that $\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{XY}(x, y) \, dx \, dy = 1$.
Analogously to the discrete case, we define
$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x, y) \, dy$$
as the marginal probability density function (or marginal density) of $X$, and similarly for $f_Y(y)$.
3.4 Conditional distributions
Conditional distributions seek to answer the question, what is the probability distribution over $Y$, when we know that $X$ must take on a certain value $x$? In the discrete case, the conditional probability mass function of $Y$ given $X$ is simply
$$p_{Y \mid X}(y \mid x) = \frac{p_{XY}(x, y)}{p_X(x)},$$
assuming that $p_X(x) \neq 0$.
In the continuous case, the situation is technically a little more complicated because the probability that a continuous random variable $X$ takes on a specific value $x$ is equal to zero. Ignoring this technical point, we simply define, by analogy to the discrete case, the conditional probability density of $Y$ given $X = x$ to be
$$f_{Y \mid X}(y \mid x) = \frac{f_{XY}(x, y)}{f_X(x)},$$
provided $f_X(x) \neq 0$.
3.5 Chain rule
The chain rule we derived earlier for events can be applied to random variables as follows:
$$p_{X_1, \ldots, X_n}(x_1, \ldots, x_n) = p_{X_1}(x_1) \, p_{X_2 \mid X_1}(x_2 \mid x_1) \cdots p_{X_n \mid X_1, \ldots, X_{n-1}}(x_n \mid x_1, \ldots, x_{n-1}).$$
3.6 Bayes’s rule
A useful formula that often arises when trying to derive expressions for conditional probability is Bayes’s rule.
In the case of discrete random variables $X$ and $Y$,
$$P_{Y \mid X}(y \mid x) = \frac{P_{XY}(x, y)}{P_X(x)} = \frac{P_{X \mid Y}(x \mid y) \, P_Y(y)}{\sum_{y' \in Val(Y)} P_{X \mid Y}(x \mid y') \, P_Y(y')}.$$
If the random variables $X$ and $Y$ are continuous,
$$f_{Y \mid X}(y \mid x) = \frac{f_{XY}(x, y)}{f_X(x)} = \frac{f_{X \mid Y}(x \mid y) \, f_Y(y)}{\int_{-\infty}^{\infty} f_{X \mid Y}(x \mid y') \, f_Y(y') \, dy'}.$$
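A small worked sketch of the discrete form of Bayes's rule, with hypothetical numbers for a diagnostic-test scenario (prior $P(Y=1) = 0.01$, sensitivity $P(X=1 \mid Y=1) = 0.95$, false-positive rate $P(X=1 \mid Y=0) = 0.05$):

```python
# Bayes's rule for discrete variables: P(Y=1 | X=1).
p_Y = {1: 0.01, 0: 0.99}                      # prior over Y
p_X_given_Y = {(1, 1): 0.95, (1, 0): 0.05}    # P(X=1 | Y=y)

# P(Y=1 | X=1) = P(X=1|Y=1) P(Y=1) / sum over y' of P(X=1|Y=y') P(Y=y')
numerator = p_X_given_Y[(1, 1)] * p_Y[1]
denominator = sum(p_X_given_Y[(1, y)] * p_Y[y] for y in p_Y)
print(numerator / denominator)  # ≈ 0.161
```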
3.7 Independence
Two random variables $X$ and $Y$ are independent if $F_{XY}(x, y) = F_X(x) F_Y(y)$ for all values of $x$ and $y$. Equivalently,
- For discrete random variables, $p_{XY}(x, y) = p_X(x) \, p_Y(y)$ for all $x \in Val(X)$, $y \in Val(Y)$.
- For discrete random variables, $p_{Y \mid X}(y \mid x) = p_Y(y)$ whenever $p_X(x) \neq 0$, for all $y \in Val(Y)$.
- For continuous random variables, $f_{XY}(x, y) = f_X(x) \, f_Y(y)$ for all $x, y \in \mathbb{R}$.
- For continuous random variables, $f_{Y \mid X}(y \mid x) = f_Y(y)$ whenever $f_X(x) \neq 0$, for all $y \in \mathbb{R}$.
Informally, two random variables $X$ and $Y$ are independent if “knowing” the value of one variable will never have any effect on the conditional probability distribution of the other variable, that is, you know all the information about the pair $(X, Y)$ by just knowing $f_X(x)$ and $f_Y(y)$. The following lemma formalizes this observation:
Lemma 3.1. If $X$ and $Y$ are independent, then for any subsets $A, B \subseteq \mathbb{R}$, we have
$$P(X \in A, Y \in B) = P(X \in A) \, P(Y \in B).$$
By using the above lemma one can prove that if $X$ is independent of $Y$ then any function of $X$ is independent of any function of $Y$.
3.8 Expectation and covariance
Suppose that we have two discrete random variables $X, Y$ and $g : \mathbb{R}^2 \to \mathbb{R}$ is a function of these two random variables. Then the expected value of $g$ is defined in the following way,
$$E[g(X, Y)] = \sum_{x \in Val(X)} \sum_{y \in Val(Y)} g(x, y) \, p_{XY}(x, y).$$
For continuous random variables $X, Y$, the analogous expression is
$$E[g(X, Y)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x, y) \, f_{XY}(x, y) \, dx \, dy.$$
We can use the concept of expectation to study the relationship of two random variables with each other. In particular, the covariance of two random variables $X$ and $Y$ is defined as
$$Cov[X, Y] = E[(X - E[X])(Y - E[Y])].$$
Using an argument similar to that for variance, we can rewrite this as,
$$\begin{aligned}
Cov[X, Y] &= E[(X - E[X])(Y - E[Y])] \\
&= E[XY - X E[Y] - Y E[X] + E[X] E[Y]] \\
&= E[XY] - E[X] E[Y] - E[Y] E[X] + E[X] E[Y] \\
&= E[XY] - E[X] E[Y].
\end{aligned}$$
Here, the key step in showing the equality of the two forms of covariance is in the third equality, where we use the fact that $E[X]$ and $E[Y]$ are actually constants which can be pulled out of the expectation. When $Cov[X, Y] = 0$, we say that $X$ and $Y$ are uncorrelated. A numerical check follows the properties below.
Properties:
- (Linearity of expectation) $E[f(X, Y) + g(X, Y)] = E[f(X, Y)] + E[g(X, Y)]$.
- $Var[X + Y] = Var[X] + Var[Y] + 2 \, Cov[X, Y]$.
- If $X$ and $Y$ are independent, then $Cov[X, Y] = 0$.
- If $X$ and $Y$ are independent, then $E[f(X) g(Y)] = E[f(X)] \, E[g(Y)]$.
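The numerical check referenced above: a sketch that estimates the covariance of two correlated Gaussian variables and verifies the variance-sum identity (the construction $Y = 0.5X + Z$ is hypothetical, chosen so that $Cov[X, Y] = 0.5$):

```python
# Estimate Cov[X, Y] = E[XY] - E[X]E[Y] and check the variance-sum identity.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
X = rng.normal(size=n)
Y = 0.5 * X + rng.normal(size=n)   # correlated with X by construction

cov = np.mean(X * Y) - X.mean() * Y.mean()
print(cov)                          # ≈ 0.5

# Var[X + Y] = Var[X] + Var[Y] + 2 Cov[X, Y]
print((X + Y).var(), X.var() + Y.var() + 2 * cov)  # approximately equal
```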