Joint probability distribution

Given random variables X, Y, … that are defined on a probability space, the joint probability distribution for X, Y, … is a probability distribution that gives the probability that each of X, Y, … falls in any particular range or discrete set of values specified for that variable. In the case of only two random variables, this is called a bivariate distribution, but the concept generalizes to any number of random variables, giving a multivariate distribution.

The joint probability distribution can be expressed either in terms of a joint cumulative distribution function or in terms of a joint probability density function in the case of continuous variables or joint probability mass function in the case of discrete variables. These in turn can be used to find two other types of distributions: the marginal distribution giving the probabilities for any one of the variables with no reference to any specific ranges of values for the other variables, and the conditional probability distribution giving the probabilities for any subset of the variables conditional on particular values of the remaining variables.

                                     

1. Examples


1.1. Coin flips

Consider the flip of two fair coins; let A and B be discrete random variables associated with the outcomes of the first and second coin flips respectively. Each coin flip is a Bernoulli trial and has a Bernoulli distribution. If a coin displays "heads" then the associated random variable takes the value 1, and it takes the value 0 otherwise. The probability of each of these outcomes is 1/2, so the marginal (unconditional) mass functions are

P(A) = 1/2 for A ∈ {0, 1};    P(B) = 1/2 for B ∈ {0, 1}.

The joint probability mass function of A and B defines probabilities for each pair of outcomes. All possible outcomes are

(A = 0, B = 0), (A = 0, B = 1), (A = 1, B = 0), (A = 1, B = 1).

Since each outcome is equally likely, the joint probability mass function becomes

P(A, B) = 1/4 for A, B ∈ {0, 1}.

Since the coin flips are independent, the joint probability mass function is the product of the marginals:

P(A, B) = P(A) P(B) for A, B ∈ {0, 1}.
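To make the factorization concrete, here is a minimal Python sketch (not part of the original article; variable names are chosen for illustration) that tabulates this joint mass function and checks that it equals the product of the marginals:

from itertools import product

# Joint pmf of two fair, independent coin flips: P(A = a, B = b) = 1/4.
joint = {(a, b): 0.25 for a, b in product([0, 1], repeat=2)}

# Marginals: sum the joint pmf over the other variable.
p_A = {a: sum(joint[(a, b)] for b in [0, 1]) for a in [0, 1]}
p_B = {b: sum(joint[(a, b)] for a in [0, 1]) for b in [0, 1]}

# Independence: the joint pmf equals the product of the marginals.
assert all(abs(joint[(a, b)] - p_A[a] * p_B[b]) < 1e-12 for (a, b) in joint)
print(p_A, p_B)  # {0: 0.5, 1: 0.5} {0: 0.5, 1: 0.5}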
                                     

1.2. Rolling a die

Consider the roll of a fair die and let A = 1 if the number is even (i.e. 2, 4, or 6) and A = 0 otherwise. Furthermore, let B = 1 if the number is prime (i.e. 2, 3, or 5) and B = 0 otherwise.

Then, the joint distribution of A and B, expressed as a probability mass function, is

P(A = 0, B = 0) = P({1}) = 1/6,    P(A = 1, B = 0) = P({4, 6}) = 2/6,
P(A = 0, B = 1) = P({3, 5}) = 2/6,    P(A = 1, B = 1) = P({2}) = 1/6.

These probabilities necessarily sum to 1, since the probability of some combination of A and B occurring is 1.
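The same bookkeeping can be done by enumeration; the following illustrative Python sketch (names assumed) recovers the joint mass function above and shows that, unlike the coin flips, A and B here are not independent:

from collections import Counter
from fractions import Fraction

outcomes = range(1, 7)                    # faces of a fair die
A = lambda n: 1 if n % 2 == 0 else 0      # indicator: the roll is even
B = lambda n: 1 if n in (2, 3, 5) else 0  # indicator: the roll is prime

# Each face has probability 1/6, so count the faces mapping to each (A, B) pair.
counts = Counter((A(n), B(n)) for n in outcomes)
joint = {ab: Fraction(c, 6) for ab, c in counts.items()}
print(sorted(joint.items()))  # (0,0) and (1,1) have probability 1/6; (0,1) and (1,0) have 1/3

assert sum(joint.values()) == 1                        # probabilities sum to 1
print(joint[(1, 1)], Fraction(1, 2) * Fraction(1, 2))  # 1/6 vs 1/4: not independent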

                                     

1.3. Real-life example

Consider a production facility that fills plastic bottles with laundry detergent. The weight of each bottle (Y) and the volume of laundry detergent it contains (X) are measured.

                                     

2. Marginal probability distribution

If more than one random variable is defined in a random experiment, it is important to distinguish between the joint probability distribution of X and Y and the probability distribution of each variable individually. The individual probability distribution of a random variable is referred to as its marginal probability distribution. In general, the marginal probability distribution of X can be determined from the joint probability distribution of X and other random variables.

If the joint probability density function of random variables X and Y is f_{X,Y}(x, y), the marginal probability density functions of X and Y are

f_X(x) = ∫ f_{X,Y}(x, y) dy,    f_Y(y) = ∫ f_{X,Y}(x, y) dx,

where the first integral is over all points in the range of (X, Y) for which X = x, and the second integral is over all points in the range of (X, Y) for which Y = y.
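As a rough numerical illustration (not from the article), the sketch below marginalizes an assumed joint density of two independent standard normals by approximating the integral over y on a grid; the density and grid are assumptions of this example:

import numpy as np

# Assumed joint density: two independent standard normals,
# f_{X,Y}(x, y) = phi(x) * phi(y).
phi = lambda t: np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)
x = np.linspace(-6.0, 6.0, 601)
y = np.linspace(-6.0, 6.0, 601)
X, Y = np.meshgrid(x, y, indexing="ij")
f_xy = phi(X) * phi(Y)

# Marginal of X: approximate the integral over y by a Riemann sum.
dy = y[1] - y[0]
f_x = f_xy.sum(axis=1) * dy
print(np.max(np.abs(f_x - phi(x))))  # tiny: matches the exact marginal phi(x)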



                                     

3. Joint cumulative distribution function

For a pair of random variables X, Y, the joint cumulative distribution function (CDF) F_{X,Y} is given by

F_{X,Y}(x, y) = P(X ≤ x, Y ≤ y),     (Eq. 1)

where the right-hand side represents the probability that the random variable X takes on a value less than or equal to x and that Y takes on a value less than or equal to y.

For N random variables X_1, …, X_N, the joint CDF F_{X_1,…,X_N} is given by

F_{X_1,…,X_N}(x_1, …, x_N) = P(X_1 ≤ x_1, …, X_N ≤ x_N).

Interpreting the N random variables as a random vector X = (X_1, …, X_N)^T yields a shorter notation:

F_X(x) = P(X_1 ≤ x_1, …, X_N ≤ x_N).
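For illustration only, a joint CDF can also be estimated from data by counting sample points in the "lower-left" region; the sample and names below are assumptions of this sketch:

import numpy as np

rng = np.random.default_rng(0)
# Assumed example data: 100,000 draws of (X, Y) with independent standard
# normal components; any sample of pairs could be used the same way.
sample = rng.standard_normal((100_000, 2))

def empirical_joint_cdf(sample, x, y):
    """Fraction of sample points with X <= x and Y <= y."""
    return np.mean((sample[:, 0] <= x) & (sample[:, 1] <= y))

# For these independent components, F_{X,Y}(0, 0) = F_X(0) * F_Y(0) = 0.25.
print(empirical_joint_cdf(sample, 0.0, 0.0))  # close to 0.25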
                                     

4.1. Joint density function or mass function: Discrete case

The joint probability mass function of two discrete random variables X, Y is

p_{X,Y}(x, y) = P(X = x and Y = y),

or written in terms of conditional distributions

p_{X,Y}(x, y) = P(Y = y ∣ X = x) · P(X = x) = P(X = x ∣ Y = y) · P(Y = y),

where P(Y = y ∣ X = x) is the probability of Y = y given that X = x.

The generalization of the preceding two-variable case is the joint probability distribution of n discrete random variables X_1, X_2, …, X_n, which is

p_{X_1,…,X_n}(x_1, …, x_n) = P(X_1 = x_1, …, X_n = x_n),

or equivalently

p_{X_1,…,X_n}(x_1, …, x_n) = P(X_1 = x_1) · P(X_2 = x_2 ∣ X_1 = x_1) · P(X_3 = x_3 ∣ X_1 = x_1, X_2 = x_2) · … · P(X_n = x_n ∣ X_1 = x_1, X_2 = x_2, …, X_{n−1} = x_{n−1}).

This identity is known as the chain rule of probability.

Since these are probabilities, we have in the two-variable case

∑_i ∑_j P(X = x_i and Y = y_j) = 1,

which generalizes for n discrete random variables X_1, X_2, …, X_n to

∑_i ∑_j … ∑_k P(X_1 = x_{1i}, X_2 = x_{2j}, …, X_n = x_{nk}) = 1.
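A short numerical sketch (an illustration, with an assumed 2×3×4 joint table) that checks both the chain-rule factorization and the normalization identity above:

import numpy as np

rng = np.random.default_rng(1)
# Assumed example: a random joint pmf p(x1, x2, x3) on a 2 x 3 x 4 grid.
p = rng.random((2, 3, 4))
p /= p.sum()                                  # normalize: entries sum to 1

# Chain rule: p(x1, x2, x3) = P(X1) * P(X2 | X1) * P(X3 | X1, X2).
p1 = p.sum(axis=(1, 2))                       # P(X1 = x1)
p12 = p.sum(axis=2)                           # P(X1 = x1, X2 = x2)
p2_given_1 = p12 / p1[:, None]                # P(X2 = x2 | X1 = x1)
p3_given_12 = p / p12[:, :, None]             # P(X3 = x3 | X1 = x1, X2 = x2)
reconstructed = p1[:, None, None] * p2_given_1[:, :, None] * p3_given_12

assert np.allclose(reconstructed, p)          # the chain-rule product recovers p
assert np.isclose(p.sum(), 1.0)               # the probabilities sum to 1
print("chain rule and normalization verified")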


                                     

4.2. Joint density function or mass function: Continuous case

The joint probability density function f_{X,Y}(x, y) for two continuous random variables is defined as the derivative of the joint cumulative distribution function (see Eq. 1):

f_{X,Y}(x, y) = ∂²F_{X,Y}(x, y) / (∂x ∂y).

This is equal to:

f_{X,Y}(x, y) = f_{Y∣X}(y ∣ x) f_X(x) = f_{X∣Y}(x ∣ y) f_Y(y),

where f_{Y∣X}(y ∣ x) and f_{X∣Y}(x ∣ y) are the conditional distributions of Y given X = x and of X given Y = y respectively, and f_X(x) and f_Y(y) are the marginal distributions for X and Y respectively.

The definition extends naturally to more than two random variables:

f_{X_1,…,X_n}(x_1, …, x_n) = ∂ⁿF_{X_1,…,X_n}(x_1, …, x_n) / (∂x_1 … ∂x_n).

Again, since these are probability distributions, one has

∫_x ∫_y f_{X,Y}(x, y) dy dx = 1

respectively

∫_{x_1} … ∫_{x_n} f_{X_1,…,X_n}(x_1, …, x_n) dx_1 … dx_n = 1.
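As an illustrative check of this normalization (the bivariate normal density with correlation 0.5 is an assumption of this sketch, not taken from the article), the double integral can be approximated on a grid:

import numpy as np

# Assumed joint density: bivariate normal with standard marginals and
# correlation rho = 0.5 (an assumption of this sketch).
rho = 0.5

def f_xy(x, y):
    z = (x ** 2 - 2 * rho * x * y + y ** 2) / (1 - rho ** 2)
    return np.exp(-z / 2) / (2 * np.pi * np.sqrt(1 - rho ** 2))

x = np.linspace(-8.0, 8.0, 801)
y = np.linspace(-8.0, 8.0, 801)
dx, dy = x[1] - x[0], y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing="ij")

print(f_xy(X, Y).sum() * dx * dy)  # ~1.0: the double integral equals one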
                                     

4.3. Joint density function or mass function: Mixed case

The "mixed joint density" may be defined where one or more random variables are continuous and the other random variables are discrete. With one variable of each type we have

f_{X,Y}(x, y) = f_{X∣Y}(x ∣ y) P(Y = y) = P(Y = y ∣ X = x) f_X(x).

One situation in which one may wish to find the cumulative distribution of a random variable that is continuous together with another that is discrete arises when using logistic regression to predict the probability of a binary outcome Y conditional on the value of a continuously distributed variable X. One must use the "mixed" joint density when finding the cumulative distribution of this binary outcome because the input variables (X, Y) were initially defined in such a way that one could not collectively assign them either a probability density function or a probability mass function. Formally, f_{X,Y}(x, y) is the probability density function of (X, Y) with respect to the product measure on the respective supports of X and Y. Either of these two decompositions can then be used to recover the joint cumulative distribution function:

F_{X,Y}(x, y) = ∑_{t ≤ y} ∫_{s = −∞}^{x} f_{X,Y}(s, t) ds.

The definition generalizes to a mixture of arbitrary numbers of discrete and continuous random variables.
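A sketch of the mixed-case formula under an assumed logistic-regression-style model: X is standard normal and P(Y = 1 ∣ X = x) is an assumed logistic function; all names and parameter values here are illustrative, not from the article:

import numpy as np

phi = lambda t: np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)         # density of X
p_y1_given_x = lambda x: 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x)))  # assumed logistic model

def mixed_joint_density(x, y):
    """f_{X,Y}(x, y) = P(Y = y | X = x) * f_X(x), with y in {0, 1}."""
    p1 = p_y1_given_x(x)
    return (p1 if y == 1 else 1.0 - p1) * phi(x)

def joint_cdf(x_max, y_max):
    """F_{X,Y}(x, y): sum over t <= y and integrate the density over s <= x."""
    s = np.linspace(-8.0, x_max, 2001)
    ds = s[1] - s[0]
    return sum(mixed_joint_density(s, t).sum() * ds for t in (0, 1) if t <= y_max)

print(joint_cdf(0.0, 0.0))  # P(X <= 0 and Y = 0)
print(joint_cdf(0.0, 1.0))  # ~0.5: summing over both Y values recovers F_X(0)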

                                     

5.1. Additional properties: Joint distribution for independent variables

In general, two random variables X and Y are independent if and only if the joint cumulative distribution function satisfies

F_{X,Y}(x, y) = F_X(x) · F_Y(y).

Two discrete random variables X and Y are independent if and only if the joint probability mass function satisfies

P(X = x and Y = y) = P(X = x) · P(Y = y)

for all x and y.

As the number of independent random events grows, the related joint probability value decreases rapidly to zero, according to a negative exponential law.

Similarly, two absolutely continuous random variables are independent if and only if

f_{X,Y}(x, y) = f_X(x) · f_Y(y)

for all x and y. This means that acquiring any information about the value of one or more of the random variables leads to a conditional distribution of any other variable that is identical to its unconditional (marginal) distribution; thus no variable provides any information about any other variable.
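The factorization criterion for discrete variables is easy to test on a tabulated joint mass function; the helper below is an illustrative sketch (names assumed) applied to the coin-flip and die examples from earlier:

import numpy as np

def is_independent(joint, tol=1e-12):
    """Check whether a 2-D joint pmf factorizes into the product of its marginals."""
    p_x = joint.sum(axis=1)                    # marginal of X (rows)
    p_y = joint.sum(axis=0)                    # marginal of Y (columns)
    return np.allclose(joint, np.outer(p_x, p_y), atol=tol)

coins = np.full((2, 2), 0.25)                  # two independent fair coin flips
die = np.array([[1/6, 2/6], [2/6, 1/6]])       # (A, B) from the die example
print(is_independent(coins), is_independent(die))  # True False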



                                     

5.2. Additional properties: Joint distribution for conditionally dependent variables

If a subset A of the variables X_1, …, X_n is conditionally dependent given another subset B of these variables, then the probability mass function of the joint distribution, P(X_1, …, X_n), is equal to P(B) · P(A ∣ B). Therefore, it can be efficiently represented by the lower-dimensional probability distributions P(B) and P(A ∣ B). Such conditional independence relations can be represented with a Bayesian network or copula functions.
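As a rough illustration of the P(B) · P(A ∣ B) representation (all numbers below are assumed for the example), storing P(B) and P(A ∣ B) suffices to reconstruct the full joint distribution:

import numpy as np

# Assumed example: B = {X1}, A = {X2, X3}, all binary.  Only P(B) and
# P(A | B) are stored, not the full eight-entry joint table.
p_b = np.array([0.3, 0.7])          # P(X1 = 0), P(X1 = 1)
p_a_given_b = np.array([            # P(X2 = j, X3 = k | X1 = i), shape (2, 2, 2)
    [[0.4, 0.1], [0.2, 0.3]],       # conditional table given X1 = 0
    [[0.1, 0.2], [0.3, 0.4]],       # conditional table given X1 = 1
])

# Recover the full joint pmf: P(X1, X2, X3) = P(X1) * P(X2, X3 | X1).
joint = p_b[:, None, None] * p_a_given_b
print(joint.shape, joint.sum())     # (2, 2, 2) 1.0 -- a valid joint distribution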

                                     

5.3. Additional properties: Covariance

When two or more random variables are defined on a probability space, it is useful to describe how they vary together; that is, it is useful to measure the relationship between the variables. A common measure of the relationship between two random variables is the covariance. Covariance is a measure of linear relationship between the random variables. If the relationship between the random variables is nonlinear, the covariance might not be sensitive to the relationship.

The covariance between the random variables X and Y, denoted cov(X, Y), is:

σ_{XY} = E[(X − μ_X)(Y − μ_Y)] = E(XY) − μ_X μ_Y.
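The covariance can be evaluated directly from a joint probability mass function; the sketch below (illustrative, reusing the die example's joint table) applies the formula σ_{XY} = E(XY) − μ_X μ_Y:

import numpy as np

def covariance(joint):
    """sigma_XY = E(XY) - mu_X * mu_Y for a 2-D joint pmf over {0, 1, ...}."""
    x = np.arange(joint.shape[0])
    y = np.arange(joint.shape[1])
    mu_x = (joint.sum(axis=1) * x).sum()       # E(X) from the marginal of X
    mu_y = (joint.sum(axis=0) * y).sum()       # E(Y) from the marginal of Y
    e_xy = (joint * np.outer(x, y)).sum()      # E(XY) from the joint pmf
    return e_xy - mu_x * mu_y

die = np.array([[1/6, 2/6], [2/6, 1/6]])       # joint pmf of (A, B), die example
print(covariance(die))                         # -1/12: A and B vary in opposite directions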

                                     

5.4. Additional properties: Correlation

There is another measure of the relationship between two random variables that is often easier to interpret than the covariance.

The correlation just scales the covariance by the product of the standard deviation of each variable. Consequently, the correlation is a dimensionless quantity that can be used to compare the linear relationships between pairs of variables in different units. If the points in the joint probability distribution of X and Y that receive positive probability tend to fall along a line of positive or negative slope, ρ_{XY} is near +1 or −1. If ρ_{XY} equals +1 or −1, it can be shown that the points in the joint probability distribution that receive positive probability fall exactly along a straight line. Two random variables with nonzero correlation are said to be correlated. Similar to covariance, the correlation is a measure of the linear relationship between random variables.

The correlation between random variables X and Y, denoted as ρ_{XY}, is

ρ_{XY} = cov(X, Y) / √(V(X) V(Y)) = σ_{XY} / (σ_X σ_Y).
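Continuing the same illustrative sketch, the correlation rescales that covariance by the two standard deviations; for the die example this gives ρ_{AB} = −1/3:

import numpy as np

die = np.array([[1/6, 2/6], [2/6, 1/6]])   # joint pmf of (A, B), die example
x = np.arange(2)

p_a, p_b = die.sum(axis=1), die.sum(axis=0)            # marginals
mu_a, mu_b = (p_a * x).sum(), (p_b * x).sum()          # means: 1/2 and 1/2
var_a = (p_a * (x - mu_a) ** 2).sum()                  # V(A) = 1/4
var_b = (p_b * (x - mu_b) ** 2).sum()                  # V(B) = 1/4
cov_ab = (die * np.outer(x - mu_a, x - mu_b)).sum()    # sigma_AB = -1/12

print(cov_ab / np.sqrt(var_a * var_b))                 # rho_AB = -1/3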

                                     

6. Important named distributions

Named joint distributions that arise frequently in statistics include the multivariate normal distribution, the multivariate stable distribution, the multinomial distribution, the negative multinomial distribution, the multivariate hypergeometric distribution, and the elliptical distribution.

                                     