Compound probability distribution

In probability and statistics, a compound probability distribution is the probability distribution that results from assuming that a random variable is distributed according to some parametrized distribution, with the parameters of that distribution themselves being random variables. If the parameter is a scale parameter, the resulting mixture is also called a scale mixture.

The compound distribution ("unconditional distribution") is the result of marginalizing (integrating) over the latent random variables representing the parameters of the parametrized distribution ("conditional distribution").


1. Definition

A compound probability distribution is the probability distribution that results from assuming that a random variable X is distributed according to some parametrized distribution F with an unknown parameter θ that is again distributed according to some other distribution G. The resulting distribution H is said to be the distribution that results from compounding F with G. The parameter's distribution G is also called the mixing distribution or latent distribution. Technically, the unconditional distribution H results from marginalizing over G, i.e., from integrating out the unknown parameter θ. Its probability density function is given by:

$$p_H(x) = \int p_F(x \mid \theta)\, p_G(\theta)\, \mathrm{d}\theta$$

The same formula applies analogously if some or all of the variables are vectors.

From the above formula, one can see that a compound distribution is essentially a special case of a marginal distribution: the joint distribution of x and θ is given by p(x, θ) = p(x | θ) p(θ), and the compound distribution results as its marginal distribution, p(x) = ∫ p(x, θ) dθ. If the domain of θ is discrete, then the distribution is again a special case of a mixture distribution.
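For concreteness, here is one worked instance of this integral (an illustrative addition; the gamma-Poisson choice anticipates the overdispersion example below): compounding a Poisson distribution with rate λ against a Gamma(r, β) mixing distribution gives

$$p_H(x) = \int_0^\infty \frac{e^{-\lambda}\lambda^x}{x!}\,\frac{\beta^r \lambda^{r-1} e^{-\beta\lambda}}{\Gamma(r)}\,\mathrm{d}\lambda = \binom{x+r-1}{x}\left(\frac{\beta}{1+\beta}\right)^{\!r}\left(\frac{1}{1+\beta}\right)^{\!x},$$

which is the negative binomial distribution.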


2. Properties

A compound distribution H resembles in many ways the original distribution F that generated it, but typically has greater variance, and often heavy tails as well. The support of H is the same as that of F, and often the shape is broadly similar as well. The parameters of H include any parameters of G or F that have not been marginalized out.

The compound distribution's first two moments follow from the law of total expectation and the law of total variance:

$$\operatorname{E}_H[X] = \operatorname{E}_G\!\bigl[\operatorname{E}_F[X \mid \theta]\bigr]$$

$$\operatorname{Var}_H(X) = \operatorname{E}_G\!\bigl[\operatorname{Var}_F(X \mid \theta)\bigr] + \operatorname{Var}_G\!\bigl(\operatorname{E}_F[X \mid \theta]\bigr)$$
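As a quick numeric check of these identities (a minimal sketch added here; the normal-normal choice and parameter values are assumptions), compounding a normal distribution with a normally distributed mean gives E_H[X] = m and Var_H(X) = σ² + τ²:

    import numpy as np

    rng = np.random.default_rng(0)
    m, tau, sigma, N = 1.0, 2.0, 0.5, 1_000_000  # hyper-mean, hyper-sd, conditional sd, sample size (arbitrary)

    mu = rng.normal(m, tau, size=N)   # G: random mean parameter
    x = rng.normal(mu, sigma)         # F: observation given the sampled mean

    # Law of total expectation: E[X] = E[E[X | mu]] = m
    print(x.mean(), m)
    # Law of total variance: Var(X) = E[Var(X | mu)] + Var(E[X | mu]) = sigma**2 + tau**2
    print(x.var(), sigma**2 + tau**2)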


3.1. Applications: Testing

Distributions of common test statistics result as compound distributions under their null hypothesis, for example in Student's t-test (where the test statistic results as the ratio of a standard normal random variable and the square root of an independent chi-squared random variable divided by its degrees of freedom), or in the F-test (where the test statistic is the ratio of two independent chi-squared random variables, each divided by its degrees of freedom).
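To illustrate (a minimal simulation sketch added here; the degrees of freedom and sample size are arbitrary), a t-statistic constructed as Z / √(V/k), with Z standard normal and V chi-squared with k degrees of freedom, matches Student's t-distribution with k degrees of freedom:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    k, N = 5, 200_000                  # degrees of freedom, number of simulated statistics (arbitrary)

    z = rng.standard_normal(N)         # numerator: standard normal
    v = rng.chisquare(k, N)            # denominator: chi-squared with k degrees of freedom
    t_sim = z / np.sqrt(v / k)         # the t-statistic as a ratio construction

    # Compare simulated quantiles with the theoretical Student t distribution
    qs = [0.05, 0.25, 0.5, 0.75, 0.95]
    print(np.round(np.quantile(t_sim, qs), 3))
    print(np.round(stats.t(df=k).ppf(qs), 3))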


3.2. Applications: Overdispersion modeling

Compound distributions are useful for modeling outcomes exhibiting overdispersion, i.e., a greater amount of variability than would be expected under a certain model. For example, count data are commonly modeled using the Poisson distribution, whose variance is equal to its mean. The distribution may be generalized by allowing for variability in its rate parameter, implemented via a gamma distribution, which results in a marginal negative binomial distribution. This distribution is similar in its shape to the Poisson distribution, but it allows for larger variances. Similarly, a binomial distribution may be generalized to allow for additional variability by compounding it with a beta distribution for its success probability parameter, which results in a beta-binomial distribution.
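The gamma-Poisson case can be checked by simulation (a hedged sketch; the parameter values and the scipy parameterization, with n equal to the gamma shape and p = β/(β+1), are assumptions of this example):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    r, beta, N = 3.0, 0.5, 200_000                      # gamma shape, gamma rate, sample size (arbitrary)

    lam = rng.gamma(shape=r, scale=1.0 / beta, size=N)  # rate varies from draw to draw
    x = rng.poisson(lam)                                # Poisson counts given the sampled rates

    print(x.mean(), x.var())                            # variance exceeds mean: overdispersion
    k = np.arange(6)
    emp = np.bincount(x, minlength=k.size)[:k.size] / N
    print(np.round(emp, 3))                                              # empirical pmf of the compound samples
    print(np.round(stats.nbinom(n=r, p=beta / (beta + 1.0)).pmf(k), 3))  # negative binomial pmf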


3.3. Applications: Bayesian inference

Besides ubiquitous marginal distributions that may be seen as special cases of compound distributions, in Bayesian inference, compound distributions arise when, in the notation above, F represents the distribution of future observations and G is the posterior distribution of the parameters of F, given the information in a set of observed data. This gives a posterior predictive distribution. Correspondingly, for the prior predictive distribution, F is the distribution of a new data point while G is the prior distribution of the parameters.
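A minimal sketch of a posterior predictive distribution as a compound distribution, using a conjugate beta-binomial model (the prior parameters and data below are made up for illustration): with a Beta(a, b) prior on the success probability and k successes in n observed trials, the posterior is Beta(a + k, b + n − k), and the posterior predictive for m future trials is beta-binomial.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    a, b = 2.0, 2.0                        # hypothetical Beta prior parameters
    n, k = 20, 14                          # hypothetical data: k successes in n trials
    m, N = 10, 200_000                     # future trials, number of posterior draws

    # G: posterior of the success probability; F: binomial distribution of the future data
    p_post = rng.beta(a + k, b + n - k, size=N)
    x_future = rng.binomial(m, p_post)     # draws from the posterior predictive (compound) distribution

    vals = np.arange(m + 1)
    emp = np.bincount(x_future, minlength=m + 1) / N
    print(np.round(emp, 3))                                             # simulated predictive pmf
    print(np.round(stats.betabinom(m, a + k, b + n - k).pmf(vals), 3))  # beta-binomial pmf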


3.4. Applications: Convolution

Convolution of probability distributions (to derive the probability distribution of sums of random variables) may also be seen as a special case of compounding; here the sum's distribution essentially results from considering one summand as a random location parameter for the other summand.
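Written out (a small derivation added for clarity, assuming X and Y are independent), treating Y as a random location parameter θ of X and marginalizing over it recovers the convolution formula for Z = X + Y:

$$p_Z(z) = \int p_{Z \mid \theta}(z \mid \theta)\, p_Y(\theta)\, \mathrm{d}\theta = \int p_X(z - \theta)\, p_Y(\theta)\, \mathrm{d}\theta .$$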


4. Computation

Compound distributions derived from exponential family distributions often have a closed form. If analytical integration is not possible, numerical methods may be necessary.

Compound distributions may relatively easily be investigated using Monte Carlo methods, i.e., by generating random samples. It is often easy to generate random numbers from the distributions p(θ) as well as p(x | θ) and then utilize these to perform collapsed Gibbs sampling to generate samples from p(x).
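A minimal Monte Carlo sketch (the particular distributions and parameter values are illustrative assumptions): drawing the variance of a normal distribution from an inverse-gamma distribution and then drawing the observation given that variance produces samples from a Student's t-distribution (see the Examples section below):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    nu, N = 4.0, 200_000                          # degrees of freedom, number of samples (arbitrary)

    # G: sigma^2 ~ Inverse-Gamma(nu/2, nu/2), i.e. (nu/2) divided by a Gamma(nu/2, 1) draw
    sigma2 = (nu / 2.0) / rng.gamma(shape=nu / 2.0, size=N)
    # F: x | sigma^2 ~ Normal(0, sigma^2)
    x = rng.normal(0.0, np.sqrt(sigma2))

    # Compare simulated quantiles with the Student t distribution with nu degrees of freedom
    qs = [0.05, 0.25, 0.5, 0.75, 0.95]
    print(np.round(np.quantile(x, qs), 3))
    print(np.round(stats.t(df=nu).ppf(qs), 3))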

A compound distribution may usually also be approximated to a sufficient degree by a mixture distribution with a finite number of mixture components, which allows approximate densities, distribution functions, etc. to be derived.
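As a sketch of such a finite-mixture approximation (the grid construction, truncation, and parameter values are assumptions of this example), the Student's t density of the previous snippet can be approximated by a weighted sum of normal densities, with weights taken from the inverse-gamma mixing density evaluated on a grid of variances:

    import numpy as np
    from scipy import stats

    nu = 4.0                                          # degrees of freedom (arbitrary)
    grid = np.linspace(0.02, 60.0, 1200)              # grid of variance values (truncation is approximate)
    w = stats.invgamma(a=nu / 2.0, scale=nu / 2.0).pdf(grid)
    w /= w.sum()                                      # normalized mixture weights

    x = np.linspace(-6.0, 6.0, 7)
    # Finite mixture of normal densities approximating the compound (Student t) density
    approx = sum(wi * stats.norm(0.0, np.sqrt(s2)).pdf(x) for wi, s2 in zip(w, grid))
    print(np.round(approx, 4))
    print(np.round(stats.t(df=nu).pdf(x), 4))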

Parameter estimation (maximum-likelihood or maximum-a-posteriori estimation) within a compound distribution model may sometimes be simplified by utilizing the EM algorithm.


5. Examples

  • Gaussian scale mixtures:
  • Compounding a normal distribution with variance distributed according to an inverse gamma distribution (or equivalently, with precision distributed as a gamma distribution) yields a non-standardized Student's t-distribution. This distribution has the same symmetrical shape as a normal distribution with the same central point, but has greater variance and heavy tails.
  • Compounding a Gaussian distribution with variance distributed according to an exponential distribution (or with standard deviation distributed according to a Rayleigh distribution) yields a Laplace distribution.
  • Compounding a Gaussian distribution with standard deviation distributed according to a standard inverse uniform distribution yields a Slash distribution.
  • Compounding a Gaussian distribution with variance distributed according to an exponential distribution whose rate parameter is itself distributed according to a gamma distribution yields a Normal-exponential-gamma distribution. This involves two compounding stages. The variance itself then follows a Lomax distribution; see below.
  • Compounding a Gaussian distribution with mean distributed according to another Gaussian distribution yields again a Gaussian distribution.
  • Compounding a Gaussian distribution with mean distributed according to a shifted exponential distribution yields an exponentially modified Gaussian distribution.
  • Other Gaussian mixtures:
  • Compounding a gamma distribution with inverse scale parameter distributed according to another gamma distribution yields a three-parameter beta prime distribution.
  • Compounding a multinomial distribution with probability vector distributed according to a Dirichlet distribution yields a Dirichlet-multinomial distribution.
  • Compounding an exponential distribution with its rate parameter distributed according to a gamma distribution yields a Lomax distribution (see the sampling sketch after this list).
  • Compounding a binomial distribution with probability of success distributed according to a beta distribution yields a beta-binomial distribution. It possesses three parameters: a parameter n (the number of samples from the binomial distribution) and shape parameters α and β from the beta distribution.
  • Compounding a Poisson distribution with rate parameter distributed according to a gamma distribution yields a negative binomial distribution.
  • Compounding a half-normal distribution with its scale parameter distributed according to a Rayleigh distribution yields an exponential distribution. This follows immediately from the Laplace distribution resulting as a normal scale mixture; see above. The roles of conditional and mixing distributions may also be exchanged here; consequently, compounding a Rayleigh distribution with its scale parameter distributed according to a half-normal distribution also yields an exponential distribution.
  • A Gamma(k = 2, θ)-distributed random variable whose scale parameter θ is again uniformly distributed marginally yields an exponential distribution.
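As a simulation check of the exponential-gamma compounding above (a hedged sketch; the parameter values and the scipy parameterization, with Lomax shape a and scale b, are assumptions of this example): drawing a rate from a Gamma(a, rate b) distribution and then an exponential observation with that rate yields samples matching a Lomax(a, b) distribution.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    a, b, N = 2.5, 1.5, 200_000                      # gamma shape, gamma rate, number of samples (arbitrary)

    lam = rng.gamma(shape=a, scale=1.0 / b, size=N)  # rate parameter of the exponential distribution
    x = rng.exponential(scale=1.0 / lam)             # exponential draws given the sampled rates

    # The marginal of x is Lomax with shape a and scale b
    qs = [0.05, 0.25, 0.5, 0.75, 0.95]
    print(np.round(np.quantile(x, qs), 3))
    print(np.round(stats.lomax(c=a, scale=b).ppf(qs), 3))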

