Gamma distribution
In probability theory and statistics, the gamma distribution is a two-parameter family of continuous probability distributions. The exponential distribution, Erlang distribution, and chi-squared distribution are special cases of the gamma distribution. There are three different parametrizations in common use:
- With a shape parameter k and a scale parameter θ.
- With a shape parameter α = k and an inverse scale parameter β = 1/θ, called a rate parameter.
- With a shape parameter k and a mean parameter μ = kθ = α/β.
The gamma distribution is the maximum entropy probability distribution for a random variable X for which E[X] = kθ = α/β is fixed and greater than zero, and E[ln X] = ψ(k) + ln θ = ψ(α) − ln β is fixed (ψ is the digamma function).
Definitions
The parameterization with k and θ appears to be more common in econometrics and certain other applied fields, where for example the gamma distribution is frequently used to model waiting times. For instance, in life testing, the waiting time until death is a random variable that is frequently modeled with a gamma distribution. See Hogg and Craig for an explicit motivation.
The parameterization with α and β is more common in Bayesian statistics, where the gamma distribution is used as a conjugate prior distribution for various types of inverse scale parameters, such as the λ of an exponential distribution or a Poisson distribution – or for that matter, the β of the gamma distribution itself. The closely related inverse-gamma distribution is used as a conjugate prior for scale parameters, such as the variance of a normal distribution.
If k is a positive integer, then the distribution represents an Erlang distribution; i.e., the sum of k independent exponentially distributed random variables, each of which has a mean of θ.
Characterization using shape α and rate β
The gamma distribution can be parameterized in terms of a shape parameter α = k and an inverse scale parameter β = 1/θ, called a rate parameter. A random variable X that is gamma-distributed with shape α and rate β is denoted
$$X \sim \Gamma(\alpha, \beta) \equiv \operatorname{Gamma}(\alpha, \beta).$$
The corresponding probability density function in the shape-rate parametrization is
$$f(x; \alpha, \beta) = \frac{\beta^\alpha x^{\alpha-1} e^{-\beta x}}{\Gamma(\alpha)} \quad \text{for } x > 0, \quad \alpha, \beta > 0,$$
where Γ(α) is the gamma function. For all positive integers α, Γ(α) = (α − 1)!.
The cumulative distribution function is the regularized gamma function:
$$F(x; \alpha, \beta) = \int_0^x f(u; \alpha, \beta)\, du = \frac{\gamma(\alpha, \beta x)}{\Gamma(\alpha)},$$
where γ(α, βx) is the lower incomplete gamma function.
If α is a positive integer, the cumulative distribution function has the following series expansion:
$$F(x; \alpha, \beta) = 1 - \sum_{i=0}^{\alpha-1} \frac{(\beta x)^i}{i!} e^{-\beta x} = e^{-\beta x} \sum_{i=\alpha}^{\infty} \frac{(\beta x)^i}{i!}.$$
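As a quick numerical illustration (not part of the original text), the shape-rate density and CDF above can be checked against a standard statistics library. The sketch below assumes Python with NumPy/SciPy and uses arbitrary illustrative values of α and β; note that scipy.stats.gamma is parameterized by shape a and scale, so the rate β enters as scale = 1/β.

```python
import numpy as np
from scipy import stats, special

alpha, beta = 3.0, 2.0   # illustrative shape and rate values (not from the article)
x = 1.5

# Density and CDF written out directly from the shape-rate formulas above
pdf_direct = beta**alpha * x**(alpha - 1) * np.exp(-beta * x) / special.gamma(alpha)
cdf_direct = special.gammainc(alpha, beta * x)   # regularized lower incomplete gamma

# SciPy uses shape a and scale = 1/rate
dist = stats.gamma(a=alpha, scale=1.0 / beta)
assert np.isclose(pdf_direct, dist.pdf(x))
assert np.isclose(cdf_direct, dist.cdf(x))
```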
Characterization using shape k and scale θ
A random variable X that is gamma-distributed with shape k and scale θ is denoted by
$$X \sim \Gamma(k, \theta) \equiv \operatorname{Gamma}(k, \theta).$$
The probability density function using the shape-scale parametrization is
$$f(x; k, \theta) = \frac{x^{k-1} e^{-x/\theta}}{\Gamma(k)\, \theta^k} \quad \text{for } x > 0 \text{ and } k, \theta > 0.$$
Here Γ(k) is the gamma function evaluated at k.
The cumulative distribution function is the regularized gamma function:
$$F(x; k, \theta) = \int_0^x f(u; k, \theta)\, du = \frac{\gamma\!\left(k, \tfrac{x}{\theta}\right)}{\Gamma(k)},$$
where γ(k, x/θ) is the lower incomplete gamma function.
It can also be expressed as follows, if k is a positive integer:
$$F(x; k, \theta) = 1 - \sum_{i=0}^{k-1} \frac{1}{i!}\left(\frac{x}{\theta}\right)^i e^{-x/\theta} = e^{-x/\theta} \sum_{i=k}^{\infty} \frac{1}{i!}\left(\frac{x}{\theta}\right)^i.$$
Both parametrizations are common because either can be more convenient depending on the situation.
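The equivalence of the two parametrizations, and the finite-series CDF for integer shape, can be verified numerically. The following is a minimal sketch assuming Python with NumPy/SciPy; the parameter values are arbitrary.

```python
import math
import numpy as np
from scipy import stats

k, theta = 4, 2.5             # illustrative integer shape and scale
alpha, beta = k, 1.0 / theta  # the equivalent shape-rate parameters
x = 7.0

# Same distribution under either parametrization
assert np.isclose(stats.gamma(a=k, scale=theta).cdf(x),
                  stats.gamma(a=alpha, scale=1.0 / beta).cdf(x))

# For integer k, the CDF equals the finite series given above
series = 1.0 - math.exp(-x / theta) * sum((x / theta)**i / math.factorial(i) for i in range(k))
assert np.isclose(series, stats.gamma(a=k, scale=theta).cdf(x))
```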
Properties
Skewness
The skewness of the gamma distribution depends only on its shape parameter, k, and is equal to 2/√k.
Median calculation
Unlike the mode and the mean, which have readily calculable formulas based on the parameters, the median does not have a closed-form equation. The median for this distribution is defined as the value ν such that
$$\frac{1}{\Gamma(k)\theta^k}\int_0^{\nu} x^{k-1} e^{-x/\theta}\, dx = \frac{1}{2}.$$
A rigorous treatment of the problem of determining an asymptotic expansion and bounds for the median of the gamma distribution was handled first by Chen and Rubin, who proved that (for a Gamma(k, 1) distribution)
k − 1/3 < ν(k) < k,
where k is the mean and ν(k) is the median of the distribution.
K. P. Choi found the first five terms in the asymptotic expansion of the median by comparing the median to Ramanujan's θ function. Berg and Pedersen found more terms; the expansion begins
$$\nu(k) = k - \frac{1}{3} + \frac{8}{405k} + \cdots.$$
They also proved many properties of the median, showed that ν(k) is a convex function of k, and characterized its asymptotic behavior as k → 0.
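The Chen–Rubin bounds k − 1/3 < ν(k) < k (for θ = 1) are easy to check numerically. The sketch below assumes Python with SciPy and a handful of arbitrary shape values.

```python
from scipy import stats

# Check k - 1/3 < median < k for Gamma(k, 1) at a few illustrative shape values
for k in (0.5, 1.0, 2.0, 10.0, 100.0):
    nu = stats.gamma(a=k).median()
    assert k - 1/3 < nu < k, (k, nu)
```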
Summation
If Xi ~ Gamma(ki, θ) for i = 1, 2, ..., N (i.e., all summands have the same scale parameter θ), then
$$\sum_{i=1}^N X_i \sim \operatorname{Gamma}\!\left(\sum_{i=1}^N k_i,\ \theta\right),$$
provided all Xi are independent.
For the cases where the Xi are independent but have different scale parameters see Mathai or Moschopoulos.
The gamma distribution exhibits infinite divisibility.
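A Monte Carlo check of the summation property above is straightforward. The sketch below assumes Python with NumPy/SciPy, arbitrary shape values with a common scale, and compares the empirical distribution of the sum with the predicted gamma distribution via a Kolmogorov–Smirnov test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta = 2.0
shapes = [0.7, 1.5, 3.0]   # illustrative shape parameters, common scale theta

# Sum of independent Gamma(k_i, theta) variates should follow Gamma(sum k_i, theta)
total = sum(rng.gamma(shape=k, scale=theta, size=100_000) for k in shapes)
print(stats.kstest(total, stats.gamma(a=sum(shapes), scale=theta).cdf).pvalue)
```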
Scaling
If X ~ Gamma(k, θ), then, for any c > 0,
$$cX \sim \operatorname{Gamma}(k, c\theta),$$
or equivalently, in the shape-rate parametrization with X ~ Gamma(α, β),
$$cX \sim \operatorname{Gamma}(\alpha, \beta/c).$$
Indeed, we know that if X is an exponential r.v. with rate λ, then cX is an exponential r.v. with rate λ/c; the same thing is valid with gamma variates: multiplication by a positive constant c divides the rate (or, equivalently, multiplies the scale).
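The scaling property can be checked the same way; again a minimal sketch in Python with NumPy/SciPy and arbitrary parameter values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
k, theta, c = 2.5, 1.3, 4.0   # illustrative values

# c * Gamma(k, theta) should be Gamma(k, c * theta): the scale is multiplied by c
scaled = c * rng.gamma(shape=k, scale=theta, size=100_000)
print(stats.kstest(scaled, stats.gamma(a=k, scale=c * theta).cdf).pvalue)
```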
Exponential family
The gamma distribution is a two-parameter exponential family with natural parameters k − 1 and −1/θ, and natural statistics X and ln X. If the shape parameter k is held fixed, the resulting one-parameter family of distributions is a natural exponential family.
Logarithmic expectation and variance
One can show that
$$\operatorname{E}[\ln X] = \psi(\alpha) - \ln \beta,$$
or equivalently,
$$\operatorname{E}[\ln X] = \psi(k) + \ln \theta,$$
where ψ is the digamma function. Likewise,
$$\operatorname{Var}[\ln X] = \psi_1(\alpha) = \psi_1(k),$$
where ψ1 is the trigamma function.
This can be derived using the exponential family formula for the moment generating function of the sufficient statistic, because one of the sufficient statistics of the gamma distribution is ln X.
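The digamma/trigamma identities for E[ln X] and Var[ln X] can be verified by simulation. The sketch below assumes Python with NumPy/SciPy and arbitrary values of k and θ.

```python
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(2)
k, theta = 3.0, 2.0   # illustrative shape and scale

logs = np.log(rng.gamma(shape=k, scale=theta, size=1_000_000))
print(logs.mean(), digamma(k) + np.log(theta))   # E[ln X] = psi(k) + ln(theta)
print(logs.var(), polygamma(1, k))               # Var[ln X] = psi_1(k), the trigamma function
```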
Information entropy
The information entropy is
$$\operatorname{H}(X) = \alpha - \ln \beta + \ln \Gamma(\alpha) + (1 - \alpha)\psi(\alpha).$$
In the k, θ parameterization, the information entropy is given by
$$\operatorname{H}(X) = k + \ln \theta + \ln \Gamma(k) + (1 - k)\psi(k).$$
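The entropy formula in the k, θ parameterization agrees with the differential entropy reported by standard libraries; a minimal check, assuming Python with NumPy/SciPy and arbitrary parameters:

```python
import numpy as np
from scipy import stats
from scipy.special import gammaln, digamma

k, theta = 2.0, 3.0   # illustrative shape and scale

# Differential entropy in the (k, theta) parameterization
h = k + np.log(theta) + gammaln(k) + (1 - k) * digamma(k)
assert np.isclose(h, stats.gamma(a=k, scale=theta).entropy())
```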
Kullback–Leibler divergence
The Kullback–Leibler divergence of Gamma(αp, βp) (the "true" distribution) from Gamma(αq, βq) (the "approximating" distribution) is given by
$$D_{\mathrm{KL}}(\alpha_p, \beta_p \,\|\, \alpha_q, \beta_q) = (\alpha_p - \alpha_q)\psi(\alpha_p) - \ln\Gamma(\alpha_p) + \ln\Gamma(\alpha_q) + \alpha_q(\ln \beta_p - \ln \beta_q) + \alpha_p\,\frac{\beta_q - \beta_p}{\beta_p}.$$
Written using the k, θ parameterization, the KL-divergence of Gamma(kp, θp) from Gamma(kq, θq) is given by
$$D_{\mathrm{KL}}(k_p, \theta_p \,\|\, k_q, \theta_q) = (k_p - k_q)\psi(k_p) - \ln\Gamma(k_p) + \ln\Gamma(k_q) + k_q(\ln \theta_q - \ln \theta_p) + k_p\,\frac{\theta_p - \theta_q}{\theta_q}.$$
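The closed-form KL divergence in the shape-rate parameterization can be cross-checked against direct numerical integration of p(x) ln(p(x)/q(x)). The sketch below assumes Python with NumPy/SciPy and arbitrary parameter values.

```python
import numpy as np
from scipy import integrate, stats
from scipy.special import gammaln, digamma

def kl_gamma(ap, bp, aq, bq):
    """Closed-form KL divergence of Gamma(ap, rate bp) from Gamma(aq, rate bq)."""
    return ((ap - aq) * digamma(ap) - gammaln(ap) + gammaln(aq)
            + aq * (np.log(bp) - np.log(bq)) + ap * (bq - bp) / bp)

ap, bp, aq, bq = 3.0, 2.0, 1.5, 0.7   # illustrative values
p = stats.gamma(a=ap, scale=1 / bp)
q = stats.gamma(a=aq, scale=1 / bq)
numeric, _ = integrate.quad(lambda x: p.pdf(x) * (p.logpdf(x) - q.logpdf(x)), 0, np.inf)
print(kl_gamma(ap, bp, aq, bq), numeric)   # the two values should agree closely
```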
Laplace transform
The Laplace transform of the gamma PDF is
$$F(s) = \operatorname{E}\!\left[e^{-sX}\right] = (1 + \theta s)^{-k} = \frac{\beta^\alpha}{(s + \beta)^\alpha}.$$
Related distributions
General
- Let X1, X2, ..., Xn be independent and identically distributed random variables following an exponential distribution with rate parameter λ; then X1 + ... + Xn ~ Gamma(n, 1/λ), where n is the shape parameter and 1/λ is the scale.
- If X ~ Gamma(1, 1/λ) (shape-scale parametrization), then X has an exponential distribution with rate parameter λ.
- If X ~ Gamma(ν/2, 2) (shape-scale parametrization), then X is identical to χ²(ν), the chi-squared distribution with ν degrees of freedom. Conversely, if Q ~ χ²(ν) and c is a positive constant, then cQ ~ Gamma(ν/2, 2c). (A numerical check of several of these relations appears after this list.)
- If k is an integer, the gamma distribution is an Erlang distribution and is the probability distribution of the waiting time until the kth "arrival" in a one-dimensional Poisson process with intensity 1/θ. If X ~ Gamma(k, θ) with integer k and Y ~ Pois(x/θ), then P(X > x) = P(Y < k).
- If X has a Maxwell–Boltzmann distribution with parameter a, then X² ~ Gamma(3/2, 2a²).
- If X ~ Gamma(k, θ), then ln X follows an exponential-gamma distribution. It is sometimes referred to as the log-gamma distribution. Formulas for its mean and variance are in the section #Logarithmic expectation and variance.
- If X ~ Gamma(k, θ), then √X follows a generalized gamma distribution with parameters p = 2, d = 2k, and a = √θ.
- More generally, if X ~ Gamma(k, θ), then X^q for q > 0 follows a generalized gamma distribution with parameters p = 1/q, d = k/q, and a = θ^q.
- If X ~ Gamma(k, θ), then 1/X ~ Inv-Gamma(k, θ⁻¹), the inverse-gamma distribution with shape k and scale 1/θ.
- Parametrization 1: If X1 ~ Gamma(α1, θ1) and X2 ~ Gamma(α2, θ2) are independent, then (α2 θ2 X1)/(α1 θ1 X2) ~ F(2α1, 2α2), or equivalently, X1/X2 follows a generalized beta prime distribution β′(α1, α2, 1, θ1/θ2).
- Parametrization 2: If X1 ~ Gamma(α1, β1) and X2 ~ Gamma(α2, β2) are independent (shape-rate parametrization), then (α2 β1 X1)/(α1 β2 X2) ~ F(2α1, 2α2), or equivalently, X1/X2 ~ β′(α1, α2, 1, β2/β1).
- If X ~ Gamma(α, θ) and Y ~ Gamma(β, θ) are independently distributed, then X/(X + Y) has a beta distribution with parameters α and β, and X/(X + Y) is independent of X + Y, which is Gamma(α + β, θ)-distributed.
- If Xi ~ Gamma(αi, θ) are independently distributed with a common scale θ, then the vector (X1/S, ..., Xn/S), where S = X1 + ... + Xn, follows a Dirichlet distribution with parameters α1, ..., αn.
- For large k the gamma distribution converges to a normal distribution with mean μ = kθ and variance σ² = kθ².
- The gamma distribution is the conjugate prior for the precision of the normal distribution with known mean.
- The Wishart distribution is a multivariate generalization of the gamma distribution.
- The gamma distribution is a special case of the generalized gamma distribution, the generalized integer gamma distribution, and the generalized inverse Gaussian distribution.
- Among the discrete distributions, the negative binomial distribution is sometimes considered the discrete analogue of the gamma distribution.
- Tweedie distributions – the gamma distribution is a member of the family of Tweedie exponential dispersion models.
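Several of the relations listed above (the exponential and chi-squared special cases and the beta ratio) lend themselves to a quick numerical check. The following sketch assumes Python with NumPy/SciPy; all parameter values are arbitrary.

```python
import numpy as np
from scipy import stats

x = np.linspace(0.1, 10, 50)

# Gamma(1, 1/lambda) is the exponential distribution with rate lambda
lam = 2.0
assert np.allclose(stats.gamma(a=1, scale=1 / lam).pdf(x), stats.expon(scale=1 / lam).pdf(x))

# Gamma(nu/2, 2) is the chi-squared distribution with nu degrees of freedom
nu = 5
assert np.allclose(stats.gamma(a=nu / 2, scale=2).pdf(x), stats.chi2(df=nu).pdf(x))

# X/(X + Y) for independent Gamma(a, theta) and Gamma(b, theta) is Beta(a, b)
rng = np.random.default_rng(3)
a, b, theta = 2.0, 3.5, 1.7
X = rng.gamma(a, theta, 200_000)
Y = rng.gamma(b, theta, 200_000)
print(stats.kstest(X / (X + Y), stats.beta(a, b).cdf).pvalue)
```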
Compound gamma
If instead the shape parameter is known but the mean is unknown, with the prior of the mean being given by another gamma distribution, then the resulting compound distribution is the K-distribution.
Statistical inference
Parameter estimation
Maximum likelihood estimation
The likelihood function for N iid observations (x1, ..., xN) is
$$L(k, \theta) = \prod_{i=1}^N f(x_i; k, \theta),$$
from which we calculate the log-likelihood function
$$\ell(k, \theta) = (k - 1)\sum_{i=1}^N \ln x_i - \sum_{i=1}^N \frac{x_i}{\theta} - Nk\ln\theta - N\ln\Gamma(k).$$
Finding the maximum with respect to θ by taking the derivative and setting it equal to zero yields the maximum likelihood estimator of the θ parameter:
$$\hat{\theta} = \frac{1}{kN}\sum_{i=1}^N x_i.$$
Substituting this into the log-likelihood function gives
$$\ell(k) = (k - 1)\sum_{i=1}^N \ln x_i - Nk - Nk\ln\!\left(\frac{\sum_i x_i}{kN}\right) - N\ln\Gamma(k).$$
Finding the maximum with respect to k by taking the derivative and setting it equal to zero yields
$$\ln k - \psi(k) = \ln\!\left(\frac{1}{N}\sum_{i=1}^N x_i\right) - \frac{1}{N}\sum_{i=1}^N \ln x_i.$$
There is no closed-form solution for k. The function ln k − ψ(k) is numerically very well behaved, so if a numerical solution is desired, it can be found using, for example, Newton's method. An initial value of k can be found either using the method of moments, or using the approximation
$$\ln k - \psi(k) \approx \frac{1}{2k}\left(1 + \frac{1}{6k + 1}\right).$$
If we let
$$s = \ln\!\left(\frac{1}{N}\sum_{i=1}^N x_i\right) - \frac{1}{N}\sum_{i=1}^N \ln x_i,$$
then k is approximately
$$k \approx \frac{3 - s + \sqrt{(s - 3)^2 + 24s}}{12s},$$
which is within 1.5% of the correct value. An explicit form for the Newton–Raphson update of this initial guess is:
$$k \leftarrow k - \frac{\ln k - \psi(k) - s}{\frac{1}{k} - \psi_1(k)}.$$
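Putting the pieces together, the shape MLE can be computed by initializing k with the approximation above and refining it with the Newton–Raphson update. The sketch below is one possible implementation in Python with NumPy/SciPy; the function name and iteration count are arbitrary choices, not from the source.

```python
import numpy as np
from scipy.special import digamma, polygamma

def fit_gamma_mle(x, n_iter=20):
    """Shape-scale MLE: solve ln(k) - psi(k) = s by Newton-Raphson, then theta = mean(x)/k."""
    x = np.asarray(x)
    s = np.log(x.mean()) - np.log(x).mean()
    k = (3 - s + np.sqrt((s - 3)**2 + 24 * s)) / (12 * s)        # initial approximation
    for _ in range(n_iter):                                      # Newton-Raphson refinement
        k -= (np.log(k) - digamma(k) - s) / (1 / k - polygamma(1, k))
    return k, x.mean() / k                                       # (shape k, scale theta)

rng = np.random.default_rng(4)
print(fit_gamma_mle(rng.gamma(shape=2.5, scale=1.4, size=100_000)))   # roughly (2.5, 1.4)
```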
Closed-form estimators
Consistent closed-form estimators of k and θ exist that are derived from the likelihood of the generalized gamma distribution. The estimate for the shape k is
$$\hat{k} = \frac{N\sum_{i=1}^N x_i}{N\sum_{i=1}^N x_i \ln x_i - \sum_{i=1}^N \ln x_i \sum_{i=1}^N x_i},$$
and the estimate for the scale θ is
$$\hat{\theta} = \frac{1}{N^2}\left(N\sum_{i=1}^N x_i \ln x_i - \sum_{i=1}^N \ln x_i \sum_{i=1}^N x_i\right).$$
If the rate parameterization is used, the estimate of β is $\hat{\beta} = 1/\hat{\theta}$.
These estimators are not strictly maximum likelihood estimators, but are instead referred to as mixed type log-moment estimators. They have, however, similar efficiency to the maximum likelihood estimators.
Although these estimators are consistent, they have a small bias. A bias-corrected variant of the estimator for the scale θ is
$$\tilde{\theta} = \frac{N}{N - 1}\,\hat{\theta}.$$
A bias correction for the shape parameter k is given as
$$\tilde{k} = \hat{k} - \frac{1}{N}\left(3\hat{k} - \frac{2}{3}\,\frac{\hat{k}}{1 + \hat{k}} - \frac{4}{5}\,\frac{\hat{k}}{(1 + \hat{k})^2}\right).$$
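For comparison, the closed-form (mixed type log-moment) estimators above are a few lines of code. This is a minimal sketch in Python with NumPy, using the formulas as stated here and the simple N/(N − 1) bias correction for the scale; the function name and test values are arbitrary.

```python
import numpy as np

def fit_gamma_closed_form(x):
    """Closed-form shape/scale estimates from the mixed type log-moment equations."""
    x = np.asarray(x)
    n = x.size
    m = n * np.sum(x * np.log(x)) - np.sum(np.log(x)) * np.sum(x)
    k_hat = n * np.sum(x) / m
    theta_hat = m / n**2
    theta_hat *= n / (n - 1)        # bias-corrected scale estimate
    return k_hat, theta_hat

rng = np.random.default_rng(5)
print(fit_gamma_closed_form(rng.gamma(shape=3.0, scale=0.5, size=50_000)))   # roughly (3.0, 0.5)
```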
Bayesian minimum mean squared error
With known k and unknown θ, the posterior density function for θ (using the standard scale-invariant prior for θ) is
$$P(\theta \mid k, x_1, \ldots, x_N) \propto \frac{1}{\theta} \prod_{i=1}^N f(x_i; k, \theta).$$
Denoting
$$y \equiv \sum_{i=1}^N x_i, \qquad P(\theta \mid k, x_1, \ldots, x_N) = C(x_i)\, \theta^{-Nk-1} e^{-y/\theta}.$$
Integration with respect to θ can be carried out using a change of variables, revealing that 1/θ is gamma-distributed with parameters α = Nk, β = y.
The moments can be computed by taking the ratio (of the case m to the case m = 0):
$$\operatorname{E}[\theta^m] = \frac{\int_0^\infty \theta^{-Nk-1+m} e^{-y/\theta}\, d\theta}{\int_0^\infty \theta^{-Nk-1} e^{-y/\theta}\, d\theta} = \frac{\Gamma(Nk - m)}{\Gamma(Nk)}\, y^m,$$
which shows that the mean ± standard deviation estimate of the posterior distribution for θ is
$$\frac{y}{Nk - 1} \pm \sqrt{\frac{y^2}{(Nk - 1)^2 (Nk - 2)}}.$$
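As a small worked example of the posterior summary above (known k, unknown θ), the following sketch assumes Python with NumPy and arbitrary simulated data; it simply evaluates y/(Nk − 1) and the corresponding posterior standard deviation.

```python
import numpy as np

rng = np.random.default_rng(6)
k, theta_true = 2.0, 3.0                           # illustrative values; k is assumed known
x = rng.gamma(shape=k, scale=theta_true, size=500)
N, y = x.size, x.sum()

post_mean = y / (N * k - 1)                        # posterior mean of theta
post_sd = y / ((N * k - 1) * np.sqrt(N * k - 2))   # posterior standard deviation of theta
print(post_mean, post_sd)                          # post_mean should be close to theta_true
```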
Bayesian inference
Conjugate prior
In Bayesian inference, the gamma distribution is the conjugate prior to many likelihood distributions: the Poisson, exponential, normal, Pareto, gamma with known shape α, inverse gamma with known shape parameter, and Gompertz with known scale parameter.
The gamma distribution's conjugate prior is:
$$P(k, \theta \mid p, q, r, s) = \frac{1}{Z}\,\frac{p^{k-1} e^{-q/\theta}}{\Gamma(k)^r\, \theta^{ks}},$$
where Z is the normalizing constant, which has no closed-form solution.
The posterior distribution can be found by updating the parameters as follows:
$$p' = p\prod_{i=1}^n x_i, \qquad q' = q + \sum_{i=1}^n x_i, \qquad r' = r + n, \qquad s' = s + n,$$
where n is the number of observations, and xi is the ith observation.
Occurrence and applications
The gamma distribution has been used to model the size of insurance claims and rainfalls. This means that aggregate insurance claims and the amount of rainfall accumulated in a reservoir are modelled by a gamma process – much like the exponential distribution generates a Poisson process.
The gamma distribution is also used to model errors in multi-level Poisson regression models, because the combination of the Poisson distribution and a gamma distribution is a negative binomial distribution.
In wireless communication, the gamma distribution is used to model the multi-path fading of signal power; see also Rayleigh distribution and Rician distribution.
In oncology, the age distribution of cancer incidence often follows the gamma distribution, wherein the shape and scale parameters predict, respectively, the number of driver events and the time interval between them.
In neuroscience, the gamma distribution is often used to describe the distribution of inter-spike intervals.
In bacterial gene expression, the copy number of a constitutively expressed protein often follows the gamma distribution, where the scale and shape parameter are, respectively, the mean number of bursts per cell cycle and the mean number of protein molecules produced by a single mRNA during its lifetime.
In genomics, the gamma distribution was applied in the peak calling step in ChIP-chip and ChIP-seq data analysis.
The gamma distribution is widely used as a conjugate prior in Bayesian statistics. It is the conjugate prior for the precision of a normal distribution. It is also the conjugate prior for the exponential distribution.
Generating gamma-distributed random variables
Given the scaling property above, it is enough to generate gamma variables with θ = 1, as we can later convert to any value of β with a simple division.
Suppose we wish to generate random variables from Gamma(n + δ, 1), where n is a non-negative integer and 0 < δ < 1. Using the fact that a Gamma(1, 1) distribution is the same as an Exp(1) distribution, and noting the method of generating exponential variables, we conclude that if U is uniformly distributed on (0, 1], then −ln U is distributed Gamma(1, 1). Now, using the "α-addition" property of the gamma distribution, we expand this result:
$$-\sum_{i=1}^n \ln U_i \sim \operatorname{Gamma}(n, 1),$$
where the Ui are all uniformly distributed on (0, 1] and independent. All that is left is to generate a variable distributed as Gamma(δ, 1) for 0 < δ < 1 and apply the "α-addition" property once more. This is the most difficult part.
Random generation of gamma variates is discussed in detail by Devroye, noting that none are uniformly fast for all shape parameters. For small values of the shape parameter, the algorithms are often not valid. For arbitrary values of the shape parameter, one can apply the Ahrens and Dieter modified acceptance–rejection method Algorithm GD, or transformation method when 0 < k < 1. Also see Cheng and Feast Algorithm GKM 3 or Marsaglia's squeeze method.
The following is a version of the Ahrens-Dieter acceptance–rejection method:
- Generate U, V and W as iid uniform (0, 1] variates.
- If U ≤ e/(e + δ), then ξ = V^(1/δ) and η = W ξ^(δ−1); otherwise ξ = 1 − ln V and η = W e^(−ξ).
- If η > ξ^(δ−1) e^(−ξ), then go back to step 1.
- ξ is distributed as Gamma(δ, 1).
A summary of this is
$$\theta\left(\xi - \sum_{i=1}^{\lfloor k \rfloor} \ln U_i\right) \sim \operatorname{Gamma}(k, \theta),$$
where ⌊k⌋ is the integer part of k, ξ is generated via the algorithm above with δ = {k} (the fractional part of k), and the Ui are all independent.
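The whole scheme – a sum of ⌊k⌋ unit-rate exponentials plus an acceptance-rejection draw for the fractional part – fits in a short routine. The following is a sketch in plain Python of the algorithm as summarized above; the function name is arbitrary, the uniform variates are shifted away from zero so the logarithms are always defined, and it is not intended as a production-quality generator.

```python
import math
import random

def gamma_variate(k, theta=1.0, rng=random):
    """Draw one Gamma(k, theta) variate using the scheme summarized above."""
    n, delta = int(k), k - int(k)
    unif = lambda: 1.0 - rng.random()          # uniform on (0, 1]
    xi = 0.0
    if delta > 0.0:
        while True:                            # acceptance-rejection for Gamma(delta, 1)
            u, v, w = unif(), unif(), unif()
            if u <= math.e / (math.e + delta):
                xi = v ** (1.0 / delta)
                eta = w * xi ** (delta - 1.0)
            else:
                xi = 1.0 - math.log(v)
                eta = w * math.exp(-xi)
            if eta <= xi ** (delta - 1.0) * math.exp(-xi):
                break
    # Integer part: sum of n independent Exp(1) variates, each generated as -ln(U)
    return theta * (xi - sum(math.log(unif()) for _ in range(n)))

samples = [gamma_variate(2.7, 1.5) for _ in range(100_000)]
print(sum(samples) / len(samples))             # should be close to k * theta = 4.05
```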
While the above approach is technically correct, Devroye notes that it is linear in the value of k and in general is not a good choice. Instead he recommends using either rejection-based or table-based methods, depending on context.
For example, Marsaglia's simple transformation-rejection method relying on one normal variate X and one uniform variate U:
- Set d = k − 1/3 and c = 1/√(9d).
- Set v = (1 + cX)³.
- If v > 0 and ln U < X²/2 + d − dv + d ln v, return dv; else go back to step 2.
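A direct transcription of these three steps (valid for k ≥ 1) is given below as a sketch in plain Python; the function name is an arbitrary choice, and the uniform variate is shifted away from zero so that ln U is defined.

```python
import math
import random

def gamma_marsaglia(k, theta=1.0, rng=random):
    """Draw one Gamma(k, theta) variate for k >= 1 using the transformation-rejection steps above."""
    d = k - 1.0 / 3.0
    c = 1.0 / math.sqrt(9.0 * d)
    while True:
        x = rng.gauss(0.0, 1.0)            # one standard normal variate per attempt
        v = (1.0 + c * x) ** 3
        u = 1.0 - rng.random()             # uniform on (0, 1]
        if v > 0 and math.log(u) < x * x / 2.0 + d - d * v + d * math.log(v):
            return d * v * theta

samples = [gamma_marsaglia(4.0, 0.5) for _ in range(100_000)]
print(sum(samples) / len(samples))         # should be close to k * theta = 2.0
```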