# Multivariate Distributions

*Multivariate distributions* are the distributions whose variate forms are `Multivariate` (*i.e.*, each sample is a vector). Abstract types for multivariate distributions:

```
const MultivariateDistribution{S<:ValueSupport} = Distribution{Multivariate,S}
const DiscreteMultivariateDistribution = Distribution{Multivariate, Discrete}
const ContinuousMultivariateDistribution = Distribution{Multivariate, Continuous}
```
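Concrete multivariate types introduced below are subtypes of these aliases; a quick check (assuming the Distributions package is installed):

```julia
using Distributions

# Multinomial is discrete, MvNormal is continuous — both are multivariate.
@assert Multinomial(10, [0.2, 0.3, 0.5]) isa DiscreteMultivariateDistribution
@assert MvNormal([0.0, 0.0], [1.0 0.0; 0.0 1.0]) isa ContinuousMultivariateDistribution
```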

## Common Interface

The methods listed below are implemented for each multivariate distribution, providing a consistent interface for working with multivariate distributions.

### Computation of statistics

`Base.length` — Method

`length(d::MultivariateDistribution) -> Int`

Return the sample dimension of distribution `d`.

`Base.size` — Method

`size(d::MultivariateDistribution)`

Return the sample size of distribution `d`, *i.e.* `(length(d),)`.

`Base.eltype` — Method

`eltype(::Type{Sampleable})`

The default element type of a sample. This is the type of elements of the samples generated by the `rand` method. However, one can provide an array of a different element type to store the samples using `rand!`.

`Statistics.mean` — Method

`mean(d::MultivariateDistribution)`

Compute the mean vector of distribution `d`.

`Statistics.var` — Method

`var(d::MultivariateDistribution)`

Compute the vector of element-wise variances for distribution `d`.

`Statistics.cov` — Method

`cov(d::MultivariateDistribution)`

Compute the covariance matrix for distribution `d`. (`cor` is provided based on `cov`.)

`Statistics.cor` — Method

`cor(d::MultivariateDistribution)`

Compute the correlation matrix for distribution `d`.

`StatsBase.entropy` — Method

`entropy(d::MultivariateDistribution)`

Compute the entropy value of distribution `d`.

`StatsBase.entropy` — Method

`entropy(d::MultivariateDistribution, b::Real)`

Compute the entropy value of distribution `d`, w.r.t. the given base `b`.
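The statistics above can be queried on any multivariate distribution; a small sketch using an `MvNormal` (assuming the Distributions package is installed):

```julia
using Distributions

d = MvNormal([1.0, 2.0], [2.0 0.5; 0.5 1.0])

@assert length(d) == 2                   # sample dimension
@assert size(d) == (2,)                  # sample size, i.e. (length(d),)
@assert mean(d) == [1.0, 2.0]            # mean vector
@assert var(d) == [2.0, 1.0]             # element-wise variances (diagonal of cov)
@assert cov(d) == [2.0 0.5; 0.5 1.0]     # covariance matrix
@assert cor(d)[1, 2] ≈ 0.5 / sqrt(2.0)   # correlation derived from cov
```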

### Probability evaluation

`Distributions.insupport` — Method

`insupport(d::MultivariateDistribution, x::AbstractArray)`

If `x` is a vector, return whether `x` is within the support of `d`. If `x` is a matrix, return whether every column of `x` is within the support of `d`.

`Distributions.pdf` — Method

`pdf(d::MultivariateDistribution, x::AbstractArray)`

Return the probability density of distribution `d` evaluated at `x`.

`Distributions.logpdf` — Method

`logpdf(d::MultivariateDistribution, x::AbstractArray)`

Return the logarithm of the probability density of distribution `d` evaluated at `x`.

`StatsAPI.loglikelihood` — Method

`loglikelihood(d::Distribution{ArrayLikeVariate{N}}, x) where {N}`

The log-likelihood of distribution `d` with respect to all variate(s) contained in `x`.

Here, `x` can be any output of `rand(d, dims...)` and `rand!(d, x)`. For instance, `x` can be

- an array of dimension `N` with `size(x) == size(d)`,
- an array of dimension `N + 1` with `size(x)[1:N] == size(d)`, or
- an array of arrays `xi` of dimension `N` with `size(xi) == size(d)`.

**Note:** For multivariate distributions, the pdf value is usually very small or large, and therefore direct evaluation of the pdf may cause numerical problems. It is generally advisable to perform probability computation in log scale.
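The note above can be checked directly; a sketch comparing `pdf`, `logpdf`, and `loglikelihood` on a standard bivariate normal:

```julia
using Distributions

d = MvNormal(zeros(2), [1.0 0.0; 0.0 1.0])
x = [0.0, 0.0]

# logpdf is numerically safer than taking the log of pdf.
@assert logpdf(d, x) ≈ log(pdf(d, x))
@assert logpdf(d, x) ≈ -log(2π)   # standard bivariate normal density at the origin

# loglikelihood over a matrix sums logpdf over the columns (one variate per column).
X = [0.0 1.0; 0.0 1.0]
@assert loglikelihood(d, X) ≈ logpdf(d, X[:, 1]) + logpdf(d, X[:, 2])
```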

### Sampling

`Base.rand` — Method

`rand(::AbstractRNG, ::Sampleable)`

Samples from the sampler and returns the result.

`Random.rand!` — Method

`rand!(::AbstractRNG, ::Sampleable, ::AbstractArray)`

Samples in-place from the sampler and stores the result in the provided array.
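For multivariate distributions, one draw is a vector and multiple draws are stored as matrix columns; a sampling sketch (assuming Distributions is installed):

```julia
using Distributions, Random

rng = MersenneTwister(123)
d = MvNormal(zeros(3), [1.0 0.0 0.0; 0.0 1.0 0.0; 0.0 0.0 1.0])

x = rand(rng, d)        # one sample: a length-3 vector
@assert size(x) == (3,)

X = rand(rng, d, 5)     # 5 samples, stored as columns of a 3×5 matrix
@assert size(X) == (3, 5)

buf = zeros(3, 5)
rand!(rng, d, buf)      # fill a pre-allocated array in place
@assert all(isfinite, buf)
```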

**Note:** In addition to these common methods, each multivariate distribution has its special methods, as introduced below.

## Distributions

`Distributions.Multinomial` — Type

The Multinomial distribution generalizes the *binomial distribution*. Consider `n` independent draws from a Categorical distribution over a finite set of size `k`, and let $X = (X_1, ..., X_k)$ where $X_i$ represents the number of times element $i$ occurs; then the distribution of $X$ is a multinomial distribution. Each sample of a multinomial distribution is a `k`-dimensional integer vector that sums to `n`.

The probability mass function is given by

\[f(x; n, p) = \frac{n!}{x_1! \cdots x_k!} \prod_{i=1}^k p_i^{x_i}, \quad x_1 + \cdots + x_k = n\]

```
Multinomial(n, p) # Multinomial distribution for n trials with probability vector p
Multinomial(n, k) # Multinomial distribution for n trials with equal probabilities over 1:k
```
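A quick sketch of both constructors (assuming Distributions is loaded; `probs` returns the probability vector):

```julia
using Distributions

d = Multinomial(10, [0.2, 0.3, 0.5])

x = rand(d)
@assert sum(x) == 10                 # each draw is a count vector summing to n
@assert mean(d) ≈ [2.0, 3.0, 5.0]    # mean is n .* p

d2 = Multinomial(6, 3)               # equal probabilities over 1:3
@assert probs(d2) ≈ fill(1/3, 3)
```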

`Distributions.AbstractMvNormal` — Type

The multivariate normal distribution is a multidimensional generalization of the *normal distribution*. The probability density function of a d-dimensional multivariate normal distribution with mean vector $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$ is:

\[f(\mathbf{x}; \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{1}{(2 \pi)^{d/2} |\boldsymbol{\Sigma}|^{1/2}} \exp \left( - \frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right)\]

The mean vector and the covariance often have special forms in practice, which can be exploited to simplify the computation. For example, the mean vector is sometimes just a zero vector, while the covariance matrix can be a diagonal matrix or even of the form $\sigma^2 \mathbf{I}$. To take advantage of such special cases, we introduce a parametric type `MvNormal`, defined below, which allows users to specify the special structure of the mean and covariance.

```
struct MvNormal{T<:Real,Cov<:AbstractPDMat,Mean<:AbstractVector} <: AbstractMvNormal
    μ::Mean
    Σ::Cov
end
```

Here, the mean vector can be an instance of any `AbstractVector`. The covariance can be of any subtype of `AbstractPDMat`. In particular, one can use `PDMat` for a full covariance, `PDiagMat` for a diagonal covariance, and `ScalMat` for an isotropic covariance, *i.e.* one of the form $\sigma^2 \mathbf{I}$. (See the Julia package PDMats for details.)

We also define a set of aliases for the types using different combinations of mean vectors and covariance:

```
const IsoNormal = MvNormal{Float64, ScalMat{Float64}, Vector{Float64}}
const DiagNormal = MvNormal{Float64, PDiagMat{Float64,Vector{Float64}}, Vector{Float64}}
const FullNormal = MvNormal{Float64, PDMat{Float64,Matrix{Float64}}, Vector{Float64}}
const ZeroMeanIsoNormal{Axes} = MvNormal{Float64, ScalMat{Float64}, Zeros{Float64,1,Axes}}
const ZeroMeanDiagNormal{Axes} = MvNormal{Float64, PDiagMat{Float64,Vector{Float64}}, Zeros{Float64,1,Axes}}
const ZeroMeanFullNormal{Axes} = MvNormal{Float64, PDMat{Float64,Matrix{Float64}}, Zeros{Float64,1,Axes}}
```

Multivariate normal distributions support affine transformations:

```
d = MvNormal(μ, Σ)
c + B * d # == MvNormal(B * μ + c, B * Σ * B')
dot(b, d) # == Normal(dot(b, μ), b' * Σ * b)
```
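A runnable version of the transformations above, verifying the stated identities numerically (assuming Distributions is installed):

```julia
using Distributions, LinearAlgebra

d = MvNormal([1.0, 2.0], [1.0 0.0; 0.0 4.0])
B = [1.0 1.0; 0.0 1.0]
c = [10.0, 20.0]

d2 = c + B * d                     # still a multivariate normal
@assert mean(d2) ≈ B * mean(d) + c
@assert cov(d2) ≈ B * cov(d) * B'

b = [1.0, -1.0]
d3 = dot(b, d)                     # a univariate Normal
@assert mean(d3) ≈ dot(b, mean(d))
@assert var(d3) ≈ b' * cov(d) * b
```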

`Distributions.MvNormal` — Type

`MvNormal`

Generally, users don't have to worry about these internal details. We provide a common constructor `MvNormal`, which constructs a distribution of the appropriate type depending on the input arguments.

`Distributions.MvNormalCanon` — Type

`MvNormalCanon`

The multivariate normal distribution is an exponential family distribution, with two *canonical parameters*: the *potential vector* $\mathbf{h}$ and the *precision matrix* $\mathbf{J}$. The relation between these parameters and the conventional representation (*i.e.* the one using the mean $\boldsymbol{\mu}$ and the covariance $\boldsymbol{\Sigma}$) is:

\[\mathbf{h} = \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu}, \quad \text{ and } \quad \mathbf{J} = \boldsymbol{\Sigma}^{-1}\]

The canonical parameterization is widely used in Bayesian analysis. We provide a type `MvNormalCanon`, which is also a subtype of `AbstractMvNormal`, to represent a multivariate normal distribution using canonical parameters. Particularly, `MvNormalCanon` is defined as:

```
struct MvNormalCanon{T<:Real,P<:AbstractPDMat,V<:AbstractVector} <: AbstractMvNormal
    μ::V # the mean vector
    h::V # potential vector, i.e. inv(Σ) * μ
    J::P # precision matrix, i.e. inv(Σ)
end
```

We also define aliases for common specializations of this parametric type:

```
const FullNormalCanon = MvNormalCanon{Float64, PDMat{Float64,Matrix{Float64}}, Vector{Float64}}
const DiagNormalCanon = MvNormalCanon{Float64, PDiagMat{Float64,Vector{Float64}}, Vector{Float64}}
const IsoNormalCanon = MvNormalCanon{Float64, ScalMat{Float64}, Vector{Float64}}
const ZeroMeanFullNormalCanon{Axes} = MvNormalCanon{Float64, PDMat{Float64,Matrix{Float64}}, Zeros{Float64,1,Axes}}
const ZeroMeanDiagNormalCanon{Axes} = MvNormalCanon{Float64, PDiagMat{Float64,Vector{Float64}}, Zeros{Float64,1,Axes}}
const ZeroMeanIsoNormalCanon{Axes} = MvNormalCanon{Float64, ScalMat{Float64}, Zeros{Float64,1,Axes}}
```

**Note:** `MvNormalCanon` shares the same set of methods as `MvNormal`.
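A sketch of constructing a distribution from canonical parameters and recovering the conventional ones:

```julia
using Distributions, LinearAlgebra

μ = [1.0, -1.0]
Σ = [2.0 0.3; 0.3 1.0]
J = inv(Σ)      # precision matrix
h = J * μ       # potential vector

d = MvNormalCanon(h, J)
@assert mean(d) ≈ μ   # mean recovered as J \ h
@assert cov(d) ≈ Σ    # covariance recovered as inv(J)
```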

`Distributions.MvLogitNormal` — Type

`MvLogitNormal{<:AbstractMvNormal}`

The multivariate logit-normal distribution is a multivariate generalization of `LogitNormal` capable of handling correlations between variables.

If $\mathbf{y} \sim \mathrm{MvNormal}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ is a length $d-1$ vector, then

\[\mathbf{x} = \operatorname{softmax}\left(\begin{bmatrix}\mathbf{y} \\ 0 \end{bmatrix}\right) \sim \mathrm{MvLogitNormal}(\boldsymbol{\mu}, \boldsymbol{\Sigma})\]

is a length $d$ probability vector.

```
MvLogitNormal(μ, Σ) # MvLogitNormal with y ~ MvNormal(μ, Σ)
MvLogitNormal(MvNormal(μ, Σ)) # same as above
MvLogitNormal(MvNormalCanon(μ, J)) # MvLogitNormal with y ~ MvNormalCanon(μ, J)
```
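A sampling sketch, assuming a Distributions version recent enough to provide `MvLogitNormal`:

```julia
using Distributions

d = MvLogitNormal([0.0, 0.0], [1.0 0.0; 0.0 1.0])  # y is 2-dimensional

x = rand(d)
@assert length(x) == 3    # samples are length-3 probability vectors
@assert sum(x) ≈ 1.0      # they lie on the probability simplex
@assert all(>=(0), x)
```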

**Fields**

- `normal::AbstractMvNormal`: contains the $d-1$-dimensional distribution of $y$

`Distributions.MvLogNormal` — Type

`MvLogNormal(d::MvNormal)`

The multivariate lognormal distribution is a multidimensional generalization of the *lognormal distribution*. If $\boldsymbol X \sim \mathcal{N}(\boldsymbol\mu,\,\boldsymbol\Sigma)$ has a multivariate normal distribution, then $\boldsymbol Y=\exp(\boldsymbol X)$ has a multivariate lognormal distribution.

The mean vector $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$ of the underlying normal distribution are known as the *location* and *scale* parameters of the corresponding lognormal distribution.
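A brief sketch of the location/scale relationship (assuming Distributions is installed):

```julia
using Distributions

μ = [0.0, 1.0]
Σ = [1.0 0.2; 0.2 0.5]
d = MvLogNormal(MvNormal(μ, Σ))

@assert location(d) ≈ μ      # parameters of the underlying normal
@assert scale(d) ≈ Σ

y = rand(d)
@assert all(>(0), y)         # samples are componentwise positive
@assert median(d) ≈ exp.(μ)  # componentwise median is exp of the underlying mean
```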

`Distributions.Dirichlet` — Type

`Dirichlet`

The Dirichlet distribution is often used as the conjugate prior for Categorical or Multinomial distributions. The probability density function of a Dirichlet distribution with parameter $\alpha = (\alpha_1, \ldots, \alpha_k)$ is:

\[f(x; \alpha) = \frac{1}{B(\alpha)} \prod_{i=1}^k x_i^{\alpha_i - 1}, \quad \text{ with } B(\alpha) = \frac{\prod_{i=1}^k \Gamma(\alpha_i)}{\Gamma \left( \sum_{i=1}^k \alpha_i \right)}, \quad x_1 + \cdots + x_k = 1\]

```
# Let alpha be a vector
Dirichlet(alpha) # Dirichlet distribution with parameter vector alpha

# Let a be a positive scalar
Dirichlet(k, a)  # Dirichlet distribution with parameter a * ones(k)
```
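A quick sketch of both constructors; Dirichlet samples lie on the probability simplex:

```julia
using Distributions

d = Dirichlet([2.0, 3.0, 5.0])

x = rand(d)
@assert sum(x) ≈ 1.0                # samples lie on the probability simplex
@assert mean(d) ≈ [0.2, 0.3, 0.5]   # mean is alpha / sum(alpha)

d2 = Dirichlet(4, 1.0)              # symmetric Dirichlet, uniform over the simplex
@assert mean(d2) ≈ fill(0.25, 4)
```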

`Distributions.Product` — Type

`Product <: MultivariateDistribution`

An N-dimensional `MultivariateDistribution` constructed from a vector of N independent `UnivariateDistribution`s.

```
Product(Uniform.(rand(10), 1)) # A 10-dimensional Product from 10 independent Uniform distributions
```

## Additional Methods

### AbstractMvNormal

In addition to the methods listed in the common interface above, we also provide the following methods for all multivariate distributions under the base type `AbstractMvNormal`:

`Distributions.invcov` — Method

`invcov(d::AbstractMvNormal)`

Return the inverse of the covariance matrix of `d`.

`Distributions.logdetcov` — Method

`logdetcov(d::AbstractMvNormal)`

Return the log-determinant value of the covariance matrix.

`Distributions.sqmahal` — Method

`sqmahal(d, x)`

Return the squared Mahalanobis distance from `x` to the center of `d`, w.r.t. the covariance. When `x` is a vector, it returns a scalar value. When `x` is a matrix, it returns a vector of length `size(x, 2)`.

`sqmahal!(r, d, x)` writes the results to a pre-allocated array `r`.
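A worked example: for a diagonal covariance, the squared Mahalanobis distance reduces to a sum of squared standardized deviations.

```julia
using Distributions

d = MvNormal([0.0, 0.0], [4.0 0.0; 0.0 1.0])
x = [2.0, 1.0]

# (x - μ)' Σ⁻¹ (x - μ) = 2²/4 + 1²/1 = 2
@assert sqmahal(d, x) ≈ 2.0

X = [2.0 0.0; 1.0 0.0]   # columns are evaluation points
r = zeros(2)
sqmahal!(r, d, X)        # in-place variant, one result per column
@assert r ≈ [2.0, 0.0]
```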

`Base.rand` — Method

`rand(::AbstractRNG, ::Distributions.AbstractMvNormal)`

Sample a random vector from the provided multivariate normal distribution.

`Base.minimum` — Method

`minimum(d::Distribution)`

Return the minimum of the support of `d`.

`Base.maximum` — Method

`maximum(d::Distribution)`

Return the maximum of the support of `d`.

`Base.extrema` — Method

`extrema(d::Distribution)`

Return the minimum and maximum of the support of `d` as a 2-tuple.

### MvLogNormal

In addition to the methods listed in the common interface above, we also provide the following methods:

`Distributions.location` — Method

`location(d::MvLogNormal)`

Return the location vector of the distribution (the mean of the underlying normal distribution).

`Distributions.scale` — Method

`scale(d::MvLogNormal)`

Return the scale matrix of the distribution (the covariance matrix of the underlying normal distribution).

`Statistics.median` — Method

`median(d::MvLogNormal)`

Return the median vector of the lognormal distribution, which is strictly smaller than the mean.

`StatsBase.mode` — Method

`mode(d::MvLogNormal)`

Return the mode vector of the lognormal distribution, which is strictly smaller than the mean and median.

It can be necessary to calculate the parameters of the lognormal (location vector and scale matrix) from a given covariance and mean, median or mode. To that end, the following functions are provided.

`Distributions.location` — Method

`location{D<:AbstractMvLogNormal}(::Type{D},s::Symbol,m::AbstractVector,S::AbstractMatrix)`

Calculate the location vector (the mean of the underlying normal distribution).

- If `s == :meancov`, then `m` is taken as the mean and `S` the covariance matrix of a lognormal distribution.
- If `s == :mean | :median | :mode`, then `m` is taken as the mean, median, or mode of the lognormal respectively, and `S` is interpreted as the scale matrix (the covariance of the underlying normal distribution).

It is not possible to analytically calculate the location vector from e.g., median + covariance, or from mode + covariance.
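A sketch of the round trip from a desired lognormal mean and covariance back to the underlying normal parameters, using the `:meancov` mode described above (assuming Distributions is installed):

```julia
using Distributions

m = [3.0, 5.0]           # desired lognormal mean
S = [1.0 0.1; 0.1 2.0]   # desired lognormal covariance

μ = location(MvLogNormal, :meancov, m, S)
Σ = scale(MvLogNormal, :meancov, m, S)

d = MvLogNormal(MvNormal(μ, Σ))
@assert mean(d) ≈ m      # the constructed distribution recovers the targets
@assert cov(d) ≈ S
```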

`Distributions.location!` — Method

`location!{D<:AbstractMvLogNormal}(::Type{D},s::Symbol,m::AbstractVector,S::AbstractMatrix,μ::AbstractVector)`

Calculate the location vector (as above) and store the result in `μ`.

`Distributions.scale` — Method

`scale{D<:AbstractMvLogNormal}(::Type{D},s::Symbol,m::AbstractVector,S::AbstractMatrix)`

Calculate the scale parameter, as defined for the location parameter above.

`Distributions.scale!` — Method

`scale!{D<:AbstractMvLogNormal}(::Type{D},s::Symbol,m::AbstractVector,S::AbstractMatrix,Σ::AbstractMatrix)`

Calculate the scale parameter, as defined for the location parameter above, and store the result in `Σ`.

`StatsAPI.params` — Method

`params{D<:AbstractMvLogNormal}(::Type{D},m::AbstractVector,S::AbstractMatrix)`

Return `(scale, location)` for a given mean and covariance.

## Internal Methods (for creating your own multivariate distribution)

`Distributions._logpdf` — Method

`_logpdf(d::MultivariateDistribution, x::AbstractArray)`

Evaluate the logarithm of the probability density of distribution `d` at `x`, without checking the validity of `x`. This is the method to implement when defining your own multivariate distribution; `logpdf` and `pdf` are derived from it.

## Product distributions

`Distributions.product_distribution` — Function

`product_distribution(dists::AbstractArray{<:Distribution{<:ArrayLikeVariate{M}},N})`

Create a distribution of `M + N`-dimensional arrays as a product distribution of independent `M`-dimensional distributions by stacking them.

The function falls back to constructing a `ProductDistribution`, but specialized methods can be defined.

`product_distribution(dists::AbstractVector{<:Normal})`

Create a multivariate normal distribution by stacking the univariate normal distributions. The resulting distribution of type `MvNormal` has a diagonal covariance matrix.

Using `product_distribution` is advised for constructing product distributions. For some distributions, it constructs a special multivariate type.
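For instance, stacking univariate normals yields the specialized `MvNormal` type:

```julia
using Distributions

d = product_distribution([Normal(0.0, 1.0), Normal(2.0, 3.0)])

@assert d isa MvNormal          # specialized: a diagonal-covariance MvNormal
@assert mean(d) ≈ [0.0, 2.0]    # component means stacked
@assert var(d) ≈ [1.0, 9.0]     # component variances on the diagonal
```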

## Index

- `Distributions.AbstractMvNormal`
- `Distributions.Dirichlet`
- `Distributions.Multinomial`
- `Distributions.MvLogNormal`
- `Distributions.MvLogitNormal`
- `Distributions.MvNormal`
- `Distributions.MvNormalCanon`
- `Distributions.Product`
- `Base.eltype`
- `Base.extrema`
- `Base.length`
- `Base.maximum`
- `Base.minimum`
- `Base.rand`
- `Base.size`
- `Distributions.insupport`
- `Distributions.invcov`
- `Distributions.location`
- `Distributions.location!`
- `Distributions.logdetcov`
- `Distributions.product_distribution`
- `Distributions.scale`
- `Distributions.scale!`
- `Distributions.sqmahal`
- `Random.rand!`
- `Statistics.cor`
- `Statistics.cov`
- `Statistics.mean`
- `Statistics.median`
- `Statistics.var`
- `StatsAPI.loglikelihood`
- `StatsAPI.params`
- `StatsBase.entropy`
- `StatsBase.mode`