# Matrix-variate Distributions

Matrix-variate distributions are distributions whose variate form is `Matrixvariate`, i.e., each sample is a matrix. The abstract types for matrix-variate distributions are:

```julia
const MatrixDistribution{S<:ValueSupport} = Distribution{Matrixvariate,S}

const DiscreteMatrixDistribution   = Distribution{Matrixvariate, Discrete}
const ContinuousMatrixDistribution = Distribution{Matrixvariate, Continuous}
```

More advanced functionality for random matrices can be found in the RandomMatrices.jl package.

## Common Interface

All matrix-variate distributions implement the same set of methods:

```julia
size(d::MatrixDistribution)
```

Return the size of each sample from distribution `d`.

```julia
length(d::MatrixDistribution)
```

Return the length (i.e., the number of elements) of each sample from distribution `d`.

```julia
rank(d::MatrixDistribution)
```

Return the rank of each sample from distribution `d`.

```julia
mean(d::MatrixDistribution)
```

Return the mean matrix of `d`.

```julia
var(d::MatrixDistribution)
```

Compute the matrix of element-wise variances for distribution `d`.

```julia
cov(d::MatrixDistribution)
```

Compute the covariance matrix for `vec(X)`, where `X` is a random matrix with distribution `d`.

```julia
pdf(d::MatrixDistribution, x::AbstractArray)
```

Compute the probability density at the input matrix `x`.

```julia
logpdf(d::MatrixDistribution, x::AbstractMatrix)
```

Compute the logarithm of the probability density at the input matrix `x`.

```julia
_rand!(::AbstractRNG, ::MatrixDistribution, A::AbstractMatrix)
```

Sample from the matrix distribution and store the result in `A`. This method must be implemented by every matrix-variate distribution.

```julia
vec(d::MatrixDistribution)
```

If known, return a `MultivariateDistribution` instance representing the distribution of `vec(X)`, where `X` is a random matrix with distribution `d`.
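As a brief illustration of this interface, the sketch below (assuming the Distributions.jl package is loaded; the `Wishart` parameters are chosen arbitrarily) queries a matrix-variate distribution and draws a sample from it:

```julia
using Distributions, LinearAlgebra

# A 3×3 Wishart distribution with 5 degrees of freedom and identity scale
d = Wishart(5.0, Matrix{Float64}(I, 3, 3))

size(d)    # (3, 3): each sample is a 3×3 matrix
length(d)  # 9: number of elements per sample
mean(d)    # the 3×3 mean matrix (here ν * S = 5 * I)

X = rand(d)      # draw one positive definite matrix
logpdf(d, X)     # log density at X
```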

## Distributions

```julia
Wishart(ν, S)
ν::Real   degrees of freedom (greater than p - 1)
S::PDMat  p x p scale matrix
```

The Wishart distribution generalizes the gamma distribution to $p\times p$ real, positive definite matrices $\mathbf{H}$. If $\mathbf{H}\sim W_p(\nu,\mathbf{S})$, then its probability density function is

$f(\mathbf{H};\nu,\mathbf{S}) = \frac{1}{2^{\nu p/2} \left|\mathbf{S}\right|^{\nu/2} \Gamma_p\left(\frac {\nu}{2}\right ) }{\left|\mathbf{H}\right|}^{(\nu-p-1)/2} e^{-(1/2)\operatorname{tr}(\mathbf{S}^{-1}\mathbf{H})}.$

If $\nu$ is an integer, then a random matrix $\mathbf{H}$ given by

$\mathbf{H} = \mathbf{X}\mathbf{X}^{\rm{T}}, \quad\mathbf{X} \sim MN_{p,\nu}(\mathbf{0}, \mathbf{S}, \mathbf{I}_{\nu})$

has $\mathbf{H}\sim W_p(\nu, \mathbf{S})$. For non-integer degrees of freedom, Wishart matrices can be generated via the Bartlett decomposition.

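For integer $\nu$, the construction $\mathbf{H} = \mathbf{X}\mathbf{X}^{\rm{T}}$ above can be sketched with only standard-library tools (the helper name `rand_wishart` is ours, not part of any package):

```julia
using LinearAlgebra, Random

# Sample H ~ W_p(ν, S) for integer ν via H = X Xᵀ, where X = L Z,
# S = L Lᵀ (Cholesky), and Z has i.i.d. standard normal entries.
function rand_wishart(rng::AbstractRNG, ν::Integer, S::AbstractMatrix)
    p = size(S, 1)
    L = cholesky(Symmetric(S)).L
    X = L * randn(rng, p, ν)   # columns are i.i.d. N(0, S) vectors
    return X * X'
end

rng = MersenneTwister(42)
H = rand_wishart(rng, 10, [2.0 0.5; 0.5 1.0])
isposdef(Symmetric(H))   # true almost surely, since ν ≥ p
```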
```julia
InverseWishart(ν, Ψ)
ν::Real   degrees of freedom (greater than p - 1)
Ψ::PDMat  p x p scale matrix
```

The inverse Wishart distribution generalizes the inverse gamma distribution to $p\times p$ real, positive definite matrices $\boldsymbol{\Sigma}$. If $\boldsymbol{\Sigma}\sim IW_p(\nu,\boldsymbol{\Psi})$, then its probability density function is

$f(\boldsymbol{\Sigma}; \nu,\boldsymbol{\Psi}) = \frac{\left|\boldsymbol{\Psi}\right|^{\nu/2}}{2^{\nu p/2}\Gamma_p(\frac{\nu}{2})} \left|\boldsymbol{\Sigma}\right|^{-(\nu+p+1)/2} e^{-\frac{1}{2}\operatorname{tr}(\boldsymbol{\Psi}\boldsymbol{\Sigma}^{-1})}.$

$\mathbf{H}\sim W_p(\nu, \mathbf{S})$ if and only if $\mathbf{H}^{-1}\sim IW_p(\nu, \mathbf{S}^{-1})$.

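The equivalence above means a Wishart draw can be inverted to obtain an inverse Wishart draw. A minimal standard-library sketch for integer $\nu$ (variable names are ours):

```julia
using LinearAlgebra, Random

rng = MersenneTwister(11)
S = [1.0 0.2; 0.2 1.0]
X = cholesky(Symmetric(S)).L * randn(rng, 2, 8)
H = X * X'             # H ~ W_2(8, S)
Σ = inv(Symmetric(H))  # Σ ~ IW_2(8, S⁻¹)
isposdef(Σ)            # true almost surely
```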
```julia
MatrixNormal(M, U, V)
M::AbstractMatrix  n x p mean
U::PDMat           n x n row covariance
V::PDMat           p x p column covariance
```

The matrix normal distribution generalizes the multivariate normal distribution to $n\times p$ real matrices $\mathbf{X}$. If $\mathbf{X}\sim MN_{n,p}(\mathbf{M}, \mathbf{U}, \mathbf{V})$, then its probability density function is

$f(\mathbf{X};\mathbf{M}, \mathbf{U}, \mathbf{V}) = \frac{\exp\left( -\frac{1}{2} \, \mathrm{tr}\left[ \mathbf{V}^{-1} (\mathbf{X} - \mathbf{M})^{\rm{T}} \mathbf{U}^{-1} (\mathbf{X} - \mathbf{M}) \right] \right)}{(2\pi)^{np/2} |\mathbf{V}|^{n/2} |\mathbf{U}|^{p/2}}.$

$\mathbf{X}\sim MN_{n,p}(\mathbf{M},\mathbf{U},\mathbf{V})$ if and only if $\text{vec}(\mathbf{X})\sim N(\text{vec}(\mathbf{M}),\mathbf{V}\otimes\mathbf{U})$.

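The `vec` equivalence suggests a simple sampling sketch using only the standard library (the helper name `rand_matrixnormal` is ours): with $\mathbf{U} = \mathbf{A}\mathbf{A}^{\rm{T}}$ and $\mathbf{V} = \mathbf{B}\mathbf{B}^{\rm{T}}$, the matrix $\mathbf{M} + \mathbf{A}\mathbf{Z}\mathbf{B}^{\rm{T}}$ with i.i.d. standard normal $\mathbf{Z}$ has covariance $\mathbf{V}\otimes\mathbf{U}$ for its vectorization:

```julia
using LinearAlgebra, Random

# Draw X ~ MN(M, U, V) as X = M + A Z Bᵀ, where U = A Aᵀ and V = B Bᵀ.
# Then vec(X) = vec(M) + (B ⊗ A) vec(Z), so cov(vec(X)) = V ⊗ U.
function rand_matrixnormal(rng::AbstractRNG, M, U, V)
    A = cholesky(Symmetric(U)).L
    B = cholesky(Symmetric(V)).L
    Z = randn(rng, size(M)...)
    return M + A * Z * B'
end

rng = MersenneTwister(0)
M = zeros(2, 3)
U = [1.0 0.3; 0.3 1.0]
V = Matrix{Float64}(I, 3, 3)
X = rand_matrixnormal(rng, M, U, V)
size(X)   # (2, 3)
```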
```julia
MatrixTDist(ν, M, Σ, Ω)
ν::Real            positive degrees of freedom
M::AbstractMatrix  n x p location
Σ::PDMat           n x n scale
Ω::PDMat           p x p scale
```

The matrix t-distribution generalizes the multivariate t-distribution to $n\times p$ real matrices $\mathbf{X}$. If $\mathbf{X}\sim MT_{n,p}(\nu,\mathbf{M},\boldsymbol{\Sigma}, \boldsymbol{\Omega})$, then its probability density function is

$f(\mathbf{X} ; \nu,\mathbf{M},\boldsymbol{\Sigma}, \boldsymbol{\Omega}) = c_0 \left|\mathbf{I}_n + \boldsymbol{\Sigma}^{-1}(\mathbf{X} - \mathbf{M})\boldsymbol{\Omega}^{-1}(\mathbf{X}-\mathbf{M})^{\rm{T}}\right|^{-\frac{\nu+n+p-1}{2}},$

where

$c_0=\frac{\Gamma_p\left(\frac{\nu+n+p-1}{2}\right)}{(\pi)^\frac{np}{2} \Gamma_p\left(\frac{\nu+p-1}{2}\right)} |\boldsymbol{\Omega}|^{-\frac{n}{2}} |\boldsymbol{\Sigma}|^{-\frac{p}{2}}.$

If the joint distribution $p(\mathbf{S},\mathbf{X})=p(\mathbf{S})p(\mathbf{X}|\mathbf{S})$ is given by

$\begin{aligned} \mathbf{S}&\sim IW_n(\nu + n - 1, \boldsymbol{\Sigma})\\ \mathbf{X}|\mathbf{S}&\sim MN_{n,p}(\mathbf{M}, \mathbf{S}, \boldsymbol{\Omega}), \end{aligned}$

then the marginal distribution of $\mathbf{X}$ is $MT_{n,p}(\nu,\mathbf{M},\boldsymbol{\Sigma},\boldsymbol{\Omega})$.

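This compound (inverse Wishart, then matrix normal) characterization gives a sampling sketch using only the standard library, assuming $\nu + n - 1$ is an integer so that the Wishart draw below is valid (the helper name `rand_matrixt` is ours):

```julia
using LinearAlgebra, Random

# Draw X ~ MT(ν, M, Σ, Ω) by compounding:
#   S ~ IW_n(ν + n - 1, Σ), then X | S ~ MN(M, S, Ω).
function rand_matrixt(rng::AbstractRNG, ν::Integer, M, Σ, Ω)
    n, p = size(M)
    df = ν + n - 1
    # S ~ IW_n(df, Σ): invert a W_n(df, Σ⁻¹) draw
    Y = cholesky(Symmetric(inv(Symmetric(Σ)))).L * randn(rng, n, df)
    S = inv(Symmetric(Y * Y'))
    # X | S ~ MN(M, S, Ω)
    A = cholesky(Symmetric(S)).L
    B = cholesky(Symmetric(Ω)).L
    return M + A * randn(rng, n, p) * B'
end

rng = MersenneTwister(7)
X = rand_matrixt(rng, 4, zeros(2, 3), Matrix(1.0I, 2, 2), Matrix(1.0I, 3, 3))
size(X)   # (2, 3)
```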
```julia
MatrixBeta(p, n1, n2)
p::Int    dimension
n1::Real  degrees of freedom (greater than p - 1)
n2::Real  degrees of freedom (greater than p - 1)
```

The matrix beta distribution generalizes the beta distribution to $p\times p$ real matrices $\mathbf{U}$ for which $\mathbf{U}$ and $\mathbf{I}_p-\mathbf{U}$ are both positive definite. If $\mathbf{U}\sim MB_p(n_1/2, n_2/2)$, then its probability density function is

$f(\mathbf{U}; n_1,n_2) = \frac{\Gamma_p(\frac{n_1+n_2}{2})}{\Gamma_p(\frac{n_1}{2})\Gamma_p(\frac{n_2}{2})} |\mathbf{U}|^{(n_1-p-1)/2}\left|\mathbf{I}_p-\mathbf{U}\right|^{(n_2-p-1)/2}.$

If $\mathbf{S}_1\sim W_p(n_1,\mathbf{I}_p)$ and $\mathbf{S}_2\sim W_p(n_2,\mathbf{I}_p)$ are independent, and we use $\mathcal{L}(\cdot)$ to denote the lower Cholesky factor, then

$\mathbf{U}=\mathcal{L}(\mathbf{S}_1+\mathbf{S}_2)^{-1}\mathbf{S}_1\mathcal{L}(\mathbf{S}_1+\mathbf{S}_2)^{-\rm{T}}$

has $\mathbf{U}\sim MB_p(n_1/2, n_2/2)$.

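For integer degrees of freedom, the construction from two Wisharts can be sketched with only the standard library (the helper name `rand_matrixbeta` is ours):

```julia
using LinearAlgebra, Random

# Draw U ~ MB_p(n1/2, n2/2) from S1 ~ W_p(n1, I) and S2 ~ W_p(n2, I):
# U = L⁻¹ S1 L⁻ᵀ, where L is the lower Cholesky factor of S1 + S2.
function rand_matrixbeta(rng::AbstractRNG, p::Integer, n1::Integer, n2::Integer)
    S1 = (X = randn(rng, p, n1); X * X')   # S1 ~ W_p(n1, I)
    S2 = (X = randn(rng, p, n2); X * X')   # S2 ~ W_p(n2, I)
    L = cholesky(Symmetric(S1 + S2)).L
    return Symmetric(L \ S1 / L')          # eigenvalues lie in (0, 1)
end

rng = MersenneTwister(3)
U = rand_matrixbeta(rng, 2, 5, 6)
isposdef(U) && isposdef(I - U)   # both hold almost surely
```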
```julia
MatrixFDist(n1, n2, B)
n1::Real  degrees of freedom (greater than p - 1)
n2::Real  degrees of freedom (greater than p - 1)
B::PDMat  p x p scale
```

The matrix F-distribution (sometimes called the matrix beta type II distribution) generalizes the F-distribution to $p\times p$ real, positive definite matrices $\boldsymbol{\Sigma}$. If $\boldsymbol{\Sigma}\sim MF_{p}(n_1/2,n_2/2,\mathbf{B})$, then its probability density function is

$f(\boldsymbol{\Sigma} ; n_1,n_2,\mathbf{B}) = \frac{\Gamma_p(\frac{n_1+n_2}{2})}{\Gamma_p(\frac{n_1}{2})\Gamma_p(\frac{n_2}{2})} |\mathbf{B}|^{n_2/2}|\boldsymbol{\Sigma}|^{(n_1-p-1)/2}|\mathbf{B}+\boldsymbol{\Sigma}|^{-(n_1+n_2)/2}.$

If the joint distribution $p(\boldsymbol{\Psi},\boldsymbol{\Sigma})=p(\boldsymbol{\Psi})p(\boldsymbol{\Sigma}|\boldsymbol{\Psi})$ is given by

$\begin{aligned} \boldsymbol{\Psi}&\sim W_p(n_1, \mathbf{B})\\ \boldsymbol{\Sigma}|\boldsymbol{\Psi}&\sim IW_p(n_2, \boldsymbol{\Psi}), \end{aligned}$

then the marginal distribution of $\boldsymbol{\Sigma}$ is $MF_{p}(n_1/2,n_2/2,\mathbf{B})$.

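The compound (Wishart, then inverse Wishart) characterization yields a standard-library sampling sketch for integer degrees of freedom (the helper name `rand_matrixf` is ours):

```julia
using LinearAlgebra, Random

# Draw Σ ~ MF_p(n1/2, n2/2, B) by compounding:
#   Ψ ~ W_p(n1, B), then Σ | Ψ ~ IW_p(n2, Ψ).
function rand_matrixf(rng::AbstractRNG, n1::Integer, n2::Integer, B::AbstractMatrix)
    p = size(B, 1)
    # Ψ ~ W_p(n1, B)
    X = cholesky(Symmetric(B)).L * randn(rng, p, n1)
    Ψ = X * X'
    # Σ | Ψ ~ IW_p(n2, Ψ): invert a W_p(n2, Ψ⁻¹) draw
    Y = cholesky(Symmetric(inv(Symmetric(Ψ)))).L * randn(rng, p, n2)
    return inv(Symmetric(Y * Y'))
end

rng = MersenneTwister(5)
Σ = rand_matrixf(rng, 6, 7, Matrix(1.0I, 2, 2))
isposdef(Σ)   # true almost surely
```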

## Internal Methods (for creating your own matrix-variate distributions)

```julia
_logpdf(d::MatrixDistribution, x::AbstractArray)
```

Evaluate the logarithm of the pdf at a given sample `x`. This function need not perform dimension checking.
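To show how these pieces fit together, here is a hedged sketch of a custom matrix-variate distribution built on this interface (the type `IIDNormalMatrix` and its fields are our own invention, not part of Distributions.jl): an $n\times p$ matrix with i.i.d. standard normal entries.

```julia
using Distributions, Random, LinearAlgebra

# A toy matrix-variate distribution: i.i.d. N(0, 1) entries in an n×p matrix.
struct IIDNormalMatrix <: ContinuousMatrixDistribution
    n::Int
    p::Int
end

# Size of each sample; `length` falls back to prod(size(d)).
Base.size(d::IIDNormalMatrix) = (d.n, d.p)

# In-place sampler required of every matrix-variate distribution.
function Distributions._rand!(rng::AbstractRNG, d::IIDNormalMatrix, A::AbstractMatrix)
    randn!(rng, A)
    return A
end

# No dimension checking needed here; the public `logpdf` handles that.
Distributions._logpdf(d::IIDNormalMatrix, X::AbstractMatrix) =
    -0.5 * (length(X) * log(2π) + sum(abs2, X))

d = IIDNormalMatrix(2, 3)
X = rand(d)      # a 2×3 matrix
logpdf(d, X)
```

The public `rand` and `logpdf` entry points allocate output and validate dimensions, so a new distribution only needs `size`, `_rand!`, and `_logpdf`.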