# Abstraction for Statistical Models

This package defines an abstract type `StatisticalModel`, and an abstract subtype `RegressionModel`.

In particular, instances of `StatisticalModel` implement the following methods.

`StatsBase.adjr2` — Function

```
adjr2(model::StatisticalModel)
adjr²(model::StatisticalModel)
```

Adjusted coefficient of determination (adjusted R-squared).

For linear models, the adjusted R² is defined as $1 - (1-R^2)(n-1)/(n-p)$, with $R^2$ the coefficient of determination, $n$ the number of observations, and $p$ the number of coefficients (including the intercept). This definition is generally known as the Wherry Formula I.
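Since this is a closed-form adjustment, the formula can be illustrated in plain Julia; the helper `adjusted_r2` and its inputs below are hypothetical, not part of the StatsBase API:

```julia
# Wherry Formula I, computed by hand (illustrative helper, not StatsBase API).
# r2 is the unadjusted coefficient of determination, n the number of
# observations, p the number of coefficients including the intercept.
adjusted_r2(r2, n, p) = 1 - (1 - r2) * (n - 1) / (n - p)

adjusted_r2(0.9, 100, 3)   # a bit below 0.9: extra coefficients are penalized
adjusted_r2(0.9, 100, 1)   # with a single coefficient, no penalty
```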

```
adjr2(model::StatisticalModel, variant::Symbol)
adjr²(model::StatisticalModel, variant::Symbol)
```

Adjusted pseudo-coefficient of determination (adjusted pseudo R-squared).

For nonlinear models, one of several pseudo R² definitions must be chosen via `variant`. The only currently supported variants are `:MacFadden`, defined as $1 - (\log L - k)/\log L_0$, and `:devianceratio`, defined as $1 - (D/(n-k))/(D_0/(n-1))$. In these formulas, $L$ is the likelihood of the model, $L_0$ that of the null model (the model including only the intercept), $D$ is the deviance of the model, $D_0$ is the deviance of the null model, $n$ is the number of observations (given by `nobs`) and $k$ is the number of consumed degrees of freedom of the model (as returned by `dof`).
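As a sketch of the two adjusted variants, the same formulas can be computed from hypothetical ingredients; in practice `loglikelihood`, `deviance`, `nobs` and `dof` supply these values from a fitted model:

```julia
# Hypothetical model summaries; a real model would provide these via
# loglikelihood(model), deviance(model), nobs(model) and dof(model).
logL, logL0 = -100.0, -150.0   # log-likelihoods of the model and null model
D, D0 = 200.0, 300.0           # deviances of the model and null model
n, k = 50, 4                   # observations and consumed degrees of freedom

adj_mcfadden      = 1 - (logL - k) / logL0               # :MacFadden variant
adj_devianceratio = 1 - (D / (n - k)) / (D0 / (n - 1))   # :devianceratio variant
```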

`StatsBase.aic` — Function

```
aic(model::StatisticalModel)
```

Akaike's Information Criterion, defined as $-2 \log L + 2k$, with $L$ the likelihood of the model, and $k$ its number of consumed degrees of freedom (as returned by `dof`).

`StatsBase.aicc` — Function

```
aicc(model::StatisticalModel)
```

Corrected Akaike's Information Criterion for small sample sizes (Hurvich and Tsai 1989), defined as $-2 \log L + 2k + 2k(k-1)/(n-k-1)$, with $L$ the likelihood of the model, $k$ its number of consumed degrees of freedom (as returned by `dof`), and $n$ the number of observations (as returned by `nobs`).

`StatsBase.bic` — Function

```
bic(model::StatisticalModel)
```

Bayesian Information Criterion, defined as $-2 \log L + k \log n$, with $L$ the likelihood of the model, $k$ its number of consumed degrees of freedom (as returned by `dof`), and $n$ the number of observations (as returned by `nobs`).
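The three criteria above share the same ingredients, so they can be compared side by side. The values below are made up, standing in for `loglikelihood(model)`, `dof(model)` and `nobs(model)`:

```julia
# Hypothetical model summaries (would come from loglikelihood, dof and nobs).
logL, k, n = -120.0, 5, 40

aic_val  = -2 * logL + 2 * k                        # Akaike
aicc_val = aic_val + 2 * k * (k - 1) / (n - k - 1)  # small-sample correction
bic_val  = -2 * logL + k * log(n)                   # Bayesian

# AICc adds a positive penalty whenever k > 1, so aicc_val > aic_val here.
```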

`StatsBase.coef` — Function

```
coef(model::StatisticalModel)
```

Return the coefficients of the model.

`StatsBase.coefnames` — Function

```
coefnames(model::StatisticalModel)
```

Return the names of the coefficients.

`StatsBase.coeftable` — Function

```
coeftable(model::StatisticalModel; level::Real=0.95)
```

Return a table with coefficients and related statistics of the model. `level` determines the level for confidence intervals (by default, 95%).

The returned `CoefTable` object implements the Tables.jl interface, and can be converted e.g. to a `DataFrame` via `using DataFrames; DataFrame(coeftable(model))`.

`StatsBase.confint` — Function

```
confint(model::StatisticalModel; level::Real=0.95)
```

Compute confidence intervals for coefficients, with confidence level `level` (by default 95%).

`StatsBase.deviance` — Function

```
deviance(model::StatisticalModel)
```

Return the deviance of the model relative to a reference, which is usually when applicable the saturated model. It is equal, *up to a constant*, to $-2 \log L$, with $L$ the likelihood of the model.

`StatsBase.dof` — Function

```
dof(model::StatisticalModel)
```

Return the number of degrees of freedom consumed in the model, including when applicable the intercept and the distribution's dispersion parameter.

`StatsBase.fit` — Function

```
fit(Histogram, data[, weight][, edges]; closed=:left, nbins)
```

Fit a histogram to `data`.

**Arguments**

- `data`: either a vector (for a 1-dimensional histogram), or a tuple of vectors of equal length (for an *n*-dimensional histogram).
- `weight`: an optional `AbstractWeights` (of the same length as the data vectors), denoting the weight each observation contributes to the bin. If no weight vector is supplied, each observation has weight 1.
- `edges`: a vector (typically an `AbstractRange` object), or tuple of vectors, that gives the edges of the bins along each dimension. If no edges are provided, these are determined from the data.

**Keyword arguments**

- `closed`: if `:left` (the default), the bin intervals are left-closed [a,b); if `:right`, intervals are right-closed (a,b].
- `nbins`: if no `edges` argument is supplied, the approximate number of bins to use along each dimension (can be either a single integer, or a tuple of integers).

**Examples**

```
# Univariate
h = fit(Histogram, rand(100))
h = fit(Histogram, rand(100), 0:0.1:1.0)
h = fit(Histogram, rand(100), nbins=10)
h = fit(Histogram, rand(100), weights(rand(100)), 0:0.1:1.0)
h = fit(Histogram, [20], 0:20:100)
h = fit(Histogram, [20], 0:20:100, closed=:right)
# Multivariate
h = fit(Histogram, (rand(100), rand(100)))
h = fit(Histogram, (rand(100), rand(100)), nbins=10)
```

Fit a statistical model.

```
fit(ZScoreTransform, X; dims=nothing, center=true, scale=true)
```

Fit standardization parameters to vector or matrix `X` and return a `ZScoreTransform` transformation object.

**Keyword arguments**

- `dims`: if `1`, fit standardization parameters in column-wise fashion; if `2`, fit in row-wise fashion. The default is `nothing`, which is equivalent to `dims=2` with a deprecation warning.
- `center`: if `true` (the default), center the data so that its mean is zero.
- `scale`: if `true` (the default), scale the data so that its variance is equal to one.

**Examples**

```
julia> using StatsBase

julia> X = [0.0 -0.5 0.5; 0.0 1.0 2.0]
2×3 Array{Float64,2}:
 0.0  -0.5  0.5
 0.0   1.0  2.0

julia> dt = fit(ZScoreTransform, X, dims=2)
ZScoreTransform{Float64}(2, 2, [0.0, 1.0], [0.5, 1.0])

julia> StatsBase.transform(dt, X)
2×3 Array{Float64,2}:
  0.0  -1.0  1.0
 -1.0   0.0  1.0
```

```
fit(UnitRangeTransform, X; dims=nothing, unit=true)
```

Fit scaling parameters to vector or matrix `X` and return a `UnitRangeTransform` transformation object.

**Keyword arguments**

- `dims`: if `1`, fit standardization parameters in column-wise fashion; if `2`, fit in row-wise fashion. The default is `nothing`.
- `unit`: if `true` (the default), shift the minimum data to zero.

**Examples**

```
julia> using StatsBase

julia> X = [0.0 -0.5 0.5; 0.0 1.0 2.0]
2×3 Array{Float64,2}:
 0.0  -0.5  0.5
 0.0   1.0  2.0

julia> dt = fit(UnitRangeTransform, X, dims=2)
UnitRangeTransform{Float64}(2, 2, true, [-0.5, 0.0], [1.0, 0.5])

julia> StatsBase.transform(dt, X)
2×3 Array{Float64,2}:
 0.5  0.0  1.0
 0.0  0.5  1.0
```

`StatsBase.fit!` — Function

Fit a statistical model in-place.

`StatsBase.informationmatrix` — Function

```
informationmatrix(model::StatisticalModel; expected::Bool = true)
```

Return the information matrix of the model. By default the Fisher information matrix is returned, while the observed information matrix can be requested with `expected = false`.

`StatsBase.isfitted` — Function

```
isfitted(model::StatisticalModel)
```

Indicate whether the model has been fitted.

`StatsBase.islinear` — Function

```
islinear(model::StatisticalModel)
```

Indicate whether the model is linear.

`StatsBase.loglikelihood` — Function

```
loglikelihood(model::StatisticalModel)
```

Return the log-likelihood of the model.

```
loglikelihood(model::StatisticalModel, ::Colon)
```

Return a vector of each observation's contribution to the log-likelihood of the model. In other words, this is the vector of the pointwise log-likelihood contributions.

In general, `sum(loglikelihood(model, :)) == loglikelihood(model)`.
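A minimal sketch of this identity for an i.i.d. normal sample, in plain Julia (the density function is written out by hand; this is not the StatsBase API itself):

```julia
# Pointwise log-likelihood contributions of a standard normal sample.
normal_logpdf(x, μ, σ) = -0.5 * log(2π * σ^2) - (x - μ)^2 / (2σ^2)

x = [1.2, -0.3, 0.7, 2.1]
pointwise = [normal_logpdf(xi, 0.0, 1.0) for xi in x]  # one term per observation
total = sum(pointwise)  # equals the joint log-likelihood of the whole sample
```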

```
loglikelihood(model::StatisticalModel, observation)
```

Return the contribution of `observation` to the log-likelihood of `model`.

`StatsBase.mss` — Function

```
mss(model::StatisticalModel)
```

Return the model sum of squares.

`StatsBase.nobs` — Function

```
nobs(model::StatisticalModel)
```

Return the number of independent observations on which the model was fitted. Be careful when using this information, as the definition of an independent observation may vary depending on the model, on the format used to pass the data, on the sampling plan (if specified), etc.

`StatsBase.nulldeviance` — Function

```
nulldeviance(model::StatisticalModel)
```

Return the deviance of the null model, that is the one including only the intercept.

`StatsBase.r2` — Function

```
r2(model::StatisticalModel)
r²(model::StatisticalModel)
```

Coefficient of determination (R-squared).

For a linear model, the R² is defined as $ESS/TSS$, with $ESS$ the explained sum of squares and $TSS$ the total sum of squares.
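The $ESS/TSS$ definition can be checked on a small ordinary least squares fit using only the LinearAlgebra standard library; the data and variable names below are made up for illustration:

```julia
using LinearAlgebra

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
X = [ones(5) x]    # design matrix with an intercept column
beta = X \ y       # ordinary least squares coefficients
yhat = X * beta    # fitted values

ybar = sum(y) / length(y)
ess = sum(abs2, yhat .- ybar)   # explained sum of squares
tss = sum(abs2, y .- ybar)      # total sum of squares
r2_linear = ess / tss
```

For an OLS fit with an intercept, $ESS + RSS = TSS$, so this agrees with the equivalent $1 - RSS/TSS$ form.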

```
r2(model::StatisticalModel, variant::Symbol)
r²(model::StatisticalModel, variant::Symbol)
```

Pseudo-coefficient of determination (pseudo R-squared).

For nonlinear models, one of several pseudo R² definitions must be chosen via `variant`. Supported variants are:

- `:MacFadden` (a.k.a. likelihood ratio index), defined as $1 - \log L/\log L_0$;
- `:CoxSnell`, defined as $1 - (L_0/L)^{2/n}$;
- `:Nagelkerke`, defined as $(1 - (L_0/L)^{2/n})/(1 - L_0^{2/n})$;
- `:devianceratio`, defined as $1 - D/D_0$.

In the above formulas, $L$ is the likelihood of the model, $L_0$ is the likelihood of the null model (the model with only an intercept), $D$ is the deviance of the model (from the saturated model), $D_0$ is the deviance of the null model, and $n$ is the number of observations (given by `nobs`).

The Cox-Snell and the deviance ratio variants both match the classical definition of R² for linear models.
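All four variants can be written down from the same quantities. A toy computation (the log-likelihoods and deviances below are hypothetical, normally supplied by `loglikelihood`, `deviance` and `nobs`):

```julia
# Hypothetical ingredients for the pseudo-R² variants.
logL, logL0 = -80.0, -120.0    # log L and log L_0
D, D0 = 160.0, 240.0           # deviances of the model and null model
n = 60

mcfadden   = 1 - logL / logL0
coxsnell   = 1 - exp(2 * (logL0 - logL) / n)    # 1 - (L0/L)^(2/n)
nagelkerke = coxsnell / (1 - exp(2 * logL0 / n))
devratio   = 1 - D / D0
```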

`StatsBase.rss` — Function

```
rss(model::StatisticalModel)
```

Return the residual sum of squares of the model.

`StatsBase.score` — Function

```
score(model::StatisticalModel)
```

Return the score of the model, that is the gradient of the log-likelihood with respect to the coefficients.

`StatsBase.stderror` — Function

```
stderror(model::StatisticalModel)
```

Return the standard errors for the coefficients of the model.

`StatsBase.vcov` — Function

```
vcov(model::StatisticalModel)
```

Return the variance-covariance matrix for the coefficients of the model.

`StatsBase.weights` — Method

```
weights(model::StatisticalModel)
```

Return the weights used in the model.

`RegressionModel` extends `StatisticalModel` by implementing the following additional methods.

`StatsBase.crossmodelmatrix` — Function

```
crossmodelmatrix(model::RegressionModel)
```

Return `X'X` where `X` is the model matrix of `model`. This function will return a pre-computed matrix stored in `model` if possible.

`StatsBase.dof_residual` — Function

```
dof_residual(model::RegressionModel)
```

Return the residual degrees of freedom of the model.

`StatsBase.fitted` — Function

```
fitted(model::RegressionModel)
```

Return the fitted values of the model.

`StatsBase.leverage` — Function

```
leverage(model::RegressionModel)
```

Return the diagonal of the projection matrix of the model.
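For an ordinary least squares model the projection matrix is $H = X(X'X)^{-1}X'$, so the leverages can be sketched directly with the LinearAlgebra standard library (the design matrix below is made up):

```julia
using LinearAlgebra

X = [ones(4) [1.0, 2.0, 3.0, 4.0]]   # toy design matrix: intercept + one predictor
H = X * inv(X' * X) * X'             # projection ("hat") matrix
h = diag(H)                          # leverage of each observation

# The leverages lie in [0, 1] and sum to the number of model coefficients.
```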

`StatsBase.cooksdistance` — Function

```
cooksdistance(model::RegressionModel)
```

Compute Cook's distance for each observation in linear model `model`, giving an estimate of the influence of each data point.
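A common closed form for a linear model is $D_i = r_i^2 h_i / (p s^2 (1-h_i)^2)$, with $r_i$ the residual, $h_i$ the leverage, $p$ the number of coefficients, and $s^2$ the residual variance estimate. A hand computation under these assumptions (not the StatsBase implementation itself; data are made up):

```julia
using LinearAlgebra

X = [ones(5) [1.0, 2.0, 3.0, 4.0, 5.0]]   # toy design matrix
y = [1.0, 2.2, 2.9, 4.1, 8.0]             # last observation is an outlier
beta = X \ y                               # ordinary least squares
r = y - X * beta                           # residuals
h = diag(X * inv(X' * X) * X')             # leverages
p = size(X, 2)
s2 = sum(abs2, r) / (length(y) - p)        # residual variance estimate
cooks = @. r^2 * h / (p * s2 * (1 - h)^2)  # Cook's distance per observation
```

The outlying last point combines a large residual with high leverage, so it dominates `cooks`.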

`StatsBase.meanresponse` — Function

```
meanresponse(model::RegressionModel)
```

Return the mean of the response.

`StatsBase.modelmatrix` — Function

```
modelmatrix(model::RegressionModel)
```

Return the model matrix (a.k.a. the design matrix).

`StatsBase.response` — Function

```
response(model::RegressionModel)
```

Return the model response (a.k.a. the dependent variable).

`StatsBase.responsename` — Function

```
responsename(model::RegressionModel)
```

Return the name of the model response (a.k.a. the dependent variable).

`StatsBase.predict` — Function

```
predict(model::RegressionModel, [newX])
```

Form the predicted response of `model`. An object with new covariate values `newX` can be supplied, which should have the same type and structure as that used to fit `model`; e.g. for a GLM it would generally be a `DataFrame` with the same variable names as the original predictors.

`StatsBase.predict!` — Function

In-place version of `predict`.

`StatsBase.residuals` — Function

```
residuals(model::RegressionModel)
```

Return the residuals of the model.