Parametric tests
Power divergence test
PowerDivergenceTest(x[, y]; lambda = 1.0, theta0 = ones(length(x))/length(x))

Perform a power divergence test.
If `y` is not given and `x` is a matrix with one row or column, or `x` is a vector, then a goodness-of-fit test is performed (`x` is treated as a one-dimensional contingency table). In this case, the hypothesis tested is whether the population probabilities equal those in `theta0`, or are all equal if `theta0` is not given.
If `x` is a matrix with at least two rows and columns, it is taken as a two-dimensional contingency table. Otherwise, `x` and `y` must be vectors of the same length. The contingency table is calculated using the `counts` function from the `StatsBase` package. Then the power divergence test is conducted under the null hypothesis that the joint distribution of the cell counts in a 2-dimensional contingency table is the product of the row and column marginals.
Note that the entries of `x` (and `y` if provided) must be non-negative integers.
The power divergence test statistic is given by

$$\frac{2}{λ(λ+1)} \sum_{i=1}^{I} \sum_{j=1}^{J} n_{ij} \left[\left(\frac{n_{ij}}{\hat{n}_{ij}}\right)^{λ} - 1\right]$$

where $n_{ij}$ is the cell count in the $i$th row and $j$th column, $\hat{n}_{ij}$ is the corresponding expected count under the null hypothesis, and $λ$ is a real number determining the nature of the test to be performed:
- $λ = 1$: equal to Pearson's chi-squared statistic
- $λ \to 0$: converges to the likelihood ratio test statistic
- $λ \to -1$: converges to the minimum discrimination information statistic (Gokhale and Kullback, 1978)
- $λ = -2$: equals Neyman modified chi-squared (Neyman, 1949)
- $λ = -1/2$: equals the Freeman-Tukey statistic (Freeman and Tukey, 1950).
Under regularity conditions, the asymptotic distributions are identical (see Drost et al. 1989). The $χ^2$ null approximation works best for $λ$ near $2/3$.
Implements: `pvalue`, `confint(::PowerDivergenceTest)`
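As a quick sketch of the call described above (hypothetical counts; the `stat` field name is an assumption about the returned test object, while `pvalue` is documented here):

```julia
using HypothesisTests

# Hypothetical one-dimensional contingency table with 50 observations.
x = [20, 10, 10, 10]

# λ = 1 gives Pearson's chi-squared statistic; with four equally likely
# categories the expected count is 12.5 per cell, so the statistic is
# (7.5^2 + 3 * 2.5^2) / 12.5 = 6.0.
t = PowerDivergenceTest(x, lambda = 1.0)
t.stat       # test statistic, 6.0 here
pvalue(t)    # asymptotic χ² p-value on 3 degrees of freedom
```

Passing `lambda = 0.0` or `lambda = -0.5` instead yields the likelihood ratio and Freeman-Tukey statistics for the same table.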
References
- Agresti, Alan. Categorical Data Analysis, 3rd Edition. Wiley, 2013.
Pearson chi-squared test
HypothesisTests.ChisqTest — Function

ChisqTest(x[, y][, theta0 = ones(length(x))/length(x)])
Perform a Pearson chi-squared test (equivalent to a `PowerDivergenceTest` with $λ = 1$).
If `y` is not given and `x` is a matrix with one row or column, or `x` is a vector, then a goodness-of-fit test is performed (`x` is treated as a one-dimensional contingency table). In this case, the hypothesis tested is whether the population probabilities equal those in `theta0`, or are all equal if `theta0` is not given.
If `x` is a matrix with at least two rows and columns, it is taken as a two-dimensional contingency table. Otherwise, `x` and `y` must be vectors of the same length. The contingency table is calculated using the `counts` function from the `StatsBase` package. Then the power divergence test is conducted under the null hypothesis that the joint distribution of the cell counts in a 2-dimensional contingency table is the product of the row and column marginals.
Note that the entries of `x` (and `y` if provided) must be non-negative integers.
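A minimal sketch with made-up counts, covering both call forms described above:

```julia
using HypothesisTests

# Goodness of fit: are 100 hypothetical die rolls consistent with a fair die?
rolls = [16, 18, 16, 14, 12, 24]
t1 = ChisqTest(rolls)    # theta0 defaults to uniform probabilities
pvalue(t1)

# Independence: 2×2 contingency table of hypothetical counts.
table = [25 15;
         10 30]
t2 = ChisqTest(table)
pvalue(t2)
```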
Multinomial likelihood ratio test
HypothesisTests.MultinomialLRTest — Function

MultinomialLRTest(x[, y][, theta0 = ones(length(x))/length(x)])
Perform a multinomial likelihood ratio test (equivalent to a `PowerDivergenceTest` with $λ = 0$).
If `y` is not given and `x` is a matrix with one row or column, or `x` is a vector, then a goodness-of-fit test is performed (`x` is treated as a one-dimensional contingency table). In this case, the hypothesis tested is whether the population probabilities equal those in `theta0`, or are all equal if `theta0` is not given.
If `x` is a matrix with at least two rows and columns, it is taken as a two-dimensional contingency table. Otherwise, `x` and `y` must be vectors of the same length. The contingency table is calculated using the `counts` function from the `StatsBase` package. Then the power divergence test is conducted under the null hypothesis that the joint distribution of the cell counts in a 2-dimensional contingency table is the product of the row and column marginals.
Note that the entries of `x` (and `y` if provided) must be non-negative integers.
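A small goodness-of-fit sketch against non-uniform null probabilities (hypothetical genotype counts):

```julia
using HypothesisTests

# Hypothetical genotype counts; H0: probabilities are (0.25, 0.5, 0.25).
x = [30, 52, 18]
t = MultinomialLRTest(x, [0.25, 0.5, 0.25])
pvalue(t)    # asymptotic χ² p-value on 2 degrees of freedom
```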
t-test
HypothesisTests.OneSampleTTest — Type

OneSampleTTest(xbar::Real, stddev::Real, n::Int, μ0::Real = 0)
Perform a one-sample t-test of the null hypothesis that `n` values with mean `xbar` and sample standard deviation `stddev` come from a distribution with mean `μ0` against the alternative hypothesis that the distribution does not have mean `μ0`.
OneSampleTTest(x::AbstractVector{T<:Real}, y::AbstractVector{T<:Real}, μ0::Real = 0)
Perform a paired-sample t-test of the null hypothesis that the differences between pairs of values in vectors `x` and `y` come from a distribution with mean `μ0` against the alternative hypothesis that the distribution does not have mean `μ0`.
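Both call forms can be sketched as follows (hypothetical numbers; the `t` and `df` field names are assumptions about the returned test object):

```julia
using HypothesisTests

# Summary-statistic form: t = (2.0 - 0) / (1.0 / sqrt(9)) = 6.0 on 8 df.
t = OneSampleTTest(2.0, 1.0, 9)
t.t     # 6.0
t.df    # 8

# Paired form: tests whether mean(after - before) equals μ0 = 0.
before = [12.1, 13.5, 11.8, 12.9]
after  = [12.8, 14.0, 12.5, 13.1]
tp = OneSampleTTest(after, before)
pvalue(tp)
```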
EqualVarianceTTest(x::AbstractVector{T<:Real}, y::AbstractVector{T<:Real})
Perform a two-sample t-test of the null hypothesis that `x` and `y` come from distributions with equal means and variances against the alternative hypothesis that the distributions have different means but equal variances.
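A short sketch on hypothetical samples with similar spread:

```julia
using HypothesisTests

# Hypothetical samples with similar spread.
x = [1.2, 2.1, 1.9, 2.4, 2.0]
y = [2.9, 3.1, 2.7, 3.5, 3.3]
t = EqualVarianceTTest(x, y)
pvalue(t)     # two-sided p-value
confint(t)    # confidence interval for the difference in means
```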
UnequalVarianceTTest(x::AbstractVector{T<:Real}, y::AbstractVector{T<:Real})
Perform an unequal variance two-sample t-test of the null hypothesis that `x` and `y` come from distributions with equal means against the alternative hypothesis that the distributions have different means.
This test is sometimes known as Welch's t-test. It differs from the equal variance t-test in that it computes the number of degrees of freedom of the test using the Welch-Satterthwaite equation:

$$ν ≈ \frac{\left(\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}\right)^2}{\dfrac{(s_1^2/n_1)^2}{n_1 - 1} + \dfrac{(s_2^2/n_2)^2}{n_2 - 1}}$$

where $s_1^2$ and $s_2^2$ are the sample variances and $n_1$ and $n_2$ the sample sizes.
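As a small sanity check of the Welch-Satterthwaite degrees of freedom (hypothetical data; the `df` field name is an assumption about the returned object):

```julia
using HypothesisTests

x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]
t = UnequalVarianceTTest(x, y)

# Both samples have variance 1 and size 3, so the Welch-Satterthwaite
# value (1/3 + 1/3)^2 / ((1/3)^2/2 + (1/3)^2/2) = 4 coincides with the
# pooled degrees of freedom n1 + n2 - 2.
t.df    # 4.0
pvalue(t)
```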
z-test
HypothesisTests.OneSampleZTest — Type

OneSampleZTest(xbar::Real, stddev::Real, n::Int, μ0::Real = 0)
Perform a one-sample z-test of the null hypothesis that `n` values with mean `xbar` and population standard deviation `stddev` come from a distribution with mean `μ0` against the alternative hypothesis that the distribution does not have mean `μ0`.
OneSampleZTest(x::AbstractVector{T<:Real}, y::AbstractVector{T<:Real}, μ0::Real = 0)
Perform a paired-sample z-test of the null hypothesis that the differences between pairs of values in vectors `x` and `y` come from a distribution with mean `μ0` against the alternative hypothesis that the distribution does not have mean `μ0`.
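A minimal sketch of the summary-statistic form (the `z` field name is an assumption about the returned object):

```julia
using HypothesisTests

# z = (2.0 - 0) / (1.0 / sqrt(9)) = 6.0, referred to a standard normal.
t = OneSampleZTest(2.0, 1.0, 9)
t.z    # 6.0
pvalue(t)
```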
EqualVarianceZTest(x::AbstractVector{T<:Real}, y::AbstractVector{T<:Real})
Perform a two-sample z-test of the null hypothesis that `x` and `y` come from distributions with equal means and variances against the alternative hypothesis that the distributions have different means but equal variances.
UnequalVarianceZTest(x::AbstractVector{T<:Real}, y::AbstractVector{T<:Real})
Perform an unequal variance two-sample z-test of the null hypothesis that `x` and `y` come from distributions with equal means against the alternative hypothesis that the distributions have different means.
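A short sketch on hypothetical samples (in practice the z-tests treat the sample variances as if they were known population variances, so they are best suited to large samples):

```julia
using HypothesisTests

# Hypothetical samples; small here only for illustration.
x = [5.1, 4.9, 5.3, 5.0, 5.2]
y = [4.6, 4.8, 4.5, 4.9, 4.7]
t = UnequalVarianceZTest(x, y)
pvalue(t)
```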
F-test
HypothesisTests.VarianceFTest — Type

VarianceFTest(x::AbstractVector{<:Real}, y::AbstractVector{<:Real})
Perform an F-test of the null hypothesis that two real-valued vectors `x` and `y` have equal variances.
Implements: `pvalue`
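A minimal sketch (hypothetical data; the `F` field name is an assumption about the returned test object):

```julia
using HypothesisTests

# var(x) = 2.5 and var(y) = 10.0, so the F statistic is 2.5 / 10.0 = 0.25.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 6.0, 8.0, 10.0]
t = VarianceFTest(x, y)
t.F        # 0.25
pvalue(t)
```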
References
- George E. P. Box, "Non-Normality and Tests on Variances", Biometrika 40 (3/4): 318–335, 1953.
External links