# K-means

K-means is a classical method for clustering or vector quantization. It produces a fixed number of clusters, each associated with a center (also known as a prototype), and each data point is assigned to the cluster with the nearest center.

From a mathematical standpoint, K-means is a coordinate descent algorithm that solves the following optimization problem:

$\text{minimize} \ \sum_{i=1}^n \| \mathbf{x}_i - \boldsymbol{\mu}_{z_i} \|^2 \ \text{w.r.t.} \ (\boldsymbol{\mu}, z)$

Here, $\boldsymbol{\mu}_k$ is the center of the $k$-th cluster, and $z_i$ is the index of the cluster to which the $i$-th point $\mathbf{x}_i$ is assigned.
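The two coordinate-descent steps alternate: fix the centers and optimize the assignments, then fix the assignments and optimize the centers. A minimal sketch in plain Julia (a hypothetical `naive_kmeans` helper, not the package API; it uses a simplistic deterministic seeding for brevity):

```julia
using Statistics

function naive_kmeans(X::AbstractMatrix{<:AbstractFloat}, k::Integer; maxiter::Integer=100)
    d, n = size(X)
    # simplistic deterministic seeding for this sketch: k evenly spaced data points
    μ = X[:, round.(Int, range(1, n; length=k))]
    z = zeros(Int, n)
    for _ in 1:maxiter
        # assignment step: z_i = argmin_j ||x_i - μ_j||²
        for i in 1:n
            z[i] = argmin([sum(abs2, X[:, i] .- μ[:, j]) for j in 1:k])
        end
        # update step: μ_j = mean of the points currently assigned to cluster j
        for j in 1:k
            members = findall(==(j), z)
            isempty(members) || (μ[:, j] = vec(mean(X[:, members]; dims=2)))
        end
    end
    return μ, z
end
```

Each step can only decrease the objective, so the procedure converges to a local minimum; the quality of that minimum depends strongly on the seeding, which is why `kmeans` below defaults to K-means++ (`:kmpp`).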

`Clustering.kmeans` (function)

```julia
kmeans(X, k; [...]) -> KmeansResult
```

K-means clustering of the $d×n$ data matrix X (each column of X is a $d$-dimensional data point) into k clusters.

**Arguments**

- `init` (defaults to `:kmpp`): how cluster seeds should be initialized; it can be one of the following:
  - a `Symbol`, the name of a seeding algorithm (see Seeding for the list of supported methods);
  - an instance of `SeedingAlgorithm`;
  - an integer vector of length $k$ that provides the indices of the points to use as initial seeds.
- `weights`: an $n$-element vector of point weights (the cluster centers are then the weighted means of the cluster members).
- `maxiter`, `tol`, `display`: see common options.
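For illustration, the `init` and `weights` options might be used as follows (the random data, seed indices, and weight values are made up for this sketch):

```julia
using Clustering

X = rand(5, 100)  # 100 random 5-dimensional points

# seed the 3 clusters with the data points at columns 1, 50, and 100
R = kmeans(X, 3; init=[1, 50, 100], maxiter=50)

# weighted clustering: the centers become weighted means of the cluster members
w = rand(100)     # arbitrary point weights
Rw = kmeans(X, 3; weights=w)
```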
`Clustering.KmeansResult` (type)

```julia
KmeansResult{C<:AbstractMatrix{<:AbstractFloat},D<:Real,WC<:Real} <: ClusteringResult
```

The output of kmeans and kmeans!.

**Type parameters**

- `C<:AbstractMatrix{<:AbstractFloat}`: type of the `centers` matrix
- `D<:Real`: type of the assignment cost
- `WC<:Real`: type of the cluster weight
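As a sketch of how a result might be inspected (random data; the `centers` field and the accessor functions also appear in the examples below):

```julia
using Clustering

X = rand(5, 100)
R = kmeans(X, 4)

M = R.centers       # 5×4 centers matrix (the C type parameter)
a = assignments(R)  # cluster index of each of the 100 points
c = counts(R)       # cluster sizes; they sum to 100
```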

If you already have a set of initial center vectors, `kmeans!` can be used:

`Clustering.kmeans!` (function)

```julia
kmeans!(X, centers; [kwargs...]) -> KmeansResult
```

Update the current cluster centers ($d×k$ matrix, where $d$ is the dimension and $k$ the number of centroids) using the $d×n$ data matrix X (each column of X is a $d$-dimensional data point).

See kmeans for the description of optional kwargs.
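For instance, `kmeans!` might be seeded with copies of selected data points (the data and indices are arbitrary for this sketch):

```julia
using Clustering

X = rand(5, 100)

# initial guess: a fresh 5×3 matrix copied from three data points;
# kmeans! refines these centers in place
centers = X[:, [1, 50, 100]]
R = kmeans!(X, centers; maxiter=100)
```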


## Examples

```julia
using Clustering

# make a random dataset with 1000 random 5-dimensional points
X = rand(5, 1000)

# cluster X into 20 clusters using K-means
R = kmeans(X, 20; maxiter=200, display=:iter)

@assert nclusters(R) == 20 # verify the number of clusters

a = assignments(R) # get the assignments of points to clusters
c = counts(R)      # get the cluster sizes
M = R.centers      # get the cluster centers
```

```
5×20 Array{Float64,2}:
 0.164604  0.228906  0.350394  0.850131  …  0.77535   0.769705  0.790462
 0.778679  0.331674  0.194845  0.367476     0.193065  0.786183  0.415769
 0.815839  0.265362  0.274903  0.687911     0.252092  0.235703  0.755996
 0.510177  0.292274  0.763865  0.784683     0.474675  0.526003  0.243253
 0.303659  0.783869  0.232346  0.718252     0.762558  0.816224  0.745028
```
```julia
using RDatasets, Clustering, Plots

iris = dataset("datasets", "iris"); # load the data

# extract the features to use for clustering (a 4×150 matrix)
features = collect(Matrix(iris[:, 1:4])');
result = kmeans(features, 3); # run K-means for 3 clusters

# plot with the point color mapped to the assigned cluster index
scatter(iris.PetalLength, iris.PetalWidth, marker_z=result.assignments,
        color=:lightrainbow, legend=false)
```