# K-means

K-means is a classical method for clustering or vector quantization. It produces a fixed number of clusters, each associated with a *center* (also known as a *prototype*), and each data point is assigned to the cluster with the nearest center.

From a mathematical standpoint, K-means is a coordinate descent algorithm that solves the following optimization problem:

\[\text{minimize} \ \sum_{i=1}^n \| \mathbf{x}_i - \boldsymbol{\mu}_{z_i} \|^2 \ \text{w.r.t.} \ (\boldsymbol{\mu}, z)\]

Here, $\boldsymbol{\mu}_k$ is the center of the $k$-th cluster, and $z_i$ is the index of the cluster to which the $i$-th point $\mathbf{x}_i$ is assigned.
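The coordinate descent alternates between the two blocks of variables: with the centers fixed, each $z_i$ is set to the nearest center (assignment step); with the assignments fixed, each $\boldsymbol{\mu}_k$ is recomputed as the mean of its assigned points (update step). A minimal sketch of this iteration (Lloyd's algorithm) — not the library's actual implementation, which adds smarter seeding and convergence checks:

```julia
# Sketch of the K-means coordinate descent (Lloyd's algorithm).
# The real kmeans() in Clustering.jl uses :kmpp seeding and a tolerance-based stop.
function lloyd(X::AbstractMatrix{<:AbstractFloat}, k::Integer; maxiter::Integer=100)
    d, n = size(X)
    μ = X[:, rand(1:n, k)]   # crude seeding: k random data points (may repeat)
    z = zeros(Int, n)
    for _ in 1:maxiter
        # assignment step: z_i = argmin_j ||x_i - μ_j||²
        for i in 1:n
            z[i] = argmin([sum(abs2, X[:, i] .- μ[:, j]) for j in 1:k])
        end
        # update step: μ_j = mean of the points currently assigned to cluster j
        for j in 1:k
            members = findall(==(j), z)
            if !isempty(members)
                μ[:, j] = vec(sum(X[:, members]; dims=2)) ./ length(members)
            end
        end
    end
    return μ, z
end

μ, z = lloyd(rand(2, 100), 3)
```

Each iteration can only decrease the objective, since both steps minimize it over one block of variables while holding the other fixed.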

`Clustering.kmeans` — Function

`kmeans(X, k, [...]) -> KmeansResult`

K-means clustering of the $d×n$ data matrix `X` (each column of `X` is a $d$-dimensional data point) into `k` clusters.

**Arguments**

- `init` (defaults to `:kmpp`): how cluster seeds should be initialized; could be one of the following:
  - a `Symbol`, the name of a seeding algorithm (see Seeding for a list of supported methods);
  - an instance of `SeedingAlgorithm`;
  - an integer vector of length $k$ that provides the indices of points to use as initial seeds.
- `weights`: $n$-element vector of point weights (the cluster centers are the weighted means of cluster members)
- `maxiter`, `tol`, `display`: see common options
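As a small illustration of the `init` keyword (a sketch, assuming the `Clustering` package is loaded; the seed indices here are arbitrary), the cluster seeds can be picked explicitly by passing point indices:

```julia
using Clustering

X = rand(3, 100)  # 100 random 3-dimensional points
# seed the 4 clusters with data points 1, 25, 50, and 75 of X
R = kmeans(X, 4; init=[1, 25, 50, 75], maxiter=50)
```

Passing explicit indices makes the run reproducible with respect to seeding, whereas `:kmpp` (K-means++) picks seeds randomly with distance-weighted probabilities.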

`Clustering.KmeansResult` — Type

`KmeansResult{C,D<:Real,WC<:Real} <: ClusteringResult`

The output of `kmeans` and `kmeans!`.

**Type parameters**

- `C<:AbstractMatrix{<:AbstractFloat}`: type of the `centers` matrix
- `D<:Real`: type of the assignment cost
- `WC<:Real`: type of the cluster weight

If you already have a set of initial center vectors, `kmeans!` could be used:

`Clustering.kmeans!` — Function

`kmeans!(X, centers; [kwargs...]) -> KmeansResult`

Update the current cluster `centers` ($d×k$ matrix, where $d$ is the dimension and $k$ the number of centroids) using the $d×n$ data matrix `X` (each column of `X` is a $d$-dimensional data point).

See `kmeans` for the description of optional `kwargs`.
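For instance, a sketch of seeding `kmeans!` with centers taken from the data itself (note that slicing `X` with a range copies, so `X` is not mutated through `centers`):

```julia
using Clustering

X = rand(5, 1000)        # 1000 random 5-dimensional points
centers = X[:, 1:20]     # 5×20 matrix: the first 20 points as initial centers (a copy)
R = kmeans!(X, centers; maxiter=100)
```

Unlike `kmeans`, the `centers` matrix is updated in place, which avoids an allocation when running the algorithm repeatedly from different starting points.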

## Examples

```
using Clustering
# make a random dataset with 1000 random 5-dimensional points
X = rand(5, 1000)
# cluster X into 20 clusters using K-means
R = kmeans(X, 20; maxiter=200, display=:iter)
@assert nclusters(R) == 20 # verify the number of clusters
a = assignments(R) # get the assignments of points to clusters
c = counts(R) # get the cluster sizes
M = R.centers # get the cluster centers
```

```
5×20 Matrix{Float64}:
0.398119 0.218024 0.719639 0.664693 … 0.249263 0.424277 0.306384
0.232337 0.816726 0.637561 0.209547 0.322248 0.244281 0.764116
0.137113 0.739621 0.791013 0.70474 0.520285 0.705569 0.236996
0.748971 0.21472 0.229925 0.183843 0.805548 0.785696 0.659338
0.782496 0.739217 0.768887 0.248272 0.258066 0.790759 0.731828
```

Scatter plot of the K-means clustering results:

```
using RDatasets, Clustering, Plots
iris = dataset("datasets", "iris"); # load the data
features = collect(Matrix(iris[:, 1:4])'); # features to use for clustering
result = kmeans(features, 3); # run K-means with 3 clusters
# plot with the point color mapped to the assigned cluster index
scatter(iris.PetalLength, iris.PetalWidth, marker_z=result.assignments,
color=:lightrainbow, legend=false)
```