K-Means / Neural-Gas

The k-means algorithm is sometimes also called the basic ISODATA algorithm. The idea behind it is simple: a set of example vectors is clustered into a few classes iteratively, such that a distortion measure is minimized. Every class has a mean vector. In a k-means iteration, every example vector is assigned to the class with the closest mean vector. After that, every mean vector is replaced by the average of all vectors that have been assigned to it.
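
A minimal sketch of one such iteration in Python/NumPy, purely for illustration (the function name kmeans_iteration and the handling of empty classes are assumptions made here, not Janus code):

    import numpy as np

    def kmeans_iteration(data, means):
        """One k-means iteration: assign every example vector to the class
        with the closest mean, then replace each mean by the average of
        the vectors assigned to it."""
        # distance of every example vector (rows of data) to every class mean
        dists = np.linalg.norm(data[:, None, :] - means[None, :, :], axis=2)
        labels = dists.argmin(axis=1)            # index of the closest mean
        new_means = means.copy()
        for k in range(len(means)):
            members = data[labels == k]
            if len(members) > 0:                 # leave empty classes unchanged
                new_means[k] = members.mean(axis=0)
        return new_means, labels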

The neural-gas algorithm is a generalization of the k-means algorithm. The difference is that every example vector is not assigned to a single class but to several classes: it is assigned to the closest class with a high weight and to the other classes with smaller weights. After an iteration, the mean of a class is replaced by the weighted average of all assigned vectors.
This way the neural-gas algorithm is smoother: every class gets to see all data (some of it with a very low weight). The neural-gas algorithm uses a temperature value t which defines what weight is given to the closest class, to the second-closest class, and so on. The weights decay exponentially with the increasing distance rank of the classes, such that the n-th-closest class gets a weight of exp(-n/t).
The closest class (the one ranked n=0) therefore always gets the weight 1.0. The higher the temperature, the more weight the more distant classes receive; if the temperature is infinite, every class gets the weight 1.0.
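
A minimal Python/NumPy sketch of one such weighted iteration, again only as an illustration (the function name neural_gas_iteration and the vectorized rank computation are choices made here, not Janus internals):

    import numpy as np

    def neural_gas_iteration(data, means, t):
        """One neural-gas iteration: every example vector contributes to every
        class, weighted by exp(-n/t), where n is the distance rank of the
        class (n=0 for the closest mean)."""
        dists = np.linalg.norm(data[:, None, :] - means[None, :, :], axis=2)
        ranks = dists.argsort(axis=1).argsort(axis=1)  # distance rank of every class per vector
        weights = np.exp(-ranks / t)                   # closest class (n=0) gets weight 1.0
        # every new mean is the weighted average of all example vectors
        return (weights.T @ data) / weights.sum(axis=0)[:, None]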

In Janus we usually use a very low temperature and decrease it every iteration, so that in the end it is virtually zero and the neural-gas algorithm behaves like the k-means algorithm.
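
Such an annealing loop could look like the following sketch; the toy data, the starting temperature, the decay factor 0.7 and the iteration count are made up for illustration and are not Janus defaults:

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(size=(500, 2))                             # toy example vectors
    means = data[rng.choice(len(data), size=4, replace=False)]   # initial class means

    # Start with a moderate temperature and let it decay every iteration,
    # so the final iterations behave almost like plain k-means.
    t = 1.0
    for _ in range(20):
        means = neural_gas_iteration(data, means, t)   # sketch from above
        t = max(0.7 * t, 1e-6)                         # keep t > 0 to avoid division by zero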