What are Self-Organizing Maps?
Self-Organizing Maps have been around for a while (I prefer to call them ‘Kohonen Maps’ but I guess ‘SOM’ rolls off the tongue better).
There is no shortage of articles and papers available online that cover SOMs in depth, so I’ll keep this short.
A self-organizing map is an artificial neural network that does not follow the patterns typically associated with neural networks. Yes, SOMs require training, and yes, trained SOMs automatically map input vectors. However, the structure of a trained SOM has more in common with a trained k-means model than with, say, an RNN. Rather than training an SOM with an error-correction scheme like backpropagation, we use vector quantization and a neighborhood function to build it: each input is matched to its nearest unit, and that unit and its grid neighbors are pulled toward the input.
To oversimplify, SOMs are a means of reducing high-dimensional data to a 2-D or 3-D space in which similar observations are represented close together.
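To make the training loop above concrete, here is a minimal sketch of the classic Kohonen algorithm in NumPy. This is my own illustrative code, not taken from any of the libraries mentioned below; the function name, grid shape, and decay schedules are all assumptions chosen for brevity.

```python
import numpy as np

def train_som(data, grid_shape=(10, 10), n_iters=1000,
              lr0=0.5, sigma0=3.0, seed=0):
    """Train a 2-D SOM on `data` of shape (n_samples, n_features).

    Illustrative sketch only: hyperparameters and exponential decay
    schedules are arbitrary choices, not canonical values.
    """
    rng = np.random.default_rng(seed)
    h, w = grid_shape
    weights = rng.random((h * w, data.shape[1]))
    # Grid coordinates of each unit, used by the neighborhood function.
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)

    for t in range(n_iters):
        x = data[rng.integers(len(data))]
        # Vector quantization: the best matching unit (BMU) is the
        # unit whose weight vector is nearest to the input.
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Decay the learning rate and neighborhood radius over time.
        lr = lr0 * np.exp(-t / n_iters)
        sigma = sigma0 * np.exp(-t / n_iters)
        # Gaussian neighborhood: units close to the BMU on the grid
        # are pulled toward the input more strongly.
        d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
        theta = np.exp(-d2 / (2 * sigma ** 2))
        weights += lr * theta[:, None] * (x - weights)
    return weights.reshape(h, w, -1)
```

After training, mapping a new input means finding its BMU on the grid, which is where the "similar observations land close together" property comes from.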
There is currently no native SOM implementation in TensorFlow. PyMVPA, py-kohonen, sevamoo/SOMPY, and JustGlowing/minisom work well enough, but as my interest in TensorFlow has grown over the past six months, I figured this would be a good time to talk about creating a new implementation. Sachin Joglekar's implementation, published back in November 2015, is a great start, but a lot more could be done with it.