
Of all the machine learning algorithms, the most fascinating are neural networks. They require neither statistical hypotheses nor rigorous data preparation beyond normalization.
The power of a neural network lies in its architecture, its activation functions, and its regularization, among other factors.
This article explores a particular kind of neural network: the autoencoder.
The autoencoder (or autoassociator) is a multilayer feed-forward neural network, usually trained with the backpropagation algorithm, whose goal is to reproduce its own input at the output layer. Any activation function can be used: sigmoid, ReLU, tanh, and so on; the list of possible activation functions for neural units is long.
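As a quick illustration, here is a minimal sketch of such a network in Keras. The framework, layer sizes, activations, and placeholder data are illustrative assumptions, not prescribed by the text; the point is simply that the network is trained to map its input back to itself.

```python
# Minimal autoencoder sketch (illustrative sizes and framework choice).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 784      # e.g. a flattened 28x28 image (assumed for the example)
encoding_dim = 32    # size of the compressed internal representation

# Encoder: compress the input into a lower-dimensional code.
inputs = keras.Input(shape=(input_dim,))
encoded = layers.Dense(encoding_dim, activation="relu")(inputs)

# Decoder: reconstruct the input from the code.
decoded = layers.Dense(input_dim, activation="sigmoid")(encoded)

autoencoder = keras.Model(inputs, decoded)

# Trained with backpropagation; the target is the input itself.
autoencoder.compile(optimizer="adam", loss="mse")

x_train = np.random.rand(1000, input_dim).astype("float32")  # placeholder data
autoencoder.fit(x_train, x_train, epochs=5, batch_size=64, verbose=0)
```

Any of the activation functions mentioned above could be swapped into the `Dense` layers; the architecture, not the specific activation, is what makes this network an autoencoder.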