Autoencoders

tags: Neural networks, Data representation

Autoencoders and PCA

The relation between autoencoders and PCA is strong. In particular, a small autoencoder with only linear activations is intuitively very close to a PCA decomposition. (Bourlard, Kamp 1988) analyse this case and show that nonlinear activations in the hidden layer are of no help when the output layer is linear: the optimal weights are still determined by a singular value decomposition of the input data. Quoting Elad Plaut:

It is well known that an autoencoder with a single fully-connected hidden layer, a linear activation function and a squared error cost function trains weights that span the same subspace as the one spanned by the principal component loading vectors, but that they are not identical to the loading vectors.

The quote is from (Plaut 2018), which demonstrates that the autoencoder weights nonetheless contain all the information in the loading vectors: applying SVD to the decoder weight matrix recovers the principal directions themselves.
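Both facts are easy to check numerically. The sketch below (plain NumPy; the synthetic data, hyperparameters, and variable names are illustrative assumptions, not taken from either paper) trains a single-hidden-layer linear autoencoder by gradient descent on the squared error, then verifies that the decoder columns span the principal subspace and that, following Plaut's observation, an SVD of the decoder recovers the individual loading vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, centred data with low-rank structure (illustrative only).
n, d, k = 500, 10, 3
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))
X += 0.05 * rng.normal(size=(n, d))
X -= X.mean(axis=0)

# PCA loading vectors: top right singular vectors of the centred data.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt[:k].T                        # (d, k) principal directions

# Linear autoencoder x -> W2 (W1 x), squared-error loss, plain
# gradient descent; step size and iteration count chosen for this demo.
W1 = 0.1 * rng.normal(size=(k, d))  # encoder
W2 = 0.1 * rng.normal(size=(d, k))  # decoder
lr = 5e-3
for _ in range(5000):
    Z = X @ W1.T                    # codes           (n, k)
    R = Z @ W2.T - X                # reconstruction residual (n, d)
    gW2 = R.T @ Z / n               # dL/dW2
    gW1 = W2.T @ R.T @ X / n        # dL/dW1
    W1 -= lr * gW1
    W2 -= lr * gW2

# 1) The decoder columns span the principal subspace: projecting the
#    loading vectors onto that column space should leave ~no residual.
Q, _ = np.linalg.qr(W2)
print("subspace residual:", np.linalg.norm(V - Q @ (Q.T @ V)))

# 2) Plaut's observation: an SVD of the decoder should recover the
#    individual loading vectors (up to sign).
U, _, _ = np.linalg.svd(W2, full_matrices=False)
print("per-vector match:", np.abs(np.sum(U * V, axis=0)))  # each ~1
```

Up to sign flips of individual directions, which is an ambiguity already present in PCA itself, the per-vector matches should all be close to 1, while the raw weight columns themselves differ from the loading vectors, exactly as the quote states.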

Bibliography

  1. . . "Auto-association by Multilayer Perceptrons and Singular Value Decomposition". Biological Cybernetics 59 (4-5):291–94. DOI.
  2. . . "From Principal Subspaces to Principal Components with Linear Autoencoders". Arxiv:1804.10253 [cs, Stat]. http://arxiv.org/abs/1804.10253.