# Autoencoders
Feedforward networks that:
- have a bottleneck layer with fewer dimensions than the input,
- have an output layer with the same dimensionality as the input,
- and are trained with the objective of reconstructing the input at the output.
An encoder network maps the input to the bottleneck: $v=h(W x)$
A decoder network maps it back: $x^{\prime}=h\left(W^{\prime} v\right)$
where $h$ is a nonlinearity. The weights are often tied together: $W^{\prime}=W^{T}$
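A minimal sketch of a tied-weight autoencoder, assuming PyTorch; the layer sizes and the sigmoid nonlinearity are illustrative choices, not prescribed by the notes:

```python
import torch
import torch.nn as nn

class TiedAutoencoder(nn.Module):
    def __init__(self, input_dim=784, bottleneck_dim=32):
        super().__init__()
        # Single weight matrix W; the decoder reuses its transpose (tied weights).
        self.W = nn.Parameter(torch.randn(bottleneck_dim, input_dim) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(bottleneck_dim))
        self.b_dec = nn.Parameter(torch.zeros(input_dim))
        self.h = torch.sigmoid  # nonlinearity h

    def forward(self, x):
        v = self.h(x @ self.W.T + self.b_enc)    # encoder: v = h(W x)
        x_rec = self.h(v @ self.W + self.b_dec)  # decoder: x' = h(W^T v), i.e. W' = W^T
        return x_rec
```

Training minimizes a reconstruction loss between the output and the input, e.g. `nn.MSELoss()(model(x), x)`.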
If there is no bottleneck and no regularization, nothing useful is learned: the network can simply copy the input to the output (the identity mapping).
## Denoising Autoencoders
- Add some noise to the input: $\tilde{x}=x+\varepsilon$
- Then train the autoencoder to reconstruct the original, clean input $x$ from $\tilde{x}$ (see the sketch after this list)
- This forces the model to learn more generalizable embeddings
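A minimal denoising training step, reusing the `TiedAutoencoder` sketch above; the Gaussian noise scale (0.1) and the Adam learning rate are illustrative assumptions:

```python
model = TiedAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(x):
    noisy = x + 0.1 * torch.randn_like(x)  # corrupted input: x~ = x + eps
    x_rec = model(noisy)                   # reconstruct from the noisy version
    loss = loss_fn(x_rec, x)               # target is the original clean x
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```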