# AlexNet
Network that started the deep learning revolution. Very similar to LeNet.
![[alexnet.jpg]]
## Notable Properties
The filter sizes are notable: because the network is shallow (by today's standards), large filters in the early layers help it learn complex patterns early on and reuse them deeper in the network. A shallow network with only small filters would have a limited receptive field, which limits the complexity it can encode.
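To make the filter-size progression concrete, here is a minimal AlexNet sketch in PyTorch (the framework is an assumption; the note itself contains no code), using the layer shapes from the original paper: 11x11 filters in conv1 shrinking to 3x3 in conv3-5.

```python
import torch
import torch.nn as nn

class AlexNet(nn.Module):
    """Minimal sketch of the original AlexNet (dropout and LRN omitted).
    Expects 227x227 inputs with these paper settings."""
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4),     # large 11x11 filters early on
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2),   # medium 5x5 filters
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1),  # small 3x3 filters deeper in
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Linear(256 * 6 * 6, 4096),  # FC6
            nn.ReLU(inplace=True),
            nn.Linear(4096, 4096),         # FC7
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),  # FC8
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)
```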
By removing fully connected layer 7 (FC7), you remove roughly 16 million parameters at the cost of only a 1.1% drop in performance! By removing all fully connected layers, you remove roughly 50 million parameters for only a 5.7% drop in performance. This shows that the convolutional layers carry most of the representational power.
By removing layers 3, 4, 6, and 7, performance drops by 33.5%! This shows that depth is crucial.
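The parameter counts above can be sanity-checked with a bit of arithmetic on the FC layer shapes (a rough sketch; the ablation percentages themselves come from the lecture):

```python
# Layer shapes follow the original paper: conv5 output is 256*6*6 = 9216.
fc6 = 256 * 6 * 6 * 4096 + 4096   # ~37.8M parameters
fc7 = 4096 * 4096 + 4096          # ~16.8M: the "16 million" saved by dropping FC7
fc8 = 4096 * 1000 + 1000          # ~4.1M: 1000-way ImageNet classifier

print(f"FC7 alone:      {fc7 / 1e6:.1f}M parameters")
print(f"FC6 + FC7:      {(fc6 + fc7) / 1e6:.1f}M parameters")
print(f"All FC layers:  {(fc6 + fc7 + fc8) / 1e6:.1f}M parameters")
```

FC7 alone accounts for the ~16 million figure; FC6 plus FC7 land in the ballpark of the ~50 million figure (presumably the ablation keeps some small output layer for classification).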
Translation Invariance - The representations are not too different when the input image is shifted.
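One way to probe this (a sketch, assuming the torchvision pretrained weights, which the note does not mention): shift an input and compare the resulting feature vectors.

```python
import torch
import torch.nn.functional as F
from torchvision.models import alexnet, AlexNet_Weights

model = alexnet(weights=AlexNet_Weights.DEFAULT).eval()

x = torch.randn(1, 3, 224, 224)                       # stand-in for a real image
x_shift = torch.roll(x, shifts=(8, 8), dims=(2, 3))   # shift by 8 pixels

with torch.no_grad():
    f = model.features(x).flatten(1)
    f_shift = model.features(x_shift).flatten(1)

# High cosine similarity indicates the representation barely changes.
print(F.cosine_similarity(f, f_shift).item())
```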
Scale Invariance - CNNs are scale invariant to some degree. This is not because the convolutional filters themselves are scale invariant, but because of the scale variation present in the training data.
Rotation Invariance - CNNs are not rotation invariant. Data augmentation can help.
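A sketch of augmentations that inject the scale and rotation variation the network does not get for free (torchvision transforms assumed):

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),  # scale variation
    transforms.RandomRotation(degrees=15),                # rotation variation
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
```

Applied on the fly during training, each epoch then sees differently scaled and rotated copies of the same images, so the network is pushed toward invariance it would not otherwise have.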
---
## References
1. Lecture 4, UvA DL course 2020