# Negative Interference in Multi-Task Learning
- MAML is trained by backpropagating the query-set loss through the within-episode gradient descent procedure and into the initial parameters (b, W, θ). This normally requires computing second-order gradients, which are expensive in both time and memory. For this reason, a first-order approximation (FOMAML) is often used, in which the gradients of the within-episode descent steps are ignored (see the sketch below).
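
To make the distinction concrete, here is a minimal JAX sketch, not the original implementation: the toy linear model (standing in for the full (b, W, θ)), the single inner step, `INNER_LR`, and the episode data are all hypothetical. Differentiating `maml_loss` backpropagates through the inner update, so JAX computes the second-order terms automatically; `fomaml_loss` applies `jax.lax.stop_gradient` to the inner-step gradients, which is exactly the approximation described above.

```python
import jax
import jax.numpy as jnp

INNER_LR = 0.1  # hypothetical within-episode step size

def loss(params, x, y):
    # Toy linear model standing in for the full (b, W, theta) setup.
    W, b = params
    return jnp.mean((x @ W + b - y) ** 2)

def maml_loss(params, support, query):
    # One within-episode gradient step on the support set, then the
    # query loss. Differentiating this w.r.t. `params` backpropagates
    # through the inner step, producing the exact second-order gradient.
    grads = jax.grad(loss)(params, *support)
    adapted = jax.tree_util.tree_map(lambda p, g: p - INNER_LR * g, params, grads)
    return loss(adapted, *query)

def fomaml_loss(params, support, query):
    # First-order approximation: stop_gradient treats the inner-step
    # gradients as constants, so their derivatives are ignored and no
    # second-order terms are computed.
    grads = jax.grad(loss)(params, *support)
    grads = jax.tree_util.tree_map(jax.lax.stop_gradient, grads)
    adapted = jax.tree_util.tree_map(lambda p, g: p - INNER_LR * g, params, grads)
    return loss(adapted, *query)

# Hypothetical episode: 4 support and 4 query examples.
k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
params = (jax.random.normal(k1, (3, 1)), jnp.zeros((1,)))
x, y = jax.random.normal(k2, (8, 3)), jax.random.normal(k3, (8, 1))
support, query = (x[:4], y[:4]), (x[4:], y[4:])

second_order = jax.grad(maml_loss)(params, support, query)   # exact MAML
first_order = jax.grad(fomaml_loss)(params, support, query)  # approximation
```

With the stop-gradient in place, the adapted parameters depend on the initialization only through an identity Jacobian, so the meta-gradient reduces to the query-loss gradient evaluated at the adapted parameters, which is why the approximation saves both the time and memory of second-order backpropagation.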