# Why implicit density models
In the generative models we have seen so far, we model the density function explicitly. Typically, the learning objective is intractable because of the normalization integral.
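To make the intractability concrete, consider a generic energy-based parameterization (an illustration, not a model discussed in this note):

$$p_\theta(x) = \frac{e^{-E_\theta(x)}}{Z(\theta)}, \qquad Z(\theta) = \int e^{-E_\theta(x)}\, dx$$

The partition function $Z(\theta)$ integrates over the entire input space, which is exactly the normalization term that makes maximum-likelihood training intractable for expressive models.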
Explicit density models circumvent this problem with clever approximations: [[Boltzmann Machines]] use [[Contrastive Divergence]], and [[Variational Autoencoders]] use [[Variational Inference]].
What if we skipped modelling the explicit density altogether? Do we really need to model $p(x)$ or $p(x,z)$?
Implicit density models do not. Instead, they learn to evaluate directly whether a generation is plausible, and to provide gradients when it is not.
But what counts as a plausible generation, especially in an unsupervised setting with no guidance? For [[Generative Adversarial Networks]], a plausible generation is one that cannot easily be recognized as fake by a competing neural network, the discriminator.
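This adversarial idea can be sketched numerically without modelling any density. Below is a minimal, hypothetical toy setup (not from this note): real data is 1-D Gaussian, the "generator" just shifts noise by an offset, and the "discriminator" is a logistic-regression scorer. The function `gan_value` evaluates the usual minimax objective $V(D,G) = \mathbb{E}_{x\sim p_{data}}[\log D(x)] + \mathbb{E}_{z}[\log(1 - D(G(z)))]$; all names and parameter choices here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w, b):
    # Logistic-regression discriminator: probability that x is real.
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def gan_value(theta, w, b, n=1000):
    # V(D, G) = E_real[log D(x)] + E_noise[log(1 - D(G(z)))]
    real = rng.normal(4.0, 1.0, n)          # real data ~ N(4, 1)
    fake = rng.normal(0.0, 1.0, n) + theta  # generator: shift noise by theta
    return (np.mean(np.log(discriminator(real, w, b)))
            + np.mean(np.log(1.0 - discriminator(fake, w, b))))
```

Note that nothing here evaluates $p(x)$: the discriminator only scores samples. A generator whose samples land far from the data (e.g. `theta = 0`) is easily told apart, so the discriminator achieves a higher objective value than against a generator that matches the data (`theta = 4`).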
## Summary of generative models
![[genmodel-summary.jpg]]
---
## References