# InfoGAN
InfoGAN learns interpretable latent representations within the GAN framework.
The generator takes as input noise $z$ and a latent code $c$.
Mutual information between the code and the generated sample is added as a regularizer to the GAN objective:
$$
\min_{G} \max_{D} \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)]+\mathbb{E}_{z \sim p(z),\, c \sim p(c)}[\log (1-D(G(z, c)))]-\lambda I(c; G(z, c))
$$
- Requiring high mutual information between the latent code and the generated sample discourages the generator from ignoring $c$, i.e., from learning trivial codes.
Since computing the mutual information requires the intractable posterior $P(c \mid x)$, a variational lower bound is maximized instead.
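Concretely, the paper introduces an auxiliary distribution $Q(c \mid x)$ approximating the posterior and lower-bounds the mutual information term by
$$
I(c; G(z, c)) \ge \mathbb{E}_{c \sim p(c),\, x \sim G(z, c)}[\log Q(c \mid x)] + H(c) = L_I(G, Q),
$$
where $H(c)$ is constant for a fixed code prior. The practical objective is then $\min_{G, Q} \max_{D} V(D, G) - \lambda L_I(G, Q)$, with $Q$ trained jointly with $G$.

Below is a minimal PyTorch sketch of this setup with a single categorical code. The tiny MLPs, dimensions, and the non-saturating generator loss are illustrative assumptions, not the paper's exact architecture (in the paper, $Q$ shares all layers with $D$ except the last; here they are separate networks for clarity).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

Z_DIM, C_DIM, X_DIM, BATCH, LAMBDA = 16, 10, 64, 32, 1.0  # illustrative sizes

# Toy networks (assumptions, not the paper's architecture).
G = nn.Sequential(nn.Linear(Z_DIM + C_DIM, 128), nn.ReLU(), nn.Linear(128, X_DIM))
D = nn.Sequential(nn.Linear(X_DIM, 128), nn.ReLU(), nn.Linear(128, 1))      # real/fake logit
Q = nn.Sequential(nn.Linear(X_DIM, 128), nn.ReLU(), nn.Linear(128, C_DIM))  # logits over c

opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_gq = torch.optim.Adam(list(G.parameters()) + list(Q.parameters()), lr=1e-3)

def sample_inputs(n):
    z = torch.randn(n, Z_DIM)                # incompressible noise z ~ p(z)
    c = torch.randint(C_DIM, (n,))           # categorical code c ~ Uniform{0..C_DIM-1}
    zc = torch.cat([z, F.one_hot(c, C_DIM).float()], dim=1)
    return zc, c

for step in range(1000):
    x_real = torch.randn(BATCH, X_DIM)       # stand-in for a real-data batch

    # Discriminator step: standard GAN loss on real vs. generated samples.
    zc, _ = sample_inputs(BATCH)
    x_fake = G(zc).detach()
    d_loss = (F.binary_cross_entropy_with_logits(D(x_real), torch.ones(BATCH, 1))
              + F.binary_cross_entropy_with_logits(D(x_fake), torch.zeros(BATCH, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator + Q step: fool D (non-saturating loss) and maximize L_I.
    zc, c = sample_inputs(BATCH)
    x_fake = G(zc)
    g_loss = F.binary_cross_entropy_with_logits(D(x_fake), torch.ones(BATCH, 1))
    # -E[log Q(c | G(z, c))] is the cross-entropy between Q's logits and the
    # sampled code; the constant H(c) is omitted from the loss.
    mi_loss = F.cross_entropy(Q(x_fake), c)
    opt_gq.zero_grad(); (g_loss + LAMBDA * mi_loss).backward(); opt_gq.step()
```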
---
## References
1. Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., and Abbeel, P. (2016). InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. NIPS 2016.