Deep Recurrent Attentive Writer
This post implements the Deep Recurrent Attentive Writer (DRAW).
Many people preceded me with fascinating implementations.
This implementation goes end to end in 250 lines of code. We start by importing MNIST
and end up with these lovely visualizations.
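To give a feel for what those 250 lines revolve around, here is a minimal NumPy sketch of DRAW's core idea from Gregor et al.: an encoder and decoder RNN pass information back and forth over several time steps while the decoder additively paints onto a canvas. This is a simplified illustration, not the post's actual code: attention is omitted, the RNNs are collapsed to single tanh layers, latent sampling is replaced by the mean, and all weight names and shapes are placeholder assumptions.

```python
import numpy as np

# Hypothetical, simplified DRAW loop (no attention, no sampling).
# All weights are random placeholders for illustration only.
rng = np.random.default_rng(0)
img_dim, enc_dim, dec_dim, z_dim, T = 28 * 28, 256, 256, 10, 10

W_enc = rng.normal(0, 0.01, (enc_dim, img_dim * 2 + enc_dim))
W_mu  = rng.normal(0, 0.01, (z_dim, enc_dim))
W_dec = rng.normal(0, 0.01, (dec_dim, z_dim + dec_dim))
W_wr  = rng.normal(0, 0.01, (img_dim, dec_dim))

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

x = rng.random(img_dim)        # one flattened MNIST-sized image
canvas = np.zeros(img_dim)     # the canvas starts blank
h_enc = np.zeros(enc_dim)
h_dec = np.zeros(dec_dim)

for t in range(T):
    x_hat = x - sigmoid(canvas)                        # error image
    r = np.concatenate([x, x_hat])                     # "read" without attention
    h_enc = np.tanh(W_enc @ np.concatenate([r, h_enc]))
    mu = W_mu @ h_enc                                  # latent mean (sampling omitted)
    h_dec = np.tanh(W_dec @ np.concatenate([mu, h_dec]))
    canvas = canvas + W_wr @ h_dec                     # additive canvas update

reconstruction = sigmoid(canvas)                       # final image in (0, 1)
```

Each iteration refines the canvas rather than generating the image in one shot; the attention mechanism in the full model restricts the read and write operations to small image patches.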
This series of posts, implementing a variational autoencoder, followed by a variational recurrent autoencoder, and now DRAW, was inspired by Karol Gregor's lecture at Oxford. More recently, he tied his work together at ICML (or use this link).
Moreover, if you'd like a better introduction to variational inference, watch this talk by Diederik Kingma at ICLR or read his paper.
As always, I am curious about any comments and questions. Reach me at romijndersrob@gmail.com.