Reading "MADE: Masked Autoencoder for Distribution Estimation", by Mathieu Germain, Karol Gregor, Iain Murray, Hugo Larochelle
Paper abstract:
“There has been a lot of recent interest in designing neural network models to estimate a distribution from a set of examples. We introduce a simple modification for autoencoder neural networks that yields powerful generative models. Our method masks the autoencoder’s parameters to respect autoregressive constraints: each input is reconstructed only from previous inputs in a given ordering. Constrained this way, the autoencoder outputs can be interpreted as a set of conditional probabilities, and their product, the full joint probability. We can also train a single network that can decompose the joint probability in multiple different orderings. Our simple framework can be applied to multiple architectures, including deep ones. Vectorized implementations, such as on GPUs, are simple and fast. Experiments demonstrate that this approach is competitive with state-of-the-art tractable distribution estimators. At test time, the method is significantly faster and scales better than other autoregressive estimators.”
Reference: arxiv.org/abs/1502.03509
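To make the masking idea in the abstract concrete: constraining each output to depend only on previous inputs means the network's outputs can be read as the conditionals of the factorization p(x) = prod_d p(x_d | x_1, ..., x_{d-1}). Below is a minimal numpy sketch of one way such masks can be built for a single hidden layer, assuming binary inputs and the natural ordering 1..D; variable names are illustrative, not taken from the authors' code.

    import numpy as np

    rng = np.random.default_rng(0)
    D, H = 6, 16                       # input dimension, number of hidden units

    m_in = np.arange(1, D + 1)         # input unit d is assigned degree d
    m_hid = rng.integers(1, D, H)      # hidden degrees drawn from {1, ..., D-1}

    # Hidden unit k may see input d only if its degree is >= the input's degree.
    mask_in = (m_hid[:, None] >= m_in[None, :]).astype(float)   # shape (H, D)
    # Output d may see hidden unit k only if d's degree is strictly greater,
    # so output d ends up depending on inputs 1..d-1 only.
    mask_out = (m_in[:, None] > m_hid[None, :]).astype(float)   # shape (D, H)

    # Masked forward pass: element-wise mask * weight enforces the constraint.
    W = rng.normal(size=(H, D)) * 0.1
    V = rng.normal(size=(D, H)) * 0.1
    x = rng.integers(0, 2, D).astype(float)
    h = np.tanh((W * mask_in) @ x)
    logits = (V * mask_out) @ h
    probs = 1.0 / (1.0 + np.exp(-logits))  # probs[d] ~ p(x_d = 1 | x_<d)

Training then amounts to maximizing sum_d log p(x_d | x_<d), i.e. the usual autoencoder cross-entropy computed under these masks; the "multiple orderings" mentioned in the abstract correspond to resampling the ordering (and mask degrees) during training.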
—
Speaker bio: A second-year DPhil student in the Computational Linguistics group of the Department of Computer Science, supervised by Dr. Phil Blunsom. I'm working on developing deep generative models for NLP, especially variational autoencoders.
Date:
29 April 2015, 13:00 (Wednesday, 1st week, Trinity 2015)
Venue:
The Robert Hooke Building, Parks Road, OX1 3PR
Venue Details:
Tony Hoare Room, Department of Computer Science, Robert Hooke Building
Speaker:
Yishu Miao (University of Oxford)
Part of:
Machine Learning Lunches
Booking required?:
Not required
Audience:
Public
Editor:
Iurii Perov