Problem : You want to learn a representation that captures the important factors of variation in the data; a good representation should be easy to model, ideally with a factorized distribution.
idea : Apply the change-of-variables rule: map the data x to a latent h through an invertible transformation h = f(x), so that the inverse x = f^(-1)(h) recovers the data exactly.
architecture : Split the input into two halves; leave the first half unchanged, feed it through an MLP, and add the MLP's output to the second half. This transformation is called an additive coupling layer, and the half that gets transformed alternates from layer to layer.
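A minimal NumPy sketch of one additive coupling layer. The weights and 2-layer MLP here are hypothetical stand-ins for the trained coupling function; the point is that the inverse only needs to recompute the MLP on the untouched half and subtract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer MLP standing in for the coupling function m(.)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def mlp(x):
    return np.maximum(x @ W1 + b1, 0) @ W2 + b2

def coupling_forward(x):
    # Split input in half; first half passes through unchanged,
    # its MLP output is added to the second half.
    x1, x2 = x[:, :2], x[:, 2:]
    return np.concatenate([x1, x2 + mlp(x1)], axis=1)

def coupling_inverse(h):
    # Exact inverse: h1 equals x1, so m(x1) can be recomputed and subtracted.
    h1, h2 = h[:, :2], h[:, 2:]
    return np.concatenate([h1, h2 - mlp(h1)], axis=1)

x = rng.normal(size=(4, 4))
h = coupling_forward(x)
x_rec = coupling_inverse(h)
print(np.allclose(x, x_rec))  # → True: the layer inverts exactly
```

Because the transformed half is only shifted, the Jacobian is lower triangular with unit diagonal, so the layer is volume-preserving.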
objective : log-likelihood
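Under the change-of-variables rule, log p(x) = log p_H(f(x)) + log |det ∂f/∂x|. A small sketch of that objective, assuming a standard Gaussian prior on h (the paper uses factorized priors such as the logistic) and a final diagonal scaling layer; the additive couplings contribute zero log-determinant:

```python
import numpy as np

def log_likelihood(h, log_scale):
    # Standard Gaussian prior on each latent dimension; the only
    # Jacobian term comes from the diagonal scaling layer.
    log_prior = -0.5 * (h ** 2 + np.log(2 * np.pi)).sum(axis=1)
    return log_prior + log_scale.sum()

h = np.zeros((3, 4))      # pretend these came from the stacked couplings
log_scale = np.zeros(4)   # identity scaling, for illustration only
print(log_likelihood(h, log_scale))
```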
baseline : Deep MFA, GRBM
data : MNIST, Toronto Face Dataset (TFD), Street View House Numbers (SVHN), CIFAR-10
result : High log-likelihood on the benchmarks. To generate, sample h from the prior and pass it through the inverse function.
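The generation step above can be sketched as follows; the tanh MLP and the depth of two layers are hypothetical placeholders for a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

W, b = rng.normal(size=(2, 2)), np.zeros(2)  # toy stand-in for a trained MLP

def mlp(x):
    return np.tanh(x @ W + b)

def coupling_inverse(h):
    h1, h2 = h[:, :2], h[:, 2:]
    return np.concatenate([h1, h2 - mlp(h1)], axis=1)

# Generation: draw h from the factorized prior, push it through f^{-1}
# (a real model would alternate which half is transformed per layer).
h = rng.standard_normal((5, 4))
samples = coupling_inverse(coupling_inverse(h))
print(samples.shape)  # (5, 4)
```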
contribution : A pioneering work among flow-based models, showing that exact log-likelihood training with invertible transformations is tractable.