

TL;DR

  • task : generative model
  • Problem : Compared to discriminative models, generative models have seen limited success because maximum likelihood training requires approximating intractable probabilistic computations, which is hard to do with back-propagation alone.
  • Idea : Introduce a discriminator and train the generator adversarially against it.
  • architecture : both the generator and the discriminator are MLPs
  • objective : Train the discriminator to distinguish generated samples from real data, while the generator learns to produce samples the discriminator cannot tell apart from real data.
  • baseline : restricted Boltzmann machines (RBM), deep Boltzmann machines (DBM), deep belief networks (DBN)
  • data : MNIST, Toronto Face Database, CIFAR-10
  • result : Parzen window-based log-likelihood estimates competitive with SOTA
  • contribution : Unlike RBMs, it does not use Markov chains; training relies only on back-propagated gradients.
  • Limitations or things I don’t understand : the proof in Section 4.2 (Convergence of Algorithm 1) doesn’t make sense to me
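The objective described above is the two-player minimax game from the paper, which can be written as (using the paper's notation, with $p_{\text{data}}$ the data distribution and $p_z$ the prior over the generator's input noise):

$$
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
$$

The discriminator $D$ maximizes $V$ (correctly classify real vs. generated), while the generator $G$ minimizes the second term (make $D(G(z))$ close to 1).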

Details
