Description
Normalizing flows are a popular class of models for approximating probability distributions. However, on many tasks, such as image generation benchmarks, they are still outperformed by autoregressive models and generative adversarial networks. This is in part due to their invertible nature, which limits their ability to model target distributions with a complex topological structure. Several approaches have been proposed to solve this problem, but they sacrifice invertibility and thereby the tractability of the log-likelihood as well as other desirable properties. In this work, we introduce a base distribution for normalizing flows based on learned rejection sampling, allowing them to model complex topologies without giving up bijectivity. We apply our model to various example problems, such as image generation and approximating Boltzmann distributions, and show that it outperforms the baseline both qualitatively and quantitatively.
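To make the idea concrete, below is a minimal PyTorch sketch of one way such a resampled base distribution could look: a small learned acceptance network a(z) thins samples from a standard Gaussian proposal, and the resulting density a(z)·N(z; 0, I)/Z stays tractable because the normalizer Z = E_N[a(z)] can be estimated by Monte Carlo. This is an illustrative simplification, not the implementation from the work itself; the class and parameter names (ResampledGaussian, accept, max_tries) are assumptions, and refinements such as correcting the density for a truncated rejection loop are omitted.

```python
import torch
import torch.nn as nn

class ResampledGaussian(nn.Module):
    """Base distribution q(z) proportional to a(z) * N(z; 0, I),
    where a(z) in (0, 1) is a learned acceptance function.
    A hypothetical sketch of the idea, not a reference implementation."""

    def __init__(self, dim, hidden=64, max_tries=100):
        super().__init__()
        self.dim = dim
        self.max_tries = max_tries
        # Small MLP producing acceptance probabilities a(z) in (0, 1).
        self.accept = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def sample(self, n):
        """Draw n samples by rejection sampling from the Gaussian proposal."""
        out = torch.empty(0, self.dim)
        for _ in range(self.max_tries):
            z = torch.randn(n, self.dim)          # proposal samples
            keep = torch.rand(n) < self.accept(z).squeeze(-1)
            out = torch.cat([out, z[keep]])
            if len(out) >= n:
                return out[:n]
        return out  # may be short if acceptance probabilities are very low

    def log_prob(self, z, n_mc=1024):
        """log q(z) = log a(z) + log N(z; 0, I) - log Z, with the
        normalizer Z = E_N[a(z)] estimated by Monte Carlo."""
        gauss = torch.distributions.Normal(0.0, 1.0)
        log_gauss = gauss.log_prob(z).sum(-1)
        z_mc = torch.randn(n_mc, self.dim)
        log_Z = torch.log(self.accept(z_mc).mean())
        return torch.log(self.accept(z).squeeze(-1) + 1e-9) + log_gauss - log_Z
```

Because log_prob is differentiable in the acceptance network's weights, a base distribution like this can be trained jointly with the flow by maximum likelihood, letting the base rather than the bijective layers absorb the target's topological structure.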