Learnable Explicit Density for Continuous Latent Space and Variational Inference.

Chin-Wei Huang, Ahmed Touati, Laurent Dinh, Michal Drozdzal, Mohammad Havaei, Laurent Charlin, Aaron Courville.

In this paper, we study two aspects of the variational autoencoder (VAE): the prior distribution over the latent variables and its corresponding posterior. First, we decompose the learning of VAEs into layerwise density estimation and argue that a flexible prior benefits both sample generation and inference. Second, we analyze the family of inverse autoregressive flows (inverse AF) and show that, with further improvement, inverse AF can serve as a universal approximator to any complicated posterior. Our analysis yields a unified approach to parameterizing a VAE without restricting ourselves to factorial Gaussians in the latent space.
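To make the inverse AF family concrete, here is a minimal NumPy sketch of a single affine inverse-autoregressive-flow step with the gated update used in this line of work. All names (`affine_iaf_step`, the weight matrices) are illustrative, not from the paper; the autoregressive dependency is enforced crudely with strictly lower-triangular weight masks rather than a learned MADE-style network.

```python
import numpy as np

rng = np.random.default_rng(0)

def affine_iaf_step(z, W_mu, W_s, b_mu, b_s):
    """One affine inverse-autoregressive-flow step (illustrative sketch).

    The shift mu and gate sigma for dimension i depend only on z[..., :i],
    enforced here by strictly lower-triangular weight masks, so the forward
    pass is a single matrix multiply and the Jacobian is triangular with
    diagonal sigma.
    """
    d = z.shape[-1]
    mask = np.tril(np.ones((d, d)), k=-1)      # strictly lower-triangular
    mu = z @ (W_mu * mask).T + b_mu
    s = z @ (W_s * mask).T + b_s
    sigma = 1.0 / (1.0 + np.exp(-s))           # gate in (0, 1)
    z_new = sigma * z + (1.0 - sigma) * mu     # gated affine update
    log_det = np.sum(np.log(sigma), axis=-1)   # log|det dz_new/dz|
    return z_new, log_det

# Toy usage: transform a batch of 3 latent samples in 4 dimensions.
d = 4
z = rng.standard_normal((3, d))
W_mu = rng.standard_normal((d, d))
W_s = rng.standard_normal((d, d))
z_new, log_det = affine_iaf_step(z, W_mu, W_s, np.zeros(d), np.zeros(d))
```

Because the gate lies in (0, 1), each step's log-determinant is a cheap sum of log-gates, which is what makes stacking many such steps tractable for variational inference.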
