Disentangled Variational Auto-Encoder in TensorFlow / Keras (Beta-VAE)
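For orientation, below is a minimal sketch of the beta-VAE objective that a TensorFlow / Keras implementation of this kind typically optimizes; the function names, the default beta value, and the Bernoulli reconstruction term are illustrative assumptions, not code taken from the repository.

import tensorflow as tf

def beta_vae_loss(x, x_recon, z_mean, z_log_var, beta=4.0):
    """Bernoulli reconstruction term plus a beta-weighted KL to a unit Gaussian."""
    batch = tf.shape(x)[0]
    x_flat = tf.reshape(x, [batch, -1])
    r_flat = tf.reshape(x_recon, [batch, -1])
    # Per-example binary cross-entropy summed over pixels (inputs assumed in [0, 1]).
    recon = -tf.reduce_sum(
        x_flat * tf.math.log(r_flat + 1e-7)
        + (1.0 - x_flat) * tf.math.log(1.0 - r_flat + 1e-7),
        axis=1,
    )
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    kl = -0.5 * tf.reduce_sum(
        1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1
    )
    return tf.reduce_mean(recon + beta * kl)

def reparameterize(z_mean, z_log_var):
    """Sample z = mu + sigma * eps, eps ~ N(0, I) (the reparameterization trick)."""
    eps = tf.random.normal(tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_log_var) * eps

With beta = 1 this reduces to the standard VAE evidence lower bound; larger beta values trade reconstruction quality for more disentangled latent factors.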
Joint Variational Autoencoders for Multimodal Imputation and Embedding (JAMIE)
Implementations of autoencoders, generative adversarial networks, variational autoencoders, and adversarial variational autoencoders
Automatic/analytical differentiation benchmark
TensorFlow implementation of the method from "Variational Dropout Sparsifies Deep Neural Networks" (Molchanov et al., 2017)
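As a rough sketch, the per-weight KL term that sparse variational dropout adds to the objective can be written with the closed-form approximation from the Molchanov et al. (2017) paper; the TensorFlow helper below is illustrative and not taken from the linked repository.

import tensorflow as tf

def sparse_vd_kl(log_alpha):
    """Approximate KL(q(w) || p(w)) per weight as a function of log(alpha)."""
    # Constants of the sigmoid-based approximation from Molchanov et al. (2017).
    k1, k2, k3 = 0.63576, 1.8732, 1.48695
    neg_kl = (
        k1 * tf.sigmoid(k2 + k3 * log_alpha)
        - 0.5 * tf.math.log1p(tf.exp(-log_alpha))
        - k1
    )
    return -tf.reduce_sum(neg_kl)

Weights whose log_alpha grows large during training correspond to dropout rates near one and can be pruned, which is where the sparsification comes from.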
[PyTorch] Minimal implementation of a Variational Autoencoder (VAE) with categorical latent variables, inspired by "Categorical Reparameterization with Gumbel-Softmax".
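A minimal sketch of the Gumbel-Softmax relaxation that such a categorical-latent VAE relies on; the PyTorch helper below is illustrative and independent of the repository.

import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, temperature=1.0, hard=False):
    """Draw a differentiable, approximately one-hot sample from categorical logits."""
    # Gumbel(0, 1) noise via -log(-log(U)), U ~ Uniform(0, 1).
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    y = F.softmax((logits + gumbel) / temperature, dim=-1)
    if hard:
        # Straight-through estimator: forward pass uses the one-hot argmax,
        # gradients flow through the soft sample.
        index = y.argmax(dim=-1, keepdim=True)
        y_hard = torch.zeros_like(y).scatter_(-1, index, 1.0)
        y = (y_hard - y).detach() + y
    return y

PyTorch also ships a built-in version of this operation as torch.nn.functional.gumbel_softmax, which the hard=True branch above mirrors.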
Python toolbox for solving continuous optimization problems in imaging.
Discrete Variational Autoencoder in PyTorch
Code for "Adversarial Approximate Inference for Speech to Laryngograph Conversion"
Disentangling the latent space of a VAE.
Experiments on disentangled representation learning using variational autoencoding algorithms
Variational Diagrammatic Monte-Carlo Built on Dynamical Mean-Field Theory
A simple variational autoencoder to generate images from MNIST. Implemented in TensorFlow.