change readme
josemanuel22 committed Aug 2, 2024
1 parent 2dc1ded commit b0293aa
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions paper.md
@@ -31,8 +31,8 @@ In addition to its core functionalities, `ISL` offers a suite of utility functions

# Methods

- Implicit generative models employ an $m$-dimensional latent random variable (r.v.) $\mathbf{z}$ to simulate random samples from a prescribed $n$-dimensional target probability distribution. To be precise, the latent variable undergoes a transformation through a deterministic function $g_{\theta}$, which maps $\mathbb{R}^m \mapsto \mathbb{R}^n$ using the parameter set $\theta$. Given the model capability to generate samples with ease, various techniques can be employed for contrasting two sample collections: one originating from the genuine data distribution and the other from the \jm{model} distribution. This approach essentially constitutes a methodology for \jm{the approximation of probability distributions} via comparison.
- Generative adversarial networks (GANs) [goodfellow2014generative], $f$-GANs [nowozin2016f], Wasserstein-GANs (WGANs) [arjovsky2017wasserstein], adversarial variational Bayes (AVB) [mescheder2017adversarial], and \jm{maximum mean-miscrepancy} (MMD) GANs [li2017mmd] are some popular methods that fall within this framework.
+ Implicit generative models employ an $m$-dimensional latent random variable (r.v.) $\mathbf{z}$ to simulate random samples from a prescribed $n$-dimensional target probability distribution. To be precise, the latent variable undergoes a transformation through a deterministic function $g_{\theta}$, which maps $\mathbb{R}^m \mapsto \mathbb{R}^n$ using the parameter set $\theta$. Given the model's capability to generate samples with ease, various techniques can be employed for contrasting two sample collections: one originating from the genuine data distribution and the other from the model distribution. This approach essentially constitutes a methodology for the approximation of probability distributions via comparison.
+ Generative adversarial networks (GANs) [goodfellow2014generative], $f$-GANs [nowozin2016f], Wasserstein-GANs (WGANs) [arjovsky2017wasserstein], adversarial variational Bayes (AVB) [mescheder2017adversarial], and maximum mean discrepancy (MMD) GANs [li2017mmd] are some popular methods that fall within this framework.
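The sampling mechanism described in the paragraph above can be sketched in a few lines. This is a toy illustration only, not the ISL package's API: the affine map and its parameters are hypothetical stand-ins for a general $g_{\theta}: \mathbb{R}^m \to \mathbb{R}^n$ (here with $m = n = 1$).

```python
import numpy as np

def generator(z, theta):
    """Deterministic map g_theta pushing latent samples to model samples.

    A toy affine transform: theta = (a, b) is an illustrative parameter
    set, not the parameterization used by ISL. An affine map of a
    standard normal yields another normal, so the target is easy to check.
    """
    a, b = theta
    return a * z + b

# Draw latent samples z ~ N(0, 1) and push them through g_theta.
rng = np.random.default_rng(0)
z = rng.standard_normal(10_000)
theta = (2.0, 1.0)          # maps N(0, 1) onto N(1, 4)
x = generator(z, theta)     # model samples, to be compared with real data
```

Training then reduces to comparing the collection `x` with samples from the genuine data distribution and adjusting `theta` to make the two indistinguishable.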

Approximation of 1-dimensional (1D) parametric distributions is a seemingly naive problem for which the above-mentioned models can perform below expectations. In [zaheer2017gan], the authors report that various types of GANs struggle to approximate relatively simple distributions from samples, emerging with MMD-GAN as the most promising technique. However, the latter implements a kernelized extension of a moment-matching criterion defined over a reproducing kernel Hilbert space, and consequently, the objective function is expensive to compute.
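To make the cost of the moment-matching criterion concrete, here is a minimal sketch of the biased squared-MMD estimator with a Gaussian kernel for 1D samples. It is illustrative only (the bandwidth and function names are assumptions, not MMD-GAN's implementation); the point is that it requires $O((N+M)^2)$ kernel evaluations.

```python
import numpy as np

def mmd2_biased(x, y, bandwidth=1.0):
    """Biased estimator of squared MMD between 1D sample sets x and y.

    Uses a Gaussian kernel k(a, b) = exp(-(a - b)^2 / (2 * bandwidth^2)).
    Every pair of points contributes one kernel evaluation, which is why
    the objective becomes expensive as the sample sets grow.
    """
    def gram_mean(a, b):
        d2 = (a[:, None] - b[None, :]) ** 2   # all pairwise squared distances
        return np.exp(-d2 / (2.0 * bandwidth ** 2)).mean()
    return gram_mean(x, x) + gram_mean(y, y) - 2.0 * gram_mean(x, y)

rng = np.random.default_rng(1)
# Two sample sets from the same distribution: MMD^2 is near zero.
same = mmd2_biased(rng.standard_normal(500), rng.standard_normal(500))
# A shifted distribution: MMD^2 is clearly larger.
diff = mmd2_biased(rng.standard_normal(500), rng.standard_normal(500) + 3.0)
```

Both calls above build full $500 \times 500$ Gram matrices, which illustrates the quadratic cost that motivates cheaper objectives for approximating simple 1D distributions.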

