From 9f1f024544e012232f5152fa22f254eb83e9e0f2 Mon Sep 17 00:00:00 2001
From: SahilCarterr <110806554+SahilCarterr@users.noreply.github.com>
Date: Wed, 21 Aug 2024 00:48:44 +0530
Subject: [PATCH] Updated Math description to correct rendering

Fixed maths rendering issue #315
---
 .../generative-models/variational_autoencoders.mdx | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/chapters/en/unit5/generative-models/variational_autoencoders.mdx b/chapters/en/unit5/generative-models/variational_autoencoders.mdx
index a73074032..0040f82cf 100644
--- a/chapters/en/unit5/generative-models/variational_autoencoders.mdx
+++ b/chapters/en/unit5/generative-models/variational_autoencoders.mdx
@@ -7,9 +7,14 @@ Autoencoders are a class of neural networks primarily used for unsupervised lear
 
 ![Vanilla Autoencoder Image - Lilian Weng Blog](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/generative_models/autoencoder.png)
 
-This encoder model consists of an encoder network (represented as \\(g_\phi\\)) and a decoder network (represented as \\(f_\theta\\)). The low-dimensional representation is learned in the bottleneck layer as \\(z\\) and the reconstructed output is represented as \\(x' = f_\theta(g_\phi(x))\\) with the goal of \\(x \approx x'\\).
+This encoder model consists of an encoder network (represented as $g_\phi$) and a decoder network (represented as $f_\theta$). The low-dimensional representation is learned in the bottleneck layer as $z$, and the reconstructed output is represented as $x' = f_\theta(g_\phi(x))$ with the goal of $x \approx x'$.
+
+A common loss function used in such vanilla autoencoders is
+
+$$L(\theta, \phi) = \frac{1}{n}\sum_{i=1}^n (\mathbf{x}^{(i)} - f_\theta(g_\phi(\mathbf{x}^{(i)})))^2$$
+
+which tries to minimize the error between the original image and the reconstructed one. This is also known as the `reconstruction loss`.
 
-A common loss function used in such vanilla autoencoders is \\(L(\theta, \phi) = \frac{1}{n}\sum_{i=1}^n (\mathbf{x}^{(i)} - f_\theta(g_\phi(\mathbf{x}^{(i)})))^2\\), which tries to minimize the error between the original image and the reconstructed one and is also known as the `reconstruction loss`.
 
 Autoencoders are useful for tasks such as data denoising, feature learning, and compression. However, traditional autoencoders lack the probabilistic nature that makes VAEs particularly intriguing and also useful for generational tasks.
 
@@ -34,4 +39,4 @@ In summary, VAEs go beyond mere data reconstruction; they generate new samples a
 ## References
 1. [Lilian Weng's Awesome Blog on Autoencoders](https://lilianweng.github.io/posts/2018-08-12-vae/)
 2. [Generative models under a microscope: Comparing VAEs, GANs, and Flow-Based Models](https://medium.com/sciforce/generative-models-under-a-microscope-comparing-vaes-gans-and-flow-based-models-344f20085d83)
-3. [Autoencoders, Variational Autoencoders (VAE) and β-VAE](https://medium.com/@rushikesh.shende/autoencoders-variational-autoencoders-vae-and-%CE%B2-vae-ceba9998773d)
\ No newline at end of file
+3. [Autoencoders, Variational Autoencoders (VAE) and β-VAE](https://medium.com/@rushikesh.shende/autoencoders-variational-autoencoders-vae-and-%CE%B2-vae-ceba9998773d)
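
For context on the formula this patch reformats, here is a minimal PyTorch sketch of the vanilla-autoencoder `reconstruction loss`. It is not part of the patch or the course page: the layer sizes, the `encoder`/`decoder` module names, and the flattened 28x28 input shape are illustrative assumptions.

```python
import torch
from torch import nn

# g_phi: encoder compressing x into the bottleneck representation z
encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 16))
# f_theta: decoder mapping z back to a reconstruction x'
decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784))

x = torch.rand(32, 784)        # a batch of flattened 28x28 images (assumed shape)
x_hat = decoder(encoder(x))    # x' = f_theta(g_phi(x))

# L(theta, phi) = (1/n) * sum_i ||x_i - f_theta(g_phi(x_i))||^2:
# squared error per sample, averaged over the batch.
loss = ((x - x_hat) ** 2).sum(dim=1).mean()
loss.backward()                # gradients flow to both encoder and decoder
```

Minimizing this loss drives $x' \approx x$, which is exactly the goal stated in the updated paragraph.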