Variational Autoencoder: Intuition and Implementation

Agustinus Kristiadi's Blog

There are two generative models going neck and neck in the data-generation business right now: the Generative Adversarial Net (GAN) and the Variational Autoencoder (VAE). These two models take different approaches to training. GAN is rooted in game theory: its objective is to find the Nash equilibrium between a discriminator net and a generator net. VAE, on the other hand, is rooted in Bayesian inference: it wants to model the underlying probability distribution of the data so that it can sample new data from that distribution.

In this post, we will look at the intuition behind the VAE model and its implementation in Keras.

VAE: Formulation and Intuition

Suppose we want to generate some data. A good way to do it is to first decide what kind of data we want to generate, then actually generate it. For example, say we want to generate an animal. First, we imagine the animal: it must have four legs, and it must be able to swim. Having those criteria, we can then actually generate the animal by sampling from the animal kingdom. Lo and behold, we get a platypus!

In the story above, our imagination is analogous to a latent variable. It is often useful to decide on the latent variable first in generative models, as the latent variable can describe our data; without it, it is as if we generated data blindly. And this is the difference between GAN and VAE: VAE uses a latent variable, hence it's an expressive model.

Alright, that fable is great and all, but how do we model that? Well, let’s talk about probability distribution.

Let’s define some notions:

  1. X: the data that we want to model, a.k.a. the animal
  2. z: the latent variable, a.k.a. our imagination
  3. P(X): the probability distribution of the data, i.e. the animal kingdom
  4. P(z): the probability distribution of the latent variable, i.e. our brain, the source of our imagination
  5. P(X|z): the distribution of generating data given the latent variable, e.g. turning imagination into a real animal

Our objective here is to model the data, hence we want to find P(X). Using the law of total probability, we can express it in relation to z as follows:

P(X) = ∫ P(X|z) P(z) dz

that is, we marginalize z out of the joint probability distribution P(X, z).
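To make this marginalization concrete, here is a small Monte Carlo sketch of my own (not from the original post): in a toy model where P(z) = N(0, 1) and P(X|z) = N(z, 1), the marginal P(X) is exactly N(0, 2), so we can check the estimate ∫ P(X|z) P(z) dz ≈ mean of P(X|z) over samples z ~ P(z) against the known answer.

```python
import numpy as np

np.random.seed(0)

def p_x_given_z(x, z):
    # Likelihood P(X=x | z): Gaussian with mean z and unit variance
    return np.exp(-0.5 * (x - z) ** 2) / np.sqrt(2 * np.pi)

# Monte Carlo estimate of P(X=x) = ∫ P(X=x|z) P(z) dz with z ~ N(0, 1)
z_samples = np.random.randn(200000)
x = 0.0
p_x_estimate = p_x_given_z(x, z_samples).mean()

# Analytically, the marginal is N(0, 2), so P(X=0) = 1 / sqrt(4 * pi)
p_x_exact = 1.0 / np.sqrt(4 * np.pi)
print(p_x_estimate, p_x_exact)  # the two values agree closely
```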

Now if only we knew P(X, z), or equivalently, P(X|z) and P(z)…

The idea of VAE is to infer P(z) using P(z|X). This makes a lot of sense if we think about it: we want to make our latent variable likely under our data. Talking in terms of our fable, we want to limit our imagination to the animal kingdom, so we shouldn't imagine things like roots, leaves, tyres, glass, GPUs, refrigerators, or doormats, as it's unlikely that those have anything to do with the animal kingdom. Right?

But the problem is that we have to infer the distribution P(z|X), as we don't know it yet. In VAE, as the name suggests, we infer P(z|X) using a method called Variational Inference (VI). VI is one of the popular methods in Bayesian inference, the other being MCMC. The main idea of VI is to pose inference as an optimization problem: we model the true distribution P(z|X) with a simpler distribution that is easy to evaluate, e.g. a Gaussian, and minimize the difference between those two distributions using the KL divergence, which tells us how different P and Q are.

Alright, now let's say we want to infer P(z|X) using Q(z|X). The KL divergence is then formulated as follows:

D_KL[Q(z|X) ‖ P(z|X)] = ∑_z Q(z|X) log (Q(z|X) / P(z|X))
                      = E[log (Q(z|X) / P(z|X))]
                      = E[log Q(z|X) − log P(z|X)]
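As an aside (my own illustration, not from the original post), the summation form of the KL divergence is easy to compute directly for discrete distributions, which helps build intuition for what the metric measures:

```python
import numpy as np

def kl_divergence(q, p):
    # D_KL[Q || P] = sum_z Q(z) * log(Q(z) / P(z))
    q, p = np.asarray(q, dtype=float), np.asarray(p, dtype=float)
    return float(np.sum(q * np.log(q / p)))

print(kl_divergence([0.5, 0.5], [0.9, 0.1]))  # positive: Q and P disagree
print(kl_divergence([0.5, 0.5], [0.5, 0.5]))  # 0: identical distributions
```

Note that the divergence is asymmetric: D_KL[Q ‖ P] generally differs from D_KL[P ‖ Q].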

Recall the notations above: there are three things that we haven't used yet, namely P(X), P(X|z), and P(z). But with Bayes' rule, we can make them appear in the equation:

D_KL[Q(z|X) ‖ P(z|X)] = E[log Q(z|X) − log (P(X|z) P(z) / P(X))]
                      = E[log Q(z|X) − (log P(X|z) + log P(z) − log P(X))]
                      = E[log Q(z|X) − log P(X|z) − log P(z) + log P(X)]

Notice that the expectation is over z, and P(X) doesn't depend on z, so we can move it outside of the expectation:

D_KL[Q(z|X) ‖ P(z|X)] = E[log Q(z|X) − log P(X|z) − log P(z)] + log P(X)
D_KL[Q(z|X) ‖ P(z|X)] − log P(X) = E[log Q(z|X) − log P(X|z) − log P(z)]

If we look carefully at the right-hand side of the equation, we notice that it can be rewritten as another KL divergence. So let's do that, first rearranging the signs:

D_KL[Q(z|X) ‖ P(z|X)] − log P(X) = E[log Q(z|X) − log P(X|z) − log P(z)]
log P(X) − D_KL[Q(z|X) ‖ P(z|X)] = E[log P(X|z) − (log Q(z|X) − log P(z))]
                                 = E[log P(X|z)] − E[log Q(z|X) − log P(z)]
                                 = E[log P(X|z)] − D_KL[Q(z|X) ‖ P(z)]

And this is it, the VAE objective function:

log P(X) − D_KL[Q(z|X) ‖ P(z|X)] = E[log P(X|z)] − D_KL[Q(z|X) ‖ P(z)]

At this point, what do we have? Let’s enumerate:

  1. Q(z|X), which projects our data X into the latent variable space
  2. z, the latent variable
  3. P(X|z), which generates data given the latent variable

We might feel familiar with this kind of structure. And guess what: it's the same structure as seen in an Autoencoder! That is, Q(z|X) is the encoder net, z is the encoded representation, and P(X|z) is the decoder net! Well, well, no wonder the name of this model is Variational Autoencoder!

VAE: Dissecting the Objective

It turns out the VAE objective function has a very nice interpretation. That is, we want to model our data, which is described by log P(X), under some error D_KL[Q(z|X) ‖ P(z|X)]. In other words, VAE tries to find a lower bound of log P(X), which in practice is good enough, as trying to compute the exact distribution is often intractable.

That model can then be found by maximizing log P(X|z), the mapping from latent variable to data, and minimizing the difference between our simple distribution Q(z|X) and the true latent distribution P(z).

As we might already know, maximizing E[log P(X|z)] is a maximum likelihood estimation. We see it all the time in discriminative supervised models, for example Logistic Regression, SVM, or Linear Regression. In other words, given an input z and an output X, we want to maximize the conditional distribution P(X|z) under some model parameters. So we can implement it using any classifier with input z and output X, and optimize the objective function using, for example, log loss or regression loss.

What about D_KL[Q(z|X) ‖ P(z)]? Here, P(z) is the latent variable distribution. We might want to sample from P(z) later, so the easiest choice is N(0, 1). Hence, we want to make Q(z|X) as close as possible to N(0, 1) so that we can sample from it easily.

Having P(z) = N(0, 1) also adds another benefit. Let's say we want Q(z|X) to be a Gaussian with parameters μ(X) and Σ(X), i.e. the mean and variance given X. Then the KL divergence between those two distributions can be computed in closed form!

D_KL[N(μ(X), Σ(X)) ‖ N(0, 1)] = 1/2 (tr(Σ(X)) + μ(X)ᵀμ(X) − k − log det(Σ(X)))

Above, k is the dimension of our Gaussian, and tr(X) is the trace function, i.e. the sum of the diagonal of matrix X. The determinant of a diagonal matrix is the product of its diagonal, so we can really implement Σ(X) as just a vector, as it's a diagonal matrix:

D_KL[N(μ(X), Σ(X)) ‖ N(0, 1)] = 1/2 (∑_k Σ(X) + ∑_k μ²(X) − ∑_k 1 − log ∏_k Σ(X))
                              = 1/2 (∑_k Σ(X) + ∑_k μ²(X) − ∑_k 1 − ∑_k log Σ(X))
                              = 1/2 ∑_k (Σ(X) + μ²(X) − 1 − log Σ(X))

In practice, however, it's better to model Σ(X) as log Σ(X), as taking the exponent is more numerically stable than computing the log. Hence, with Σ(X) now denoting the log variance, our final KL divergence term is:

D_KL[N(μ(X), Σ(X)) ‖ N(0, 1)] = 1/2 ∑_k (exp(Σ(X)) + μ²(X) − 1 − Σ(X))
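This closed-form KL term translates directly into a few lines of NumPy (a sketch of mine, with `log_sigma` standing for the log variance):

```python
import numpy as np

def gaussian_kl(mu, log_sigma):
    # D_KL[N(mu, Sigma) || N(0, 1)] for a diagonal Gaussian,
    # parameterized by the log variance:
    # 0.5 * sum_k(exp(log_sigma) + mu^2 - 1 - log_sigma)
    mu = np.asarray(mu, dtype=float)
    log_sigma = np.asarray(log_sigma, dtype=float)
    return float(0.5 * np.sum(np.exp(log_sigma) + mu ** 2 - 1.0 - log_sigma))

print(gaussian_kl([0.0, 0.0], [0.0, 0.0]))   # 0.0: Q already equals the N(0, 1) prior
print(gaussian_kl([1.0, -1.0], [0.0, 0.0]))  # 1.0: penalty for shifting the mean
```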

Implementation in Keras

First, let's implement the encoder net Q(z|X), which takes input X and outputs two things: μ(X) and Σ(X), the parameters of the Gaussian.
 
 
    from tensorflow.examples.tutorials.mnist import input_data
    from keras.layers import Input, Dense, Lambda
    from keras.models import Model
    from keras.objectives import binary_crossentropy
    from keras.callbacks import LearningRateScheduler
    import numpy as np
    import matplotlib.pyplot as plt
    import keras.backend as K
    import tensorflow as tf

    m = 50        # minibatch size
    n_z = 2       # latent space dimension
    n_epoch = 10

    # Q(z|X) -- encoder
    inputs = Input(shape=(784,))
    h_q = Dense(512, activation='relu')(inputs)
    mu = Dense(n_z, activation='linear')(h_q)
    log_sigma = Dense(n_z, activation='linear')(h_q)

That is, our Q(z|X) is a neural net with one hidden layer. In this implementation, our latent variable is two-dimensional, so that we can easily visualize it. In practice, though, more dimensions in the latent variable should be better.

However, we are now facing a problem: how do we get z from the encoder outputs? Obviously, we could sample z from a Gaussian whose parameters are the outputs of the encoder. Alas, sampling directly won't do if we want to train VAE with gradient descent, as the sampling operation doesn't have a gradient!

There is, however, a trick called the reparameterization trick, which makes the network differentiable. The reparameterization trick basically diverts the non-differentiable operation out of the network, so that, even though we still involve something non-differentiable, at least it is outside the network, and the network can still be trained.

The reparameterization trick is as follows. Recall that if we have x ∼ N(μ, Σ) and we standardize it so that μ = 0, Σ = 1, we can revert it back to the original distribution by reverting the standardization process. Hence we have this equation:

x = μ + Σ^(1/2) x_std

With that in mind, we can extend it: if we sample from a standard normal distribution, we can convert it to any Gaussian we want if we know the mean and the variance. Hence we can implement our sampling operation of z by:

z = μ(X) + Σ^(1/2)(X) ε

where ε ∼ N(0, 1).

Now, during backpropagation, we don't care anymore about the sampling process, as it is now outside of the network, i.e. it doesn't depend on anything in the net, so the gradient won't flow through it.
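We can sanity-check the reparameterization in plain NumPy before wiring it into the network (my own sketch): samples built as μ + Σ^(1/2) ε from ε ∼ N(0, 1) should have the target mean and variance.

```python
import numpy as np

np.random.seed(0)

mu, log_sigma = 2.0, np.log(4.0)  # target: mean 2, variance 4

# Reparameterization: all the randomness lives in eps, outside the
# deterministic (and differentiable) map from (mu, log_sigma) to z
eps = np.random.randn(100000)
z = mu + np.exp(log_sigma / 2) * eps  # exp(log_sigma / 2) is the standard deviation

print(z.mean())  # close to 2
print(z.var())   # close to 4
```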

 
 
    def sample_z(args):
        mu, log_sigma = args
        eps = K.random_normal(shape=(m, n_z), mean=0., std=1.)
        return mu + K.exp(log_sigma / 2) * eps

    # Sample z ~ Q(z|X)
    z = Lambda(sample_z)([mu, log_sigma])

Now we create the decoder net P(X|z):
 
 
    # P(X|z) -- decoder
    decoder_hidden = Dense(512, activation='relu')
    decoder_out = Dense(784, activation='sigmoid')

    h_p = decoder_hidden(z)
    outputs = decoder_out(h_p)

Lastly, from this model we can do three things: reconstruct inputs, encode inputs into latent variables, and generate data from latent variables. So, we have three Keras models:

 
 
    # Overall VAE model, for reconstruction and training
    vae = Model(inputs, outputs)

    # Encoder model, to encode input into latent variable
    # We use the mean as the output, as it is the center point of the Gaussian
    encoder = Model(inputs, mu)

    # Generator model, generate new data given latent variable z
    d_in = Input(shape=(n_z,))
    d_h = decoder_hidden(d_in)
    d_out = decoder_out(d_h)
    decoder = Model(d_in, d_out)

Then, we need to translate our loss into Keras code:

 
 
    def vae_loss(y_true, y_pred):
        """Calculate loss = reconstruction loss + KL loss for each data point in the minibatch."""
        # E[log P(X|z)]
        recon = K.sum(K.binary_crossentropy(y_pred, y_true), axis=1)
        # D_KL(Q(z|X) || P(z)); calculated in closed form as both dist. are Gaussian
        kl = 0.5 * K.sum(K.exp(log_sigma) + K.square(mu) - 1. - log_sigma, axis=1)
        return recon + kl
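To see exactly what this loss computes, here is a NumPy transcription of it for a single example (my own sketch, not part of the original post): a binary cross-entropy reconstruction term plus the closed-form Gaussian KL term derived earlier.

```python
import numpy as np

def vae_loss_np(y_true, y_pred, mu, log_sigma):
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    mu, log_sigma = np.asarray(mu, dtype=float), np.asarray(log_sigma, dtype=float)
    # E[log P(X|z)] as a binary cross-entropy reconstruction term
    recon = -np.sum(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    # D_KL[Q(z|X) || P(z)] in closed form, both distributions Gaussian
    kl = 0.5 * np.sum(np.exp(log_sigma) + mu ** 2 - 1.0 - log_sigma)
    return float(recon + kl)

# With mu = 0 and log_sigma = 0, Q(z|X) matches the prior exactly, so the KL
# term vanishes and only the reconstruction term remains
loss = vae_loss_np([1.0, 0.0], [0.9, 0.1], [0.0], [0.0])
print(loss)  # -log(0.9) - log(0.9) ≈ 0.2107
```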

and then train it:

 
 
    # Load MNIST; we train on the raw images, reconstructing the inputs themselves
    mnist = input_data.read_data_sets('MNIST_data')
    X_train = mnist.train.images

    vae.compile(optimizer='adam', loss=vae_loss)
    vae.fit(X_train, X_train, batch_size=m, nb_epoch=n_epoch)

And that’s it, the implementation of VAE in Keras!

Implementation on MNIST Data

We could use any dataset really, but like always, we will use MNIST as an example.

After we have trained our VAE model, we can visualize the latent variable space Q(z|X):

 

[Figure: scatter plot of the latent space Q(z|X)]

As we can see, in the latent space, the representations of data points that share the same characteristic, e.g. the same label, are close to each other. Notice that during the training phase we never provided any label information.

We can also look at the data reconstruction by running the data through the overall VAE net:

 

[Figure: Reconstruction]

Lastly, we can generate new samples by first sampling z ∼ N(0, 1) and feeding it into our decoder net:

 

[Figure: Generation]

If we look closely at the reconstructed and generated data, we notice that some of it is ambiguous; for example, the digit 5 looks like 3 or 8. That's because our latent variable space is a continuous distribution (i.e. N(0, 1)), hence there are bound to be smooth transitions at the edges of the clusters. Also, clusters of digits that are somewhat similar are close to each other; that's why, in the latent space, 5 is close to 3.
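One way to see these smooth transitions directly (a hypothetical sketch of mine, assuming the trained `decoder` model from above) is to walk a straight line between two latent points and decode each step:

```python
import numpy as np

def interpolate(z_start, z_end, n_steps=10):
    # Evenly spaced points on the segment between two latent vectors
    z_start = np.asarray(z_start, dtype=float)
    z_end = np.asarray(z_end, dtype=float)
    ts = np.linspace(0.0, 1.0, n_steps)
    return np.stack([(1 - t) * z_start + t * z_end for t in ts])

zs = interpolate([-1.0, 0.5], [2.0, -1.5], n_steps=10)
print(zs.shape)  # (10, 2): ten points in the 2D latent space

# With the trained generator from above, each row decodes to an image:
# images = decoder.predict(zs)  # hypothetical usage; shows one digit morphing into another
```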

Conclusion

In this post we looked at the intuition behind Variational Autoencoder (VAE), its formulation, and its implementation in Keras.

We also saw the difference between VAE and GAN, the two most popular generative models nowadays.

For more math on VAE, be sure to hit the original paper by Kingma et al., 2014. There is also an excellent tutorial on VAE by Carl Doersch. Check out the references section below.

The full code is available in my repo:

https://github.com/wiseodd/generative-models

References

  • Doersch, Carl. "Tutorial on Variational Autoencoders." arXiv preprint arXiv:1606.05908 (2016).
  • Kingma, Diederik P., and Max Welling. "Auto-Encoding Variational Bayes." arXiv preprint arXiv:1312.6114 (2013).
  • "Building Autoencoders in Keras." The Keras Blog. https://blog.keras.io/building-autoencoders-in-keras.html