KL divergence: over-estimation vs. under-estimation

Reference: a very intuitive example is shown in this blog post:

https://wiseodd.github.io/techblog/2016/12/21/forward-reverse-kl/

Let the true distribution be defined as $P(X)$, and the approximate distribution as $Q(X)$.
Forward KL: $D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{x \in X} P(x) \log\left(\frac{P(x)}{Q(x)}\right)$
Discussion of the different cases:

  1. If P(x) = 0, the log term can be ignored (it contributes nothing to the sum), so Q(x) is unconstrained wherever P(x) = 0: Q(x) may assign any probability to those points.
  2. If P(x) > 0, the log term does affect the optimization, so Q(x) is pushed to assign probabilities as close as possible to P(x) wherever P(x) > 0. Forward KL therefore forces Q to cover every region where P has mass (zero-avoiding), which tends to over-estimate the support of P.
  3. The figure below shows an example of the optimal Q(x) under forward KL (see also the numerical sketch after this list).
    [Figure: optimal Q(x) under forward KL]
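As a minimal numerical sketch (not part of the original post), the snippet below fits a single-mode Q to a bimodal discrete P by minimizing the forward KL. It assumes NumPy and SciPy are available; the distribution P, the Q family, and the optimizer settings are all illustrative choices, not anything prescribed by the post.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative bimodal target P over 10 discrete states (the zeros illustrate
# case 1: those states place no constraint on Q under forward KL).
xs = np.arange(10)
P = np.array([0.0, 0.30, 0.18, 0.02, 0.0, 0.0, 0.02, 0.18, 0.30, 0.0])

def make_q(params):
    """Single-mode, Gaussian-shaped Q(x) parameterized by (mu, log_sigma)."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    q = np.exp(-0.5 * ((xs - mu) / sigma) ** 2)
    return q / q.sum()

def forward_kl(params):
    q = make_q(params)
    # Sum only over states with P(x) > 0: states with P(x) = 0 contribute
    # nothing, while Q(x) must stay positive wherever P(x) > 0 (zero-avoiding).
    mask = P > 0
    return np.sum(P[mask] * np.log(P[mask] / q[mask]))

res = minimize(forward_kl, x0=np.array([3.0, 0.0]), method="Nelder-Mead")
print("optimal (mu, sigma):", res.x[0], np.exp(res.x[1]))
print("Q covers both modes of P:", np.round(make_q(res.x), 3))
```

Because forward KL averages the log-ratio under P, the fitted single-mode Q ends up wide and roughly centered between the two modes of P (mean-seeking, zero-avoiding behavior).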

Reverse KL: $D_{\mathrm{KL}}(Q \,\|\, P) = \sum_{x \in X} Q(x) \log\left(\frac{Q(x)}{P(x)}\right)$

  1. If Q(x) = 0, the log term can be ignored (it contributes nothing to the sum), so Q(x) is free to assign zero probability even where P(x) > 0, i.e., it may drop modes of P entirely.
  2. If Q(x) > 0, the log term is taken into account during optimization, so Q(x) is pushed to assign probabilities as close as possible to P(x) wherever Q(x) > 0. Reverse KL strongly penalizes putting mass where P(x) is near zero (zero-forcing), so Q tends to lock onto a single mode and under-estimate the support of P.
  3. The figure below shows an example of the optimal Q(x) under reverse KL (see also the numerical sketch after this list).
    [Figure: optimal Q(x) under reverse KL]
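For contrast, and again only as a sketch under the same assumptions (NumPy/SciPy, made-up P and Q family), the snippet below minimizes the reverse KL for the same bimodal P. A small epsilon keeps the logs finite where P(x) = 0, which is exactly where reverse KL heavily penalizes any mass from Q.

```python
import numpy as np
from scipy.optimize import minimize

# Same illustrative bimodal P and single-mode Q family as in the forward-KL sketch.
xs = np.arange(10)
P = np.array([0.0, 0.30, 0.18, 0.02, 0.0, 0.0, 0.02, 0.18, 0.30, 0.0])

def make_q(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    q = np.exp(-0.5 * ((xs - mu) / sigma) ** 2)
    return q / q.sum()

def reverse_kl(params, eps=1e-12):
    q = make_q(params)
    # Terms with Q(x) = 0 contribute nothing; wherever P(x) ~ 0, any Q(x) > 0
    # incurs a large penalty, so the optimum avoids those states (zero-forcing).
    return np.sum(q * (np.log(q + eps) - np.log(P + eps)))

res = minimize(reverse_kl, x0=np.array([6.0, 0.0]), method="Nelder-Mead")
print("optimal (mu, sigma):", res.x[0], np.exp(res.x[1]))
print("Q locks onto a single mode of P:", np.round(make_q(res.x), 3))
```

Starting near x = 6, the fit collapses onto the right-hand mode of P and assigns essentially no mass elsewhere, which is the under-estimation (mode-seeking) behavior the title refers to.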


Reposted from blog.csdn.net/weixin_32334291/article/details/89267234