Entropy, Cross Entropy, and KL Divergence: Formulas and Worked Examples
Cross entropy and KL divergence (Kullback–Leibler divergence) are two of the most widely used quantities in machine learning for measuring how similar two probability distributions are, and they frequently serve as loss functions. This post gives the definitions of entropy, relative entropy, and cross entropy, implements each of them in Python, and verifies the results against the corresponding functions in PyTorch.
Entropy
For convenience, and to match the examples below, all random variables in this post are discrete.
The entropy of a random variable x with probability distribution p is defined as: \[H(p)=-\sum_{x}p(x) \log p(x)\]
Code and Example
import torch
import numpy as np
from torch.distributions import Categorical, kl
# Entropy
p = [0.1, 0.2, 0.3, 0.4]
# Direct computation of H(p) = -sum_x p(x) * log p(x), using the natural log
Hp = -sum([p[i] * np.log(p[i]) for i in range(len(p))])
print(f"H(p) = {Hp}")
# The same quantity via torch.distributions.Categorical
dist_p = Categorical(torch.tensor(p))
print(f"Torch H(p) = {dist_p.entropy().item()}")
Output:
H(p) = 1.2798542258336676
Torch H(p) = 1.2798542976379395
Relative Entropy
Relative entropy is also known as the KL divergence (Kullback–Leibler divergence).
Let p(x) and q(x) be two probability distributions of a random variable x. The relative entropy (KL divergence) of p with respect to q is defined as: \[D(p||q)=\sum_{x}p(x) \log \frac{p(x)}{q(x)}\]
The KL divergence reaches its minimum value of 0 when p(x) and q(x) are identical; the more similar the two distributions, the smaller the KL divergence.
Note that \(D(p||q) \neq D(q||p)\) in general, and the KL divergence does not satisfy the triangle inequality.
If p(x) is the true distribution of the random variable and q(x) is the distribution predicted by a model, the KL divergence can serve as the loss function of a classification problem: training pushes the predicted distribution toward the true distribution.
Code and Example
import torch
import numpy as np
from torch.distributions import Categorical, kl
# KL divergence
p = [0.1, 0.2, 0.3, 0.4]
q = [0.1, 0.1, 0.7, 0.1]
# Direct computation of D(p||q) = sum_x p(x) * log(p(x) / q(x))
Dpq = sum([p[i] * np.log(p[i] / q[i]) for i in range(len(p))])
print(f"D(p, q) = {Dpq}")
# The same quantity via torch.distributions.kl
dist_p = Categorical(torch.tensor(p))
dist_q = Categorical(torch.tensor(q))
print(f"Torch D(p, q) = {kl.kl_divergence(dist_p, dist_q).item()}")
Output:
D(p, q) = 0.43895782244378423
Torch D(p, q) = 0.4389578104019165
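To illustrate the asymmetry and the zero minimum mentioned above, here is a small sketch (added for illustration, reusing the same p and q; it is not part of the original example) that computes D(q||p) and D(p||p):
import numpy as np
# Sketch: the KL divergence is not symmetric, and D(p||p) = 0
p = [0.1, 0.2, 0.3, 0.4]
q = [0.1, 0.1, 0.7, 0.1]
Dqp = sum(q[i] * np.log(q[i] / p[i]) for i in range(len(q)))
Dpp = sum(p[i] * np.log(p[i] / p[i]) for i in range(len(p)))
print(f"D(q, p) = {Dqp}")  # roughly 0.385, different from D(p, q) above
print(f"D(p, p) = {Dpp}")  # 0.0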
Cross Entropy
Rewriting the KL divergence: \[D(p||q)=\sum_{x}p(x) \log p(x) - \sum_{x}p(x) \log q(x)\] \[D(p||q)=-H(p) - \sum_{x}p(x) \log q(x)\]
The cross entropy of p with respect to q is defined as: \[H(p, q)=-\sum_{x}p(x) \log q(x)\]
The KL divergence then becomes: \[D(p||q)=H(p, q) - H(p)\]
In a classification problem the true distribution p(x) is fixed, so H(p) is also fixed and acts as a constant. Minimizing the KL divergence is therefore equivalent to minimizing the cross entropy, which is why cross entropy is used as the loss function for classification problems.
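Written out, since H(p) does not depend on q, minimizing over q gives the same result either way: \[\arg\min_{q} D(p||q) = \arg\min_{q} \left( H(p, q) - H(p) \right) = \arg\min_{q} H(p, q)\]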
Code and Example
import torch
import numpy as np
from torch.distributions import Categorical, kl
# Cross entropy
p = [0.1, 0.2, 0.3, 0.4]
q = [0.1, 0.1, 0.7, 0.1]
# Direct computation of H(p, q) = -sum_x p(x) * log q(x)
Hpq = -sum([p[i] * np.log(q[i]) for i in range(len(p))])
print(f"H(p, q) = {Hpq}")
Output:
H(p, q) = 1.7188120482774516
Together with H(p) and D(p||q) computed earlier, this verifies the formula \(D(p||q)=H(p, q) - H(p)\).
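Plugging in the values from the previous sections: \[H(p, q) - H(p) = 1.71881 - 1.27985 \approx 0.43896 = D(p||q)\]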
Cross Entropy as a Loss Function
Let's look at how cross entropy works as the loss function of a classification problem in practice, following PyTorch's documentation: https://pytorch.org/docs/master/generated/torch.nn.CrossEntropyLoss.html
torch.nn.CrossEntropyLoss(weight: Optional[torch.Tensor] = None, size_average=None, ignore_index: int = -100, reduce=None, reduction: str = 'mean')
The loss takes two inputs, Input and Target. Input has shape (batch_size, C); each row holds the raw, unnormalized scores (logits) the model produces for the C classes, and it should not be passed through softmax first, because CrossEntropyLoss combines LogSoftmax and NLLLoss internally. Target has shape (batch_size); each entry is the index of the true class. The output is a tensor (a scalar under the default reduction='mean').
That sounds complicated, so here is an example: a 4-class problem where the model outputs p = [1, 2, 3, 4] and the true label is 1, i.e. the second class:
import torch
import numpy as np
from torch.distributions import Categorical, kl
from torch.nn import CrossEntropyLoss
# Cross entropy loss
p = [1, 2, 3, 4]
q = [1] # [0, 1, 0, 0] = torch.nn.functional.one_hot(torch.tensor(q), len(p))
# Manual computation: loss = -p[tl] + log(sum_i exp(p[i]))
celoss = -p[q[0]] + np.log(sum([np.exp(i) for i in p]))
print(f"Cross Entropy Loss: {celoss}")
# The same loss via torch.nn.CrossEntropyLoss
loss = CrossEntropyLoss()
tensor_p = torch.FloatTensor(p).unsqueeze(0)  # shape (1, 4): a batch with one sample
tensor_q = torch.tensor(q)                    # the true class index for each sample
output = loss(tensor_p, tensor_q)
print(f"Torch Cross Entropy Loss: {output.item()}")
Output:
Cross Entropy Loss: 2.4401896985611957
Torch Cross Entropy Loss: 2.4401895999908447
To explain (tl denotes the true label, which is 1 in the example above): following the definition of the softmax function, q becomes [0, 1, 0, 0] after one-hot encoding, and the cross entropy between this one-hot distribution and the softmax of p is: \[loss(p, tl) = -\log \frac{\exp(p[tl])}{\sum_{i} \exp(p[i])}\]
which simplifies to:
\[loss(p, tl) = -p[tl] + \log {\sum_{i} \exp(p[i])}\]
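As a sanity check (a minimal sketch added here, not part of the original example), the same number can be reproduced by applying log-softmax and NLLLoss separately, which is what CrossEntropyLoss combines internally:
import torch
import torch.nn.functional as F
# Sketch: CrossEntropyLoss equals NLLLoss applied to the log-softmax of the raw scores
p = torch.tensor([[1.0, 2.0, 3.0, 4.0]])  # shape (1, 4): one sample, four classes
target = torch.tensor([1])                # the true class index
log_probs = F.log_softmax(p, dim=1)       # log of the softmax probabilities
nll = F.nll_loss(log_probs, target)       # negative log-likelihood of the true class
ce = F.cross_entropy(p, target)           # the combined operation
print(nll.item(), ce.item())              # both print roughly 2.4402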
For brevity, this post has explained entropy, KL divergence, and cross entropy only intuitively, through formulas and worked examples; readers interested in where these definitions come from and how they are interpreted in information theory can consult dedicated references on the subject.
Complete Code
import torch
import numpy as np
from torch.distributions import Categorical, kl
from torch.nn import CrossEntropyLoss
# Entropy
p = [0.1, 0.2, 0.3, 0.4]
Hp = -sum([p[i] * np.log(p[i]) for i in range(len(p))])
print(f"H(p) = {Hp}")
dist_p = Categorical(torch.tensor(p))
print(f"Torch H(p) = {dist_p.entropy().item()}")
# KL divergence
p = [0.1, 0.2, 0.3, 0.4]
q = [0.1, 0.1, 0.7, 0.1]
Dpq = sum([p[i] * np.log(p[i] / q[i]) for i in range(len(p))])
print(f"D(p, q) = {Dpq}")
dist_p = Categorical(torch.tensor(p))
dist_q = Categorical(torch.tensor(q))
print(f"Torch D(p, q) = {kl.kl_divergence(dist_p, dist_q).item()}")
# Cross entropy
p = [0.1, 0.2, 0.3, 0.4]
q = [0.1, 0.1, 0.7, 0.1]
Hpq = -sum([p[i] * np.log(q[i]) for i in range(len(p))])
print (f"H(p, q) = {Hpq}")
# Cross entropy loss
p = [1, 2, 3, 4]
q = [1] # [0, 1, 0, 0] = torch.nn.functional.one_hot(torch.tensor(q), len(p))
celoss = -p[q[0]] + np.log(sum([np.exp(i) for i in p]))
print (f"Cross Entropy Loss: {celoss}")
loss = CrossEntropyLoss()
tensor_p = torch.FloatTensor(p).unsqueeze(0)
tensor_q = torch.tensor(q)
output = loss(tensor_p, tensor_q)
print (f"Torch Cross Entropy Loss: {output.item()}")