Copyright notice: This is an original article by the blogger; when reposting, please credit the blogger and link to the original post. https://blog.csdn.net/dss_dssssd/article/details/83992701
This post uses code to explain how Xavier and Kaiming initialization rely on the `_calculate_fan_in_and_fan_out` function to compute a layer's fan_in (number of input units) and fan_out (number of output units), focusing on the Linear and Conv2d cases.
import torch.nn as nn

m_c = nn.Conv2d(16, 33, 3, stride=2)
m_l = nn.Linear(1, 10)
m_c.weight.size()
m_l.weight.size()

out:
torch.Size([33, 16, 3, 3])
torch.Size([10, 1])
Note that the Linear weight is 2-dimensional, while the Conv2d weight is 4-dimensional.
The function first checks the tensor's dimensionality; if it is 2-D, the layer is Linear:

if dimensions == 2:  # Linear
    fan_in = tensor.size(1)
    fan_out = tensor.size(0)
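The 2-D branch can be mirrored with plain Python on the weight's shape tuple (a minimal sketch; `linear_fans` is a hypothetical helper name, not part of PyTorch):

```python
def linear_fans(shape):
    """For a 2-D Linear weight of shape (out_features, in_features),
    fan_in is the second dimension and fan_out the first."""
    out_features, in_features = shape
    return in_features, out_features  # (fan_in, fan_out)

# m_l = nn.Linear(1, 10) has weight shape (10, 1)
print(linear_fans((10, 1)))  # -> (1, 10)
```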
In that case fan_in = in_features and fan_out = out_features; for m_l this gives fan_in = 1 and fan_out = 10.

If the tensor has more than 2 dimensions, it is a Conv2d weight: the first dimension is out_channels, the second is in_channels, and the third and fourth are the kernel_size. In the else branch, the code first reads the first two dimensions, then uses tensor[0][0].numel() to get the number of elements in tensor[0][0], i.e. the receptive field size kernel_h * kernel_w; for m_c that is 3 * 3 = 9. Multiplying this value by num_input_fmaps and num_output_fmaps then yields fan_in and fan_out respectively.
else:
    num_input_fmaps = tensor.size(1)
    num_output_fmaps = tensor.size(0)
    receptive_field_size = 1
    if tensor.dim() > 2:
        receptive_field_size = tensor[0][0].numel()
    fan_in = num_input_fmaps * receptive_field_size
    fan_out = num_output_fmaps * receptive_field_size
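Both branches can be combined into a pure-Python sketch that operates on a weight shape tuple instead of a tensor (the function name `calculate_fans` is my own; this mirrors the logic above rather than calling PyTorch):

```python
def calculate_fans(shape):
    """Pure-Python mirror of the fan_in/fan_out logic above,
    taking a weight shape tuple instead of a tensor."""
    if len(shape) < 2:
        raise ValueError("fan_in/fan_out requires at least 2 dimensions")
    if len(shape) == 2:  # Linear: (out_features, in_features)
        return shape[1], shape[0]
    # Conv: (out_channels, in_channels, *kernel_size)
    num_output_fmaps, num_input_fmaps = shape[0], shape[1]
    receptive_field_size = 1
    for s in shape[2:]:  # e.g. (3, 3) -> 9, the tensor[0][0].numel() value
        receptive_field_size *= s
    return (num_input_fmaps * receptive_field_size,
            num_output_fmaps * receptive_field_size)

# Conv2d(16, 33, 3): weight shape (33, 16, 3, 3)
print(calculate_fans((33, 16, 3, 3)))  # -> (144, 297)
print(calculate_fans((10, 1)))         # -> (1, 10)
```

For m_c, fan_in = 16 * 9 = 144 and fan_out = 33 * 9 = 297, matching the multiplication described above.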
Here is the test code:
m_c = nn.Conv2d(16, 33, 3, stride=2)
m_l = nn.Linear(1, 10)
m_c.weight.size()
Out[30]: torch.Size([33, 16, 3, 3])
m_l.weight.size()
Out[31]: torch.Size([10, 1])
m_c.weight[0][0]
Out[32]:
tensor([[-0.0667, 0.0241, 0.0701],
[-0.0209, 0.0364, 0.0826],
[ 0.0803, -0.0535, 0.0316]], grad_fn=<SelectBackward>)
m_c.weight[0][0].numel()
Out[33]: 9
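To connect back to the post's opening: Xavier uniform initialization draws weights from U(-a, a) with a = gain * sqrt(6 / (fan_in + fan_out)), which is where these fan values are consumed. A sketch of that bound for m_c, using the fans computed above (144 and 297); `xavier_uniform_bound` is a hypothetical helper written for illustration:

```python
import math

def xavier_uniform_bound(fan_in, fan_out, gain=1.0):
    """Bound a of the U(-a, a) distribution used by Xavier uniform
    initialization: a = gain * sqrt(6 / (fan_in + fan_out))."""
    return gain * math.sqrt(6.0 / (fan_in + fan_out))

# m_c: fan_in = 16 * 9 = 144, fan_out = 33 * 9 = 297
print(xavier_uniform_bound(144, 297))  # ~= 0.1166
```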