Setting Different Learning Rates for Different Layers

When fine-tuning a pretrained model, you often need to
(1) set a smaller learning rate for the parameters of the pretrained backbone, and
(2) use a larger learning rate for the parts outside the backbone (e.g. a newly added head).

PyTorch supports this through optimizer parameter groups: each dict passed to the optimizer defines one group and can override the optimizer's default hyperparameters. In the example below, linear3 is split into its own group with lr=0.0005, while linear1 and linear2 fall back to the default lr=0.001; the same mechanism applies regardless of which part should get the smaller rate.

from collections import OrderedDict
import torch.nn as nn
import torch.optim as optim

net = nn.Sequential(OrderedDict([
    ("linear1", nn.Linear(10, 20)),
    ("linear2", nn.Linear(20, 30)),
    ("linear3", nn.Linear(30, 40))]))

# Record the ids of linear3's parameters so that every other
# parameter can be filtered into a separate "base" group.
linear3_params = list(map(id, net.linear3.parameters()))
base_params = filter(lambda p: id(p) not in linear3_params, net.parameters())

# Two parameter groups: base_params inherits the default lr=0.001,
# while the linear3 group overrides it with lr=0.0005.
optimizer = optim.SGD([
    {'params': base_params},
    {'params': net.linear3.parameters(), 'lr': 0.0005}],
    lr=0.001, momentum=0.9)

print(optimizer)
print(optimizer.param_groups[0]['lr'])  # 0.001 (default)
print(optimizer.param_groups[1]['lr'])  # 0.0005 (per-group override)
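
As a minimal alternative sketch (not part of the original post), the same split can be done by parameter name with named_parameters(), reusing the net defined above; the "linear3" prefix is an assumption that matches the module name in the OrderedDict. This variant also places the smaller rate on the backbone layers, matching the setup described at the top.

# Alternative sketch (assumption): group parameters by name prefix
# instead of collecting id()s. "linear3" matches the module name above.
head_params = [p for n, p in net.named_parameters() if n.startswith("linear3")]
backbone_params = [p for n, p in net.named_parameters() if not n.startswith("linear3")]

optimizer = optim.SGD([
    {'params': backbone_params, 'lr': 0.0005},  # smaller lr for the pretrained backbone
    {'params': head_params}],                   # new head falls back to the default lr below
    lr=0.001, momentum=0.9)

A name-based split avoids building id() lists and keeps working if more backbone layers are added, as long as the naming convention holds.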


Reposted from blog.csdn.net/weixin_37804469/article/details/133146846