Loading, Saving and Freezing Embeddings

This notebook covers: how to load custom word embeddings in TorchText, how to save all of the embeddings we learn during training, and how to freeze/unfreeze embeddings during training.

Loading Custom Embeddings

First, let's look at how to load a custom set of embeddings.

The embeddings need to be formatted so that each line starts with the word, followed by the values of its embedding vector, all separated by spaces. All vectors need to have the same number of elements.

Let's look at the custom embeddings provided with these tutorials: 20-dimensional embeddings for 7 words, shown below.

with open('custom_embeddings/embeddings.txt', 'r') as f:
    print(f.read())
good 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
great 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
awesome 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
bad -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0
terrible -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0
awful -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0 -1.0
kwyjibo 0.5 -0.5 0.5 -0.5 0.5 -0.5 0.5 -0.5 0.5 -0.5 0.5 -0.5 0.5 -0.5 0.5 -0.5 0.5 -0.5 0.5 -0.5
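
As an aside, here is a minimal sketch (not part of the original notebook) of a format check, confirming that every line is a word followed by the same number of values:

with open('custom_embeddings/embeddings.txt', 'r') as f:
    rows = [line.split() for line in f if line.strip()]

#each row is [word, value_1, ..., value_n]; every row must share the same n
dims = {len(row) - 1 for row in rows}
assert len(dims) == 1, f'inconsistent vector sizes: {dims}'
print(f'{len(rows)} words, {dims.pop()} dimensions')  #7 words, 20 dimensions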

Now, let's set up the fields.

import torch
from torchtext import data

SEED = 1234

torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True

TEXT = data.Field(tokenize='spacy', tokenizer_language='en_core_web_sm')
LABEL = data.LabelField(dtype=torch.float)

Then we'll load the dataset and create the validation set.

from torchtext import datasets
import random

train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)

train_data, valid_data = train_data.split(random_state = random.seed(SEED))

We can only load our custom embeddings after they have been turned into a Vectors object.

We create a Vectors object by passing it the location of the embeddings (name), the location of the cached embeddings (cache), and a function (unk_init) that will later initialize the embeddings of tokens in our dataset that are not within the custom embeddings. As in previous notebooks, we initialize these to $\mathcal{N}(0,1)$.

import torchtext.vocab as vocab

custom_embeddings = vocab.Vectors(name = 'custom_embeddings/embeddings.txt',
                                  cache = 'custom_embeddings',
                                  unk_init = torch.Tensor.normal_)

To check the embeddings have loaded correctly, we can print out the words loaded from the custom embeddings.

print(custom_embeddings.stoi)
{'good': 0, 'great': 1, 'awesome': 2, 'bad': 3, 'terrible': 4, 'awful': 5, 'kwyjibo': 6}

We can also print out the embedding values directly.

print(custom_embeddings.vectors)

tensor([[ 1.0000,  1.0000,  1.0000,  1.0000,  1.0000,  1.0000,  1.0000,  1.0000,
          1.0000,  1.0000,  1.0000,  1.0000,  1.0000,  1.0000,  1.0000,  1.0000,
          1.0000,  1.0000,  1.0000,  1.0000],
        [ 1.0000,  1.0000,  1.0000,  1.0000,  1.0000,  1.0000,  1.0000,  1.0000,
          1.0000,  1.0000,  1.0000,  1.0000,  1.0000,  1.0000,  1.0000,  1.0000,
          1.0000,  1.0000,  1.0000,  1.0000],
        [ 1.0000,  1.0000,  1.0000,  1.0000,  1.0000,  1.0000,  1.0000,  1.0000,
          1.0000,  1.0000,  1.0000,  1.0000,  1.0000,  1.0000,  1.0000,  1.0000,
          1.0000,  1.0000,  1.0000,  1.0000],
        [-1.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000,
         -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000,
         -1.0000, -1.0000, -1.0000, -1.0000],
        [-1.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000,
         -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000,
         -1.0000, -1.0000, -1.0000, -1.0000],
        [-1.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000,
         -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000, -1.0000,
         -1.0000, -1.0000, -1.0000, -1.0000],
        [ 0.5000, -0.5000,  0.5000, -0.5000,  0.5000, -0.5000,  0.5000, -0.5000,
          0.5000, -0.5000,  0.5000, -0.5000,  0.5000, -0.5000,  0.5000, -0.5000,
          0.5000, -0.5000,  0.5000, -0.5000]])

We then build our vocabulary, passing in the Vectors object.

Note that unk_init should be declared when creating the Vectors, not here!

MAX_VOCAB_SIZE = 25_000

TEXT.build_vocab(train_data, 
                 max_size = MAX_VOCAB_SIZE, 
                 vectors = custom_embeddings)

LABEL.build_vocab(train_data)

The vocabulary vectors for words in the custom embeddings should now match what we loaded.

TEXT.vocab.vectors[TEXT.vocab.stoi['good']]
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
        1., 1.])
TEXT.vocab.vectors[TEXT.vocab.stoi['bad']]
tensor([-1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1., -1.,
        -1., -1., -1., -1., -1., -1.])

Words that are in our custom embeddings but not in our dataset's vocabulary are handled by the unk_init function we passed earlier, i.e. $\mathcal{N}(0,1)$. They are also the same size as our custom embeddings (20 dimensions).

TEXT.vocab.vectors[TEXT.vocab.stoi['kwyjibo']]
tensor([-0.1117, -0.4966,  0.1631, -0.8817,  0.2891,  0.4899, -0.3853, -0.7120,
         0.6369, -0.7141, -1.0831, -0.5547, -1.3248,  0.6970, -0.6631,  1.2158,
        -2.5273,  1.4778, -0.1696, -0.9919])

The rest of the set-up is the same as when using the GloVe vectors; the next step is to set up the iterators.

BATCH_SIZE = 64

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
    (train_data, valid_data, test_data), 
    batch_size = BATCH_SIZE,
    device = device)

Then we define our model.

import torch.nn as nn
import torch.nn.functional as F

class CNN(nn.Module):
    def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim, 
                 dropout, pad_idx):
        super().__init__()
        
        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
        
        self.convs = nn.ModuleList([
                                    nn.Conv2d(in_channels = 1, 
                                              out_channels = n_filters, 
                                              kernel_size = (fs, embedding_dim)) 
                                    for fs in filter_sizes
                                    ])
        
        self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
        
        self.dropout = nn.Dropout(dropout)
        
    def forward(self, text):
        
        #text = [sent len, batch size]
        
        text = text.permute(1, 0)
                
        #text = [batch size, sent len]
        
        embedded = self.embedding(text)
                
        #embedded = [batch size, sent len, emb dim]
        
        embedded = embedded.unsqueeze(1)
        
        #embedded = [batch size, 1, sent len, emb dim]
        
        conved = [F.relu(conv(embedded)).squeeze(3) for conv in self.convs]
            
        #conved_n = [batch size, n_filters, sent len - filter_sizes[n] + 1]
        
        pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in conved]
        
        #pooled_n = [batch size, n_filters]
        
        cat = self.dropout(torch.cat(pooled, dim = 1))

        #cat = [batch size, n_filters * len(filter_sizes)]
            
        return self.fc(cat)

We then initialize our model, making sure embedding_dim is the same as our custom embedding dimension, i.e. 20.

INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 20
N_FILTERS = 100
FILTER_SIZES = [3,4,5]
OUTPUT_DIM = 1
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]

model = CNN(INPUT_DIM, EMBEDDING_DIM, N_FILTERS, FILTER_SIZES, OUTPUT_DIM, DROPOUT, PAD_IDX)

This model has fewer parameters than before, due to the smaller embedding size used.

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f'The model has {count_parameters(model):,} trainable parameters')
The model has 524,641 trainable parameters
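
As a rough sanity check (a sketch of the arithmetic, assuming the vocabulary ends up with 25,002 entries: the 25,000 most frequent words plus the <unk> and <pad> tokens), the parameter count can be reproduced by hand:

emb_params = 25_002 * 20                                          #embedding matrix
conv_params = sum(100 * (1 * fs * 20) + 100 for fs in [3, 4, 5])  #conv weights + biases
fc_params = (3 * 100) * 1 + 1                                     #linear weights + bias
print(emb_params + conv_params + fc_params)                       #524641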

Next, we initialize the embedding layer to use the vocabulary vectors.

embeddings = TEXT.vocab.vectors

model.embedding.weight.data.copy_(embeddings)
tensor([[-0.1117, -0.4966,  0.1631,  ...,  1.4778, -0.1696, -0.9919],
        [-0.5675, -0.2772, -2.1834,  ...,  0.8504,  1.0534,  0.3692],
        [-0.0552, -0.6125,  0.7500,  ..., -0.1261, -1.6770,  1.2068],
        ...,
        [ 0.5383, -0.1504,  1.6720,  ..., -0.3857, -1.0168,  0.1849],
        [ 2.5640, -0.8564, -0.0219,  ..., -0.3389,  0.2203, -1.6119],
        [ 0.1203,  1.5286,  0.6824,  ...,  0.3330, -0.6704,  0.5883]])

Then we initialize the unknown and padding token embeddings to all zeros.

UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]

model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)

Following the standard procedure, we create the optimizer.

import torch.optim as optim

optimizer = optim.Adam(model.parameters())

Define the loss function (criterion).

criterion = nn.BCEWithLogitsLoss()

Then place the loss function and model on the GPU.

model = model.to(device)
criterion = criterion.to(device)

Create a function to calculate accuracy.

def binary_accuracy(preds, y):
    #round the sigmoid of the logits to 0/1 and compare with the labels
    rounded_preds = torch.round(torch.sigmoid(preds))
    correct = (rounded_preds == y).float()
    acc = correct.sum() / len(correct)
    return acc
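
For example, with three made-up logits (an illustration only):

preds = torch.tensor([0.8, -1.2, 2.0])   #sigmoid -> ~0.69, ~0.23, ~0.88 -> rounded to 1, 0, 1
labels = torch.tensor([1., 0., 0.])
print(binary_accuracy(preds, labels))    #tensor(0.6667), i.e. 2 out of 3 correct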

Then implement our training and evaluation functions.

def train(model, iterator, optimizer, criterion):
    
    epoch_loss = 0
    epoch_acc = 0
    
    model.train()
    
    for batch in iterator:
        
        optimizer.zero_grad()
                
        predictions = model(batch.text).squeeze(1)
        
        loss = criterion(predictions, batch.label)
        
        acc = binary_accuracy(predictions, batch.label)
        
        loss.backward()
            
        optimizer.step()
        
        epoch_loss += loss.item()
        epoch_acc += acc.item()
                
    return epoch_loss / len(iterator), epoch_acc / len(iterator)
def evaluate(model, iterator, criterion):
    
    epoch_loss = 0
    epoch_acc = 0
    
    model.eval()
    
    with torch.no_grad():
    
        for batch in iterator:
            
            predictions = model(batch.text).squeeze(1)
            
            loss = criterion(predictions, batch.label)
            
            acc = binary_accuracy(predictions, batch.label)

            epoch_loss += loss.item()
            epoch_acc += acc.item()
        
    return epoch_loss / len(iterator), epoch_acc / len(iterator)

…and our helpful function that tells us how long an epoch takes.

import time

def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs

We've finally reached the point of training our model!

Freezing and Unfreezing Embeddings

We are going to train our model for 10 epochs. For the first 5 epochs we will freeze the weights (parameters) of the embedding layer. For the last 5 epochs we will allow the embeddings to be trained.

Why would we want to do this? Sometimes the pre-trained word embeddings we use are already good enough and do not need to be fine-tuned with our model. If we keep the embeddings frozen, we do not have to calculate gradients for, or update, these parameters, which gives us faster training times. This doesn't really apply to the model used here; we mainly cover it to show how it is done. Another reason is that if our model has a large number of parameters it may make training difficult, so by freezing our pre-trained embeddings we reduce the number of parameters that need to be learned.

To freeze the weights, we set model.embedding.weight.requires_grad to False. This causes no gradients to be calculated for the weights in the embedding layer, so these parameters are not updated when optimizer.step() is called.

Then, during training, we check whether FREEZE_FOR epochs (which we set to 5) have elapsed. If they have, we set model.embedding.weight.requires_grad back to True, telling PyTorch to calculate gradients for the embedding layer and update them with the optimizer.

N_EPOCHS = 10
FREEZE_FOR = 5

best_valid_loss = float('inf')

#freeze embeddings
model.embedding.weight.requires_grad = unfrozen = False

for epoch in range(N_EPOCHS):

    start_time = time.time()
    
    train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
    valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
    
    end_time = time.time()

    epoch_mins, epoch_secs = epoch_time(start_time, end_time)
    
    print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s | Frozen? {not unfrozen}')
    print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
    print(f'\t Val. Loss: {valid_loss:.3f} |  Val. Acc: {valid_acc*100:.2f}%')
    
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'tutC-model.pt')
    
    if (epoch + 1) >= FREEZE_FOR:
        #unfreeze embeddings
        model.embedding.weight.requires_grad = unfrozen = True
Epoch: 01 | Epoch Time: 0m 7s | Frozen? True
	Train Loss: 0.724 | Train Acc: 53.68%
	 Val. Loss: 0.658 |  Val. Acc: 62.27%
Epoch: 02 | Epoch Time: 0m 6s | Frozen? True
	Train Loss: 0.670 | Train Acc: 59.36%
	 Val. Loss: 0.626 |  Val. Acc: 67.51%
Epoch: 03 | Epoch Time: 0m 6s | Frozen? True
	Train Loss: 0.636 | Train Acc: 63.62%
	 Val. Loss: 0.592 |  Val. Acc: 70.22%
Epoch: 04 | Epoch Time: 0m 6s | Frozen? True
	Train Loss: 0.613 | Train Acc: 66.22%
	 Val. Loss: 0.573 |  Val. Acc: 71.77%
Epoch: 05 | Epoch Time: 0m 6s | Frozen? True
	Train Loss: 0.599 | Train Acc: 67.40%
	 Val. Loss: 0.569 |  Val. Acc: 70.86%
Epoch: 06 | Epoch Time: 0m 7s | Frozen? False
	Train Loss: 0.577 | Train Acc: 69.53%
	 Val. Loss: 0.520 |  Val. Acc: 76.17%
Epoch: 07 | Epoch Time: 0m 7s | Frozen? False
	Train Loss: 0.544 | Train Acc: 72.21%
	 Val. Loss: 0.487 |  Val. Acc: 78.03%
Epoch: 08 | Epoch Time: 0m 7s | Frozen? False
	Train Loss: 0.507 | Train Acc: 74.96%
	 Val. Loss: 0.450 |  Val. Acc: 80.02%
Epoch: 09 | Epoch Time: 0m 7s | Frozen? False
	Train Loss: 0.469 | Train Acc: 77.72%
	 Val. Loss: 0.420 |  Val. Acc: 81.79%
Epoch: 10 | Epoch Time: 0m 7s | Frozen? False
	Train Loss: 0.426 | Train Acc: 80.28%
	 Val. Loss: 0.392 |  Val. Acc: 82.76%

Another option, instead of using the FREEZE_FOR condition, is to unfreeze the embeddings whenever the validation loss stops decreasing, using the following snippet:

if valid_loss < best_valid_loss:
    best_valid_loss = valid_loss
    torch.save(model.state_dict(), 'tutC-model.pt')
else:
    #unfreeze embeddings
    model.embedding.weight.requires_grad = unfrozen = True
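
A related variant (a sketch only, not what the notebook above does) is to leave the frozen embedding weights out of the optimizer entirely and add them back as a parameter group when unfreezing:

#build the optimizer only over parameters that currently require gradients
model.embedding.weight.requires_grad = False
optimizer = optim.Adam([p for p in model.parameters() if p.requires_grad])

#...later, when unfreezing:
model.embedding.weight.requires_grad = True
optimizer.add_param_group({'params': [model.embedding.weight]})

Because the embedding weights are not registered with Adam while frozen, no optimizer state (such as momentum estimates) is accumulated for them during that period.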

We then load the saved model and evaluate it on the test set.

model.load_state_dict(torch.load('tutC-model.pt'))

test_loss, test_acc = evaluate(model, test_iterator, criterion)

print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
Test Loss: 0.396 | Test Acc: 82.36%

Saving Embeddings

We might want to re-use the embeddings we have trained here with another model. To do this, we'll write a function that loops through our vocabulary, getting the word and embedding for each entry, and writes them to a text file in the same format as our custom embeddings, so that they can be used with TorchText again.

Currently, TorchText Vectors seem to have issues loading certain unicode words, so we skip these and only write out words without unicode symbols.

from tqdm import tqdm

def write_embeddings(path, embeddings, vocab):
    
    with open(path, 'w') as f:
        for i, embedding in enumerate(tqdm(embeddings)):
            word = vocab.itos[i]
            #skip words with unicode symbols
            if len(word) != len(word.encode()):
                continue
            vector = ' '.join([str(i) for i in embedding.tolist()])
            f.write(f'{word} {vector}\n')
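
The skip test works because str.encode() defaults to UTF-8, where any non-ASCII character takes more than one byte, so the encoded length differs from the character count. For example:

print(len('great'), len('great'.encode()))  #5 5 -> kept
print(len('café'), len('café'.encode()))    #4 5 -> skipped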

We'll write our embeddings to trained_embeddings.txt.

write_embeddings('custom_embeddings/trained_embeddings.txt', 
                 model.embedding.weight.data, 
                 TEXT.vocab)

To check they've been written correctly, we can load them as Vectors.

trained_embeddings = vocab.Vectors(name = 'custom_embeddings/trained_embeddings.txt',
                                   cache = 'custom_embeddings',
                                   unk_init = torch.Tensor.normal_)

Finally, let's print out the first 5 rows of the loaded vectors and of our model's embedding weights, checking that they are the same.

print(trained_embeddings.vectors[:5])

tensor([[-0.2573, -0.2088,  0.2413, -0.1549,  0.1940, -0.1466, -0.2195, -0.1011,
         -0.1327,  0.1803,  0.2369, -0.2182,  0.1543, -0.2150, -0.0699, -0.0430,
         -0.1958, -0.0506, -0.0059, -0.0024],
        [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,
          0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,
          0.0000,  0.0000,  0.0000,  0.0000],
        [-0.1427, -0.4414,  0.7181, -0.5751, -0.3183,  0.0552, -1.6764, -0.3177,
          0.6592,  1.6143, -0.1920, -0.1881, -0.4321, -0.8578,  0.5266,  0.5243,
         -0.7083, -0.0048, -1.4680,  1.1425],
        [-0.4700, -0.0363,  0.0560, -0.7394, -0.2412, -0.4197, -1.7096,  0.9444,
          0.9633,  0.3703, -0.2243, -1.5279, -1.9086,  0.5718, -0.5721, -0.6015,
          0.3579, -0.3834,  0.8079,  1.0553],
        [-0.7055,  0.0954,  0.4646, -1.6595,  0.1138,  0.2208, -0.0220,  0.7397,
         -0.1153,  0.3586,  0.3040, -0.6414, -0.1579, -0.2738, -0.6942,  0.0083,
          1.4097,  1.5225,  0.6409,  0.0076]])
print(model.embedding.weight.data[:5])
tensor([[-0.2573, -0.2088,  0.2413, -0.1549,  0.1940, -0.1466, -0.2195, -0.1011,
         -0.1327,  0.1803,  0.2369, -0.2182,  0.1543, -0.2150, -0.0699, -0.0430,
         -0.1958, -0.0506, -0.0059, -0.0024],
        [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,
          0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,
          0.0000,  0.0000,  0.0000,  0.0000],
        [-0.1427, -0.4414,  0.7181, -0.5751, -0.3183,  0.0552, -1.6764, -0.3177,
          0.6592,  1.6143, -0.1920, -0.1881, -0.4321, -0.8578,  0.5266,  0.5243,
         -0.7083, -0.0048, -1.4680,  1.1425],
        [-0.4700, -0.0363,  0.0560, -0.7394, -0.2412, -0.4197, -1.7096,  0.9444,
          0.9633,  0.3703, -0.2243, -1.5279, -1.9086,  0.5718, -0.5721, -0.6015,
          0.3579, -0.3834,  0.8079,  1.0553],
        [-0.7055,  0.0954,  0.4646, -1.6595,  0.1138,  0.2208, -0.0220,  0.7397,
         -0.1153,  0.3586,  0.3040, -0.6414, -0.1579, -0.2738, -0.6942,  0.0083,
          1.4097,  1.5225,  0.6409,  0.0076]], device='cuda:0')

Everything looks good! The only difference between the two is the removal of the roughly 50 words in the vocabulary that contain unicode symbols.
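
To double-check that figure, one way (a quick sketch) is to count the vocabulary entries that write_embeddings skipped:

n_skipped = sum(len(word) != len(word.encode()) for word in TEXT.vocab.itos)
print(n_skipped)  #number of vocabulary words containing non-ASCII characters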


Reposted from blog.csdn.net/weixin_40605573/article/details/113252419