Facial Recognition with a Siamese Network (PyTorch)

Copyright notice: This is the author's original article; it may not be reposted without the author's permission. https://blog.csdn.net/baidu_36669549/article/details/84978244

References

Article link (some of the images in that article are now broken.)

A Siamese network is also called a twin network; for a detailed introduction, see this article.

The basic architecture of the network is shown in the figure below.

Now, straight to the details:

1. Dataset

The dataset is the AT&T face database, a collection of image files with the .pgm extension; they can be opened in Sublime Text.

They are in binary P5 format, 92 pixels wide and 112 pixels high.

They can be visualized in Python with code like the following:
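A P5 file is a short ASCII header (magic number, width, height, maximum gray value) followed by the raw pixel bytes. A minimal header parser, as a sketch (the helper name is mine, not from the original post; for brevity it ignores PGM comment lines):

```python
def parse_pgm_header(data: bytes):
    """Parse the ASCII header of a binary (P5) PGM file.

    Returns (width, height, maxval). Hypothetical helper for illustration;
    it does not handle '#' comment lines.
    """
    # maxsplit=4 leaves the raw pixel bytes untouched in the last field.
    fields = data.split(maxsplit=4)
    if fields[0] != b"P5":
        raise ValueError("not a binary PGM (P5) file")
    width, height, maxval = int(fields[1]), int(fields[2]), int(fields[3])
    return width, height, maxval

# An AT&T face image begins with a header like b"P5\n92 112\n255\n",
# followed by 92 * 112 gray-value bytes.
header = b"P5\n92 112\n255\n" + bytes(92 * 112)
print(parse_pgm_header(header))
```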

from PIL import Image
import os

path = "F:/Facial-Siamese/Siamese-Networks/data/faces/testing/s7/"
files = os.listdir(path)

for name in files:
    im = Image.open(path + name)
    im.show()                      # display the image in the default viewer
    stem = name[:-4]               # strip the ".pgm" extension
    im.save(path + stem + ".bmp")  # save a .bmp copy alongside the original

While visualizing, I also converted the images to .bmp; in fact, either format can be used for training.
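Training a Siamese network needs image pairs labelled same (label 0) or different (label 1). A minimal sketch of such a pair dataset over in-memory tensors (the class name and the 100×100 size are my own choices; the original repo builds pairs from the AT&T folder structure instead):

```python
import random
import torch
from torch.utils.data import Dataset

class PairDataset(Dataset):
    """Yields (img1, img2, label): label 0.0 for same class, 1.0 for different."""

    def __init__(self, images, labels):
        self.images = images                     # tensor of shape (N, 1, H, W)
        self.labels = labels                     # class id per image
        self.by_class = {}
        for idx, c in enumerate(labels):
            self.by_class.setdefault(int(c), []).append(idx)

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        c = int(self.labels[idx])
        if random.random() < 0.5:                # positive pair: same class
            j = random.choice(self.by_class[c])
            label = 0.0
        else:                                    # negative pair: other class
            other = random.choice([k for k in self.by_class if k != c])
            j = random.choice(self.by_class[other])
            label = 1.0
        return self.images[idx], self.images[j], torch.tensor([label])

# Smoke test with random "images" standing in for the face crops.
imgs = torch.randn(8, 1, 100, 100)
cls = [0, 0, 0, 0, 1, 1, 1, 1]
ds = PairDataset(imgs, cls)
x1, x2, y = ds[0]
```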


2. Network architecture

A plain CNN is used; each convolution is followed by a ReLU activation and batch normalization (BatchNorm).

import torch
import torch.nn as nn

class SiameseNetwork(nn.Module):
    def __init__(self):
        super(SiameseNetwork, self).__init__()
        # Three conv blocks; ReflectionPad2d(1) + 3x3 conv keeps the spatial size.
        self.cnn1 = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(1, 4, kernel_size=3),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(4),

            nn.ReflectionPad2d(1),
            nn.Conv2d(4, 8, kernel_size=3),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(8),

            nn.ReflectionPad2d(1),
            nn.Conv2d(8, 8, kernel_size=3),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(8),
        )

        # Fully connected head; 8*100*100 implies the input images are 100x100.
        self.fc1 = nn.Sequential(
            nn.Linear(8 * 100 * 100, 500),
            nn.ReLU(inplace=True),

            nn.Linear(500, 500),
            nn.ReLU(inplace=True),

            nn.Linear(500, 5))

    def forward_once(self, x):
        # Embed a single image into a 5-dimensional vector.
        output = self.cnn1(x)
        output = output.view(output.size(0), -1)
        output = self.fc1(output)
        return output

    def forward(self, input1, input2):
        # Both branches share the same weights.
        output1 = self.forward_once(input1)
        output2 = self.forward_once(input2)
        return output1, output2
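The 8*100*100 flatten size implies the inputs are resized to 100×100 before entering the network (an inference on my part; the raw AT&T images are 92×112). A standalone shape check, restating the architecture compactly so the snippet runs on its own:

```python
import torch
import torch.nn as nn

# Compact restatement of the architecture above, for a standalone check.
class SiameseNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        def block(cin, cout):
            # ReflectionPad2d(1) + 3x3 conv keeps the spatial size unchanged.
            return [nn.ReflectionPad2d(1), nn.Conv2d(cin, cout, kernel_size=3),
                    nn.ReLU(inplace=True), nn.BatchNorm2d(cout)]
        self.cnn1 = nn.Sequential(*block(1, 4), *block(4, 8), *block(8, 8))
        self.fc1 = nn.Sequential(
            nn.Linear(8 * 100 * 100, 500), nn.ReLU(inplace=True),
            nn.Linear(500, 500), nn.ReLU(inplace=True),
            nn.Linear(500, 5))

    def forward_once(self, x):
        return self.fc1(self.cnn1(x).view(x.size(0), -1))

    def forward(self, x1, x2):
        return self.forward_once(x1), self.forward_once(x2)

net = SiameseNetwork().eval()
with torch.no_grad():
    o1, o2 = net(torch.randn(2, 1, 100, 100), torch.randn(2, 1, 100, 100))
```

Each branch maps a (1, 100, 100) image to a 5-dimensional embedding, so a batch of two pairs yields two (2, 5) outputs.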

3. Contrastive loss

import torch
import torch.nn.functional as F

class ContrastiveLoss(torch.nn.Module):

    def __init__(self, margin=2.0):
        super(ContrastiveLoss, self).__init__()
        self.margin = margin

    def forward(self, output1, output2, label):
        # label == 0 for a "same person" pair, 1 for a "different person" pair.
        euclidean_distance = F.pairwise_distance(output1, output2)
        loss_contrastive = torch.mean((1-label) * torch.pow(euclidean_distance, 2) +
                                      (label) * torch.pow(torch.clamp(self.margin - euclidean_distance, min=0.0), 2))
        return loss_contrastive
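The behaviour of this loss can be checked numerically: with label 0 (same person), identical embeddings give near-zero loss and distant ones are penalized; with label 1 (different person), pairs farther apart than the margin contribute nothing. A self-contained check (the loss is restated inline so the snippet runs on its own):

```python
import torch
import torch.nn.functional as F

class ContrastiveLoss(torch.nn.Module):
    def __init__(self, margin=2.0):
        super().__init__()
        self.margin = margin

    def forward(self, output1, output2, label):
        d = F.pairwise_distance(output1, output2)
        return torch.mean((1 - label) * d.pow(2) +
                          label * torch.clamp(self.margin - d, min=0.0).pow(2))

crit = ContrastiveLoss(margin=2.0)
a = torch.tensor([[1.0, 0.0]])
b = torch.tensor([[1.0, 0.0]])   # identical to a
c = torch.tensor([[4.0, 0.0]])   # distance 3 from a, beyond the margin of 2

same = crit(a, b, torch.tensor([0.0]))      # same pair, identical: ~0
far_same = crit(a, c, torch.tensor([0.0]))  # same pair but far apart: ~9
far_diff = crit(a, c, torch.tensor([1.0]))  # different pair beyond margin: 0
```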

4. Training

batch size = 64

epochs = 200

Training uses the .bmp images.
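With those settings the training loop follows the standard PyTorch pattern: zero the gradients, embed both images, compute the contrastive loss, backpropagate, and step the optimizer. A minimal sketch on random tensors (the toy linear embedding and the learning rate are my own choices for illustration; the real code feeds AT&T image pairs through a DataLoader):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for the Siamese network: a single shared linear embedding.
net = nn.Linear(10, 5)
optimizer = torch.optim.Adam(net.parameters(), lr=5e-4)

def contrastive(o1, o2, label, margin=2.0):
    d = F.pairwise_distance(o1, o2)
    return torch.mean((1 - label) * d.pow(2) +
                      label * torch.clamp(margin - d, min=0.0).pow(2))

# One fixed random batch of 64 pairs (batch size from the post).
x1, x2 = torch.randn(64, 10), torch.randn(64, 10)
label = torch.randint(0, 2, (64,)).float()

losses = []
for step in range(20):               # the post trains for 200 epochs
    optimizer.zero_grad()
    loss = contrastive(net(x1), net(x2), label)
    loss.backward()
    losses.append(loss.item())
    optimizer.step()
```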

5. Testing

Ten image pairs are used for testing.
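At test time each pair is scored by the Euclidean distance between the two embeddings: a small distance means "same person". A sketch of the decision step (the threshold value is hypothetical, mine rather than the post's; in practice it would be tuned on a validation set):

```python
import torch
import torch.nn.functional as F

def same_person(emb1, emb2, threshold=1.0):
    """Return True when the embedding distance falls below the threshold.

    The threshold of 1.0 is an illustrative value, not from the original post.
    """
    d = F.pairwise_distance(emb1, emb2).item()
    return d < threshold

# Three hand-made 5-d embeddings standing in for network outputs.
a = torch.tensor([[0.1, 0.2, 0.0, 0.0, 0.0]])
b = torch.tensor([[0.1, 0.2, 0.0, 0.1, 0.0]])   # close to a: same person
c = torch.tensor([[2.0, -1.0, 0.5, 0.3, 0.9]])  # far from a: different person
```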

The code the original author provided contains a few errors; I have fixed some of them. If you spot any remaining problems, corrections are welcome. Code link.

Google Drive: weights link

Baidu Pan: weights link
