[Dlib] Fine-tuning with dlib_face_recognition_resnet_model_v1.dat does not work

Copyright notice: this is an original post by the author; do not reproduce without permission. https://blog.csdn.net/u010168781/article/details/90201472
1. Problem Description

dlib's official face recognition model is a ResNet trained on roughly 3 million face images; its network parameters are stored in dlib_face_recognition_resnet_model_v1.dat.
In my tests it reaches 99.13% accuracy on the LFW dataset, but on my own data the accuracy is noticeably lower, so I wanted to fine-tune the model on my own data.
After a lot of fumbling around, I ultimately failed (I am an AI novice).

2. Cause Analysis

The cause is that the training network and the testing network are different types, and dlib_face_recognition_resnet_model_v1.dat stores the serialized parameters of the testing network.
The two network types are defined as follows:

template <template <int,template<typename>class,int,typename> class block, int N, template<typename>class BN, typename SUBNET>
using residual = add_prev1<block<N,BN,1,tag1<SUBNET>>>;

template <template <int,template<typename>class,int,typename> class block, int N, template<typename>class BN, typename SUBNET>
using residual_down = add_prev2<avg_pool<2,2,2,2,skip1<tag2<block<N,BN,2,tag1<SUBNET>>>>>>;

template <int N, template <typename> class BN, int stride, typename SUBNET> 
using block  = BN<con<N,3,3,1,1,relu<BN<con<N,3,3,stride,stride,SUBNET>>>>>;


template <int N, typename SUBNET> using res       = relu<residual<block,N,bn_con,SUBNET>>;
template <int N, typename SUBNET> using ares      = relu<residual<block,N,affine,SUBNET>>;
template <int N, typename SUBNET> using res_down  = relu<residual_down<block,N,bn_con,SUBNET>>;
template <int N, typename SUBNET> using ares_down = relu<residual_down<block,N,affine,SUBNET>>;

// ----------------------------------------------------------------------------------------

template <typename SUBNET> using level0 = res_down<256,SUBNET>;
template <typename SUBNET> using level1 = res<256,res<256,res_down<256,SUBNET>>>;
template <typename SUBNET> using level2 = res<128,res<128,res_down<128,SUBNET>>>;
template <typename SUBNET> using level3 = res<64,res<64,res<64,res_down<64,SUBNET>>>>;
template <typename SUBNET> using level4 = res<32,res<32,res<32,SUBNET>>>;

template <typename SUBNET> using alevel0 = ares_down<256,SUBNET>;
template <typename SUBNET> using alevel1 = ares<256,ares<256,ares_down<256,SUBNET>>>;
template <typename SUBNET> using alevel2 = ares<128,ares<128,ares_down<128,SUBNET>>>;
template <typename SUBNET> using alevel3 = ares<64,ares<64,ares<64,ares_down<64,SUBNET>>>>;
template <typename SUBNET> using alevel4 = ares<32,ares<32,ares<32,SUBNET>>>;


// training network type
using net_type = loss_metric<fc_no_bias<128,avg_pool_everything<
                            level0<
                            level1<
                            level2<
                            level3<
                            level4<
                            max_pool<3,3,2,2,relu<bn_con<con<32,7,7,2,2,
                            input_rgb_image
                            >>>>>>>>>>>>;

// testing network type (replaced batch normalization with fixed affine transforms)
using anet_type = loss_metric<fc_no_bias<128,avg_pool_everything<
                            alevel0<
                            alevel1<
                            alevel2<
                            alevel3<
                            alevel4<
                            max_pool<3,3,2,2,relu<affine<con<32,7,7,2,2,
                            input_rgb_image
                            >>>>>>>>>>>>;

The main difference between the training network net_type and the testing network anet_type is that affine replaces bn_con (fixed affine transforms replace batch normalization).

If you try to deserialize dlib_face_recognition_resnet_model_v1.dat into a net_type, it throws an error and then crashes:

terminate called after throwing an instance of 'dlib::serialization_error'
  what():  An error occurred while trying to read the first object from the file dlib_face_recognition_resnet_model_v1.dat.
ERROR: Unexpected version 'affine_' found while deserializing dlib::bn_.

Aborted (core dumped)

The error message shows that dlib_face_recognition_resnet_model_v1.dat stores affine_ layers, i.e. the testing network.
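
For context, the fine-tuning attempt that triggers this error looks roughly like the sketch below (the trainer settings and mini-batch handling are just placeholders, not the exact code I ran); everything stops at the deserialize() call before training can even start.

#include <dlib/dnn.h>
#include <vector>
using namespace dlib;

// net_type / anet_type are assumed to be defined as above.

int main()
{
    net_type net;   // training network (bn_con layers)

    // This is where it crashes: the file stores affine_ layers, not bn_ layers.
    deserialize("dlib_face_recognition_resnet_model_v1.dat") >> net;

    // The usual metric-learning training setup; never reached because of the crash.
    dnn_trainer<net_type> trainer(net, sgd(0.0001, 0.9));
    trainer.set_learning_rate(0.01);
    trainer.be_verbose();

    std::vector<matrix<rgb_pixel>> mini_batch_samples;   // my own 150x150 face chips
    std::vector<unsigned long>     mini_batch_labels;    // person ids
    // ... fill a mini-batch from my own data, then call repeatedly:
    // trainer.train_one_step(mini_batch_samples, mini_batch_labels);
}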

The official dlib demo dnn_metric_learning_on_images_ex.cpp contains the line

anet_type testing_net = net;

which shows that a training network net_type can be converted automatically into a testing network anet_type.
This works because the dlib source provides a constructor that converts a bn_ layer into an affine_ layer:

    class affine_
    {
    public:
        template <
            layer_mode bnmode
            >
        affine_(
            const bn_<bnmode>& item
        )
     ...
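
So the conversion only works in one direction: an affine_ layer can be constructed from a bn_ layer, but there is no constructor going the other way. A minimal sketch of the direction that does work (the output file name is just an example):

net_type net;
// ... train net with a dnn_trainer, or otherwise obtain bn_-based weights ...
anet_type testing_net = net;                          // bn_con layers become affine layers
serialize("my_face_metric_net.dat") << testing_net;   // example output file name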

Deserializing dlib_face_recognition_resnet_model_v1.dat into the testing network anet_type works fine. If the testing network anet_type could be converted back into the training network net_type, fine-tuning would be possible.
As a beginner, I have not found a way to do that yet...
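
For completeness, the path that does work is to load the model into anet_type and use it to compute 128-D face descriptors, roughly like this (face_chip is assumed to be a 150x150 aligned face crop of type matrix<rgb_pixel>, prepared elsewhere):

anet_type net;
deserialize("dlib_face_recognition_resnet_model_v1.dat") >> net;   // loads without error

matrix<rgb_pixel> face_chip;                     // 150x150 aligned face crop (assumed already prepared)
matrix<float,0,1> descriptor = net(face_chip);   // 128-D embedding for this face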
