04-07 TensorFlow 2.0: Deep Neural Network + Batch Normalization + SELU Activation + Dropout

1. Deep Neural Network (DNN)

A DNN is a network formed by stacking many fully connected layers. More layers does not necessarily mean better performance; beyond a point it actually degrades, because:

  • more layers mean far more parameters
  • the gradients vanish as they propagate back (see the sketch below)
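
A minimal sketch of the second point (illustrative only, not from the original post; sigmoid is chosen because it makes the shrinkage easy to see):

import tensorflow as tf
from tensorflow import keras

# 20 sigmoid layers: gradient norms shrink toward the input layers
model = keras.Sequential([keras.layers.Flatten(input_shape=[28, 28])])
for _ in range(20):
    model.add(keras.layers.Dense(100, activation='sigmoid'))
model.add(keras.layers.Dense(10, activation='softmax'))

x = tf.random.normal([32, 28, 28])                      # fake batch of images
y = tf.random.uniform([32], maxval=10, dtype=tf.int32)  # fake labels
loss_fn = keras.losses.SparseCategoricalCrossentropy()
with tf.GradientTape() as tape:
    loss = loss_fn(y, model(x))
grads = tape.gradient(loss, model.trainable_variables)
for i, g in enumerate(grads[::2]):  # kernel gradients only (biases skipped)
    print('layer', i, 'gradient norm:', float(tf.norm(g)))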

2. Batch Normalization (BatchNormalization)

  • mitigates vanishing gradients (formula below)
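
For each mini-batch $B$, batch normalization standardizes a layer's inputs and then rescales them with two learned parameters $\gamma$ and $\beta$ (Ioffe & Szegedy, 2015):

$$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad y_i = \gamma \hat{x}_i + \beta$$

Keeping every layer's inputs near zero mean and unit variance stops the back-propagated gradient from shrinking layer after layer, which is how it mitigates vanishing gradients.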

3. A new activation function: SELU

  • shorter training time
  • better training performance (formula below)
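
SELU is a scaled exponential linear unit (Klambauer et al., 2017, "Self-Normalizing Neural Networks"):

$$\mathrm{selu}(x) = \lambda \begin{cases} x & x > 0 \\ \alpha (e^x - 1) & x \le 0 \end{cases}, \qquad \lambda \approx 1.0507,\; \alpha \approx 1.6733$$

The constants are derived so that activations converge toward zero mean and unit variance on their own; the paper pairs SELU with lecun_normal initialization and with AlphaDropout rather than ordinary Dropout.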

4. Dropout to prevent overfitting

Dropout is generally applied only to the last few fully connected layers; a usage sketch follows the list below.

  • Dropout
  • AlphaDropout, which behaves better because:
    1. the mean and variance of the inputs stay unchanged
    2. the normalization property is preserved
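
A minimal usage sketch (rates here are illustrative; it assumes a `model` being built with `keras.layers`, as in the full example):

model.add(keras.layers.Dropout(rate=0.3))       # standard dropout: zeroes activations at random
model.add(keras.layers.AlphaDropout(rate=0.3))  # SELU-aware: preserves mean and variance

The complete, runnable example:
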
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf
from tensorflow import keras
# Print the name and version of each library
print(sys.version_info)
for module in mpl, np, pd, sklearn, tf, keras:
    print(module.__name__, module.__version__)
 
# Pin the process to one GPU and allow GPU memory to grow on demand
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = '1'
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.compat.v1.Session(config=config)
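
The compat.v1 session above is the TF1-style way to request memory growth; in native TF 2.0 eager code, the same request can go through tf.config (a sketch, assuming at least one GPU is visible):

for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)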

# Load the fashion_mnist dataset and split off a validation set
fashion_mnist = keras.datasets.fashion_mnist
(x_train_all, y_train_all), (x_test_all, y_test_all) = fashion_mnist.load_data()
x_valid, x_train = x_train_all[:5000], x_train_all[5000:]  # first 5000 images for validation
y_valid, y_train = y_train_all[:5000], y_train_all[5000:]
x_test, y_test = x_test_all, y_test_all

print(x_train.shape,y_train.shape)
print(x_valid.shape,y_valid.shape)
print(x_test.shape,y_test.shape)

# Standardize the data. StandardScaler works on 2-D arrays, so the images are
# flattened to a single column of pixels, scaled with the training set's
# statistics, and reshaped back to 28x28. Fit on train only; valid/test reuse it.
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(
    x_train.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28)

x_valid_scaled = scaler.transform(
    x_valid.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28)

x_test_scaled = scaler.transform(
    x_test.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28)


# Build the model
model = tf.keras.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
for _ in range(10):  # DNN: a stack of fully connected layers
    model.add(keras.layers.Dense(100, activation='selu'))  # SELU in place of ReLU
    model.add(keras.layers.BatchNormalization())  # batch normalization is just one extra layer
    '''
    The activation can instead be placed after batch normalization:
    model.add(keras.layers.Dense(100))
    model.add(keras.layers.BatchNormalization())
    model.add(keras.layers.Activation('relu'))
    '''
model.add(keras.layers.AlphaDropout(rate=0.5))  # dropout only before the output layer
model.add(keras.layers.Dense(10, activation='softmax'))
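
One caveat worth knowing: SELU is designed to be self-normalizing on its own, and the SELU paper's recipe uses kernel_initializer='lecun_normal' plus AlphaDropout without any BatchNormalization; the loop above stacks both so that every technique from this lesson appears in one model.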


# Compile the model
model.compile(loss='sparse_categorical_crossentropy',
               optimizer='sgd',
               metrics=['accuracy'])
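
sparse_categorical_crossentropy matches the integer labels used here; with one-hot labels the non-sparse loss would apply instead (a sketch, not part of the original flow):

y_train_onehot = keras.utils.to_categorical(y_train, num_classes=10)
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])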

# Train the model; history.history records the loss and metrics per epoch
# callbacks: TensorBoard, EarlyStopping, ModelCheckpoint
logdir = './07-callbacks'
if not os.path.exists(logdir):
    os.makedirs(logdir)
output_model_file = os.path.join(logdir, 'fashion_mnist_model.h5')
callbacks = [
    keras.callbacks.TensorBoard(logdir),
    keras.callbacks.ModelCheckpoint(output_model_file,
                                    save_best_only=True),  # keep only the best model
    keras.callbacks.EarlyStopping(patience=5,min_delta=1e-3)
]
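
With these settings, EarlyStopping monitors val_loss by default and stops once it has improved by less than min_delta (1e-3) for patience (5) consecutive epochs; the best model is still on disk thanks to ModelCheckpoint.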

history = model.fit(x_train_scaled, y_train,
                    epochs=10,
                    validation_data=(x_valid_scaled, y_valid),
                    callbacks=callbacks)

# Plot the learning curves from history
def plot_learning_curves(history):
    pd.DataFrame(history.history).plot(figsize=(8,5))
    plt.grid(True)
    plt.gca().set_ylim(0,1)
    plt.show()
plot_learning_curves(history)

# Evaluate on the test set
model.evaluate(x_test_scaled,y_test)
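
Since ModelCheckpoint saved the best weights to disk, that checkpoint can be reloaded and evaluated too (a short sketch):

best_model = keras.models.load_model(output_model_file)
best_model.evaluate(x_test_scaled, y_test)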

Reposted from blog.csdn.net/qq_44783177/article/details/108088777