Neural Network Building: Forward Propagation and Backpropagation

Neural networks:

       With the rise of artificial intelligence, Python has gradually become the main language for AI work. Because Python is easy to write and its ecosystem of packages is very rich, I consider it the best language for writing neural networks as well. A neural network consists of an input layer, hidden layers, and an output layer; the more hidden layers there are, the more complex the network, the higher its computational cost, and naturally the longer the computation takes. Put simply, the value of each neuron in a hidden layer is the weighted sum of the values of the nodes connected to it by incoming edges (plus a bias, passed through an activation function). Every hidden layer works this way, and the output layer finally produces the result, so the whole network forms a mapping function from inputs to outputs.
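
To make the weighted-sum description above concrete, here is a minimal NumPy sketch (my own illustration, not part of the original post) of how one hidden layer computes its neurons' values from the previous layer; the names x, W, b and the choice of a ReLU activation are assumptions for the example:

import numpy as np

x = np.array([0.5, -1.2])            # outputs of the previous layer (2 nodes)
W = np.array([[0.1, 0.4, -0.3],
              [0.7, -0.2, 0.5]])     # edge weights: 2 inputs -> 3 hidden neurons
b = np.array([0.05, 0.05, 0.05])     # one bias per hidden neuron

z = x @ W + b                        # each neuron sums weight * incoming value, plus its bias
a = np.maximum(z, 0)                 # ReLU activation gives each neuron's output value
print(a)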

Environment for building the neural network:

Python + numpy + tensorflow + matplotlib

numpy is a package for scientific computing; tensorflow is a very popular AI framework that Google open-sourced a few years ago; and matplotlib is a plotting tool used mainly for data visualization, presenting the shape and trend of the data to readers or users more intuitively through charts such as line charts, pie charts, and histograms. Instructions for installing these packages and setting up the environment are easy to find online (e.g., on Baidu). Below I will briefly walk through building a simple neural network; first, a quick check that the environment is in place.
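
The snippet below (my own addition, not from the original post) simply imports the three packages and prints their versions to confirm they are installed; note that the example in this post was written against the TensorFlow 1.x API:

import numpy as np
import tensorflow as tf
import matplotlib

print(np.__version__, tf.__version__, matplotlib.__version__)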

Building a neural network follows the idea of a computation graph and is divided into forward propagation and backward propagation. The purpose of forward propagation is to build the computation graph: a complete front-to-back process in which data enters at the input layer, is transformed in the hidden layers, and comes out at the output layer, establishing the whole chain of computation. The purpose of backward propagation is to work from back to front, optimizing the parameters (the weights) so as to tune the whole network; in other words, it is how the model is trained. Below, after a short hand-worked sketch of one such update, is the first neural network example I wrote when learning neural networks:
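
To show numerically what "optimizing the weights from back to front" means, here is a minimal sketch of a single gradient-descent update (my own illustration with made-up numbers, not the training code below): for one weight w with squared-error loss L = (w*x - y)^2, the gradient dL/dw is computed in the backward pass and the weight is moved against it.

# One hand-computed backward-propagation step on a single weight (illustrative values)
x, y = 2.0, 3.0            # one training sample: input and target
w = 0.5                    # current weight
lr = 0.1                   # learning rate

pred = w * x               # forward pass: prediction = 1.0
loss = (pred - y) ** 2     # loss = 4.0
grad = 2 * (pred - y) * x  # backward pass: dL/dw = -8.0
w = w - lr * grad          # update: w becomes 1.3, which lowers the loss
print(loss, w)

The full TensorFlow example follows.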

#coding:utf-8
# Import modules and generate a simulated dataset
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

seed = 2
def generateds():
    # Create a random-number generator based on the seed
    rdm = np.random.RandomState(seed)
    # Draw a 600 x 2 matrix of samples from a standard normal distribution
    X = rdm.randn(600, 2)
    # Label each point: 1 if it lies inside the circle x0^2 + x1^2 = 2, else 0
    Y_ = [int(x0*x0 + x1*x1 < 2) for (x0, x1) in X]
    # Colour for plotting: 'red' for label 1, 'blue' for label 0
    Y_c = [['red' if y else 'blue'] for y in Y_]
    # Reshape X to n rows x 2 columns and Y_ to n rows x 1 column (-1 lets numpy infer the row count)
    X = np.vstack(X).reshape(-1, 2)
    Y_ = np.vstack(Y_).reshape(-1, 1)
    # np.squeeze() removes the size-1 dimensions, giving a flat array of colour strings
    plt.scatter(X[:,0], X[:,1], c=np.squeeze(Y_c))
    plt.show()
    return X, Y_, Y_c

# Define the forward-propagation process (builds the computation graph)
def get_weight(shape, regularizer):
    # Weight matrix initialised from a normal distribution; its L2 penalty is
    # added to the 'losses' collection so it can later be summed into the total loss
    w = tf.Variable(tf.random_normal(shape), dtype=tf.float32)
    tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(regularizer)(w))
    return w

def get_bias(shape):
    b = tf.Variable(tf.constant(0.01, shape=shape))
    return b

def forward(x, regularizer):
    w1 = get_weight([2,11], regularizer)
    b1 = get_bias([11])
    # b1 is broadcast so it is added to every row of tf.matmul(x, w1)
    y1 = tf.nn.relu(tf.matmul(x, w1) + b1)
    
    w2 = get_weight([11,1], regularizer)
    b2 = get_bias([1])
    y = tf.matmul(y1, w2) + b2
    
    return y

STEPS = 70000
BATCH_SIZE = 30
LEARNING_RATE_BASE = 0.001
LEARNING_RATE_DECAY = 0.999
REGULARIZER = 0.01

def backward():
    x = tf.placeholder(tf.float32, shape=(None, 2))
    y_ = tf.placeholder(tf.float32, shape=(None, 1))
    
    X, Y_, Y_c = generateds()
    
    y = forward(x, REGULARIZER)
    
    global_step = tf.Variable(0, trainable=False)
    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE,
        global_step,
        300/BATCH_SIZE,
        LEARNING_RATE_DECAY,
        staircase=True)
    
    # Define the loss function: MSE plus the collected L2 regularization terms
    loss_mse = tf.reduce_mean(tf.square(y-y_))
    loss_total = loss_mse + tf.add_n(tf.get_collection('losses'))
    
    # Define the backward-propagation (training) step; the loss includes regularization.
    # global_step is passed so that the exponential learning-rate decay actually advances.
    train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss_total, global_step=global_step)
#     train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss_total)
    
    # Create a session and run the training loop
    with tf.Session() as sess:
        init_op = tf.global_variables_initializer()
        sess.run(init_op)
        
        for i in range(STEPS):
            # Cycle mini-batches of BATCH_SIZE through the first 300 samples
            start = (i*BATCH_SIZE)%300
            end = start + BATCH_SIZE
            sess.run(train_step, feed_dict={x: X[start:end], y_: Y_[start:end]})
            if i%2000 == 0:
                loss_v = sess.run(loss_total, feed_dict={x: X, y_:Y_})
                print("After %d steps, loss is %f" % (i, loss_v))
        
        # Generate a grid of points covering [-3, 3] x [-3, 3] with step 0.01 in both directions
        xx, yy = np.mgrid[-3:3:.01, -3:3:.01]
        # np.ravel() flattens an array to 1-D (returning a view of the original where possible)
        # ndarray.flatten() also flattens to 1-D, but always returns a copy
        # np.c_[] concatenates arrays along the second axis (stacks them as columns)
        # np.r_[] concatenates arrays along the first axis (stacks them as rows)
        
        # Flatten xx and yy and combine them into a two-column matrix of grid coordinates
        grid = np.c_[xx.ravel(), yy.ravel()]
        # Feed the grid points to the network; probs is the network's output at each point
        probs = sess.run(y, feed_dict={x: grid})
        # Reshape probs to the same shape as xx
        probs = probs.reshape(xx.shape)
    
    plt.scatter(X[:,0], X[:,1], c=np.squeeze(Y_c))
    # plt.contour(): given the x, y coordinates and a height at each point, draw the contour
    # line(s) at the heights listed in levels - here the 0.5 decision boundary
    plt.contour(xx, yy, probs, levels=[.5])
    plt.show()
            
if __name__ == '__main__':
    backward()
            
            

The output is as follows:

After 0 steps, loss is 5.060760
After 2000 steps, loss is 0.195497
After 4000 steps, loss is 0.143852
After 6000 steps, loss is 0.118596
After 8000 steps, loss is 0.103567
After 10000 steps, loss is 0.096369
After 12000 steps, loss is 0.092850
After 14000 steps, loss is 0.091428
After 16000 steps, loss is 0.090583
After 18000 steps, loss is 0.090211
After 20000 steps, loss is 0.089995
After 22000 steps, loss is 0.089809
After 24000 steps, loss is 0.089600
After 26000 steps, loss is 0.089424
After 28000 steps, loss is 0.089278
After 30000 steps, loss is 0.089203
After 32000 steps, loss is 0.089093
After 34000 steps, loss is 0.088999
After 36000 steps, loss is 0.088909
After 38000 steps, loss is 0.088817
After 40000 steps, loss is 0.088766
After 42000 steps, loss is 0.088703
After 44000 steps, loss is 0.088323
After 46000 steps, loss is 0.088212
After 48000 steps, loss is 0.088162
After 50000 steps, loss is 0.088130
After 52000 steps, loss is 0.088103
After 54000 steps, loss is 0.088094
After 56000 steps, loss is 0.088094
After 58000 steps, loss is 0.088096
After 60000 steps, loss is 0.088093
After 62000 steps, loss is 0.088096
After 64000 steps, loss is 0.088096
After 66000 steps, loss is 0.088094
After 68000 steps, loss is 0.088097
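
The loss drops quickly at first and then plateaus around 0.088, so longer training brings little further improvement. One caveat about the code itself: it is written against the TensorFlow 1.x API (tf.placeholder, tf.Session, tf.contrib), and tf.contrib was removed in TensorFlow 2. If contrib is unavailable, the same L2 penalty can be built with tf.nn.l2_loss, since tf.contrib.layers.l2_regularizer(scale)(w) computes scale * sum(w**2) / 2, which equals scale * tf.nn.l2_loss(w). A contrib-free sketch of get_weight along those lines (my adaptation, still using the 1.x API, not the original author's code):

import tensorflow as tf

def get_weight(shape, regularizer):
    # Same behaviour as the contrib-based version: a random-normal weight matrix
    # whose L2 penalty (regularizer * sum(w**2) / 2) is added to the 'losses' collection
    w = tf.Variable(tf.random_normal(shape), dtype=tf.float32)
    tf.add_to_collection('losses', regularizer * tf.nn.l2_loss(w))
    return w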


Reposted from blog.csdn.net/weixin_42414405/article/details/90641090