[Deep Learning Image Recognition Course] Learning TensorFlow from hello, world (input / math / linear functions / Softmax / cross-entropy / mini-batch)

I. Starting from hello, world


The printed string has a b prefix (it is a bytes object); calling decode() on the result of sess.run() makes it print normally.
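
A minimal sketch of the hello, world program this note refers to (assuming TensorFlow 1.x, where tf.Session is available; the variable name hello_constant is just illustrative):

import tensorflow as tf

# Create a constant string tensor
hello_constant = tf.constant('Hello World!')

with tf.Session() as sess:
    # Evaluate the tensor inside a session
    output = sess.run(hello_constant)
    print(output)           # b'Hello World!'  <- the b prefix (bytes)
    print(output.decode())  # Hello World!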

1. tensor

In TensorFlow, data is held in objects called tensors. tf.constant() returns a constant tensor whose value never changes.

2. session

A TensorFlow Session is an environment for running the graph of mathematical operations.


II. TensorFlow input

tf.placeholder() and the Session's feed_dict parameter.

Use feed_dict to set a single tensor:

x = tf.placeholder(tf.string)

with tf.Session() as sess:
    output = sess.run(x, feed_dict={x: 'Test String'})

feed_dict can also set multiple tensors:

x = tf.placeholder(tf.string)
y = tf.placeholder(tf.int32)
z = tf.placeholder(tf.float32)
with tf.Session() as sess:
    output = sess.run(x, feed_dict={x: 'Test String', y:123, z:45.67})
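
Note that the run above only fetches x. sess.run() can also fetch several tensors at once by passing a list; a small sketch reusing the x, y, z placeholders defined above:

with tf.Session() as sess:
    # Fetch all three placeholders in a single run() call
    out_x, out_y, out_z = sess.run([x, y, z],
                                   feed_dict={x: 'Test String', y: 123, z: 45.67})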


III. TensorFlow math

Addition: x = tf.add(5, 2)  # 7

Subtraction: x = tf.subtract(10, 4)  # 6

Multiplication: y = tf.multiply(2, 5)  # 10; for matrix multiplication use tf.matmul(A, B)

Division: z = tf.divide(x, y)

Type conversion: tf.subtract(tf.constant(2.0), tf.constant(1)) raises an error because the operand types do not match (float32 vs. int32).

tf.subtract(tf.cast(tf.constant(2.0), tf.int32), tf.constant(1)) works, because tf.cast() converts the float constant to int32 first.
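
A small sketch that strings these operations together (TensorFlow 1.x; the values 10 and 2 are arbitrary):

import tensorflow as tf

x = tf.constant(10)
y = tf.constant(2)

# Divide, then subtract 1; tf.cast keeps the operand types consistent
z = tf.subtract(tf.divide(x, y), tf.cast(tf.constant(1), tf.float64))

with tf.Session() as sess:
    print(sess.run(z))  # 4.0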


IV. Linear functions in TensorFlow

In y = xW + b, the weights and bias have to be updated during training, so they cannot be built from tensors whose values never change, such as tf.placeholder() and tf.constant(). This is where tf.Variable comes in.

(1) tf.Variable

    x = tf.Variable(5)

The tf.Variable class creates a tensor with a mutable initial value. It has to be initialized before use:

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
(2) tf.truncated_normal() generates random numbers from a truncated normal distribution:

n_features = 120
n_labels = 5
weights = tf.Variable(tf.truncated_normal((n_features, n_labels)))

Since the weights are already randomized, the bias does not need to be randomized as well; setting it to 0 is enough.
(3) tf.zeros

n_labels = 5
bias = tf.Variable(tf.zeros(n_labels))

The linear function y = xW + b then becomes:

tf.add(tf.matmul(x, w), b)
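
Putting the pieces together, a sketch of the full linear (logits) layer, assuming the input features arrive through a placeholder:

import tensorflow as tf

n_features = 120
n_labels = 5

# None leaves the batch size open
x = tf.placeholder(tf.float32, [None, n_features])

# Weights drawn from a truncated normal, bias initialized to zero
w = tf.Variable(tf.truncated_normal((n_features, n_labels)))
b = tf.Variable(tf.zeros(n_labels))

# y = xW + b
logits = tf.add(tf.matmul(x, w), b)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # sess.run(logits, feed_dict={x: feature_batch}) would evaluate the layer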


V. TensorFlow Softmax

softmax: the best output activation function for multi-class prediction.


x = tf.nn.softmax([2.0, 1.0, 0.2])
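
Evaluated in a session this gives roughly [0.652, 0.240, 0.108]; a quick sketch using a placeholder for the logits:

import tensorflow as tf

logit_data = [2.0, 1.0, 0.2]
logits = tf.placeholder(tf.float32)

softmax = tf.nn.softmax(logits)

with tf.Session() as sess:
    output = sess.run(softmax, feed_dict={logits: logit_data})
    print(output)  # approximately [0.652 0.240 0.108]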


VI. TensorFlow cross-entropy


import tensorflow as tf

softmax_data = [0.7, 0.2, 0.1]
one_hot_data = [1.0, 0.0, 0.0]

softmax = tf.placeholder(tf.float32)
one_hot = tf.placeholder(tf.float32)

# Cross entropy: -sum(one_hot * log(softmax))
cross_entropy = -tf.reduce_sum(tf.multiply(one_hot, tf.log(softmax)))

with tf.Session() as sess:
    output = sess.run(cross_entropy, feed_dict={one_hot: one_hot_data, softmax: softmax_data})
    print(output)
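
For this data the printed value is about 0.35667, i.e. -1.0 * ln(0.7): only the first term of the sum is non-zero, because the other one-hot entries are 0.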


VII. mini-batch

Mini-batching is combined with SGD (stochastic gradient descent). Before each training epoch the data is randomly shuffled and then split into mini-batches, and gradient descent updates the weights once per mini-batch. Because the batches are random, the method is called stochastic gradient descent.
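
Because the last batch can be smaller than batch_size, the placeholders that receive the batches usually leave the batch dimension open with None. A small sketch (the sizes 784 and 10 are just MNIST-style examples, not from this quiz):

# None allows each batch to have a different number of samples
features = tf.placeholder(tf.float32, [None, 784])
labels = tf.placeholder(tf.float32, [None, 10])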

from quiz import batches
from pprint import pprint

# 4 Samples of features
example_features = [
    ['F11','F12','F13','F14'],
    ['F21','F22','F23','F24'],
    ['F31','F32','F33','F34'],
    ['F41','F42','F43','F44']]
# 4 Samples of labels
example_labels = [
    ['L11','L12'],
    ['L21','L22'],
    ['L31','L32'],
    ['L41','L42']]

# PPrint prints data structures like 2d arrays, so they are easier to read
pprint(batches(3, example_features, example_labels))
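
The batches function imported from quiz above can be implemented roughly as follows: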
def batches(batch_size, features, labels):
    """
    Create batches of features and labels
    :param batch_size: The batch size
    :param features: List of features
    :param labels: List of labels
    :return: Batches of (Features, Labels)
    """
    assert len(features) == len(labels)
    # Slice the features and labels into consecutive batches of batch_size
    output_batches = []
    
    sample_size = len(features)
    for start_i in range(0, sample_size, batch_size):
        end_i = start_i + batch_size
        batch = [features[start_i:end_i], labels[start_i:end_i]]
        output_batches.append(batch)
        
    return output_batches
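
With batch_size = 3 and the 4 samples above, the pprint call shows two batches: the first holds samples 1-3, the second holds the remaining sample. The structure is roughly:

[
    # Batch 1: features and labels for the first three samples
    [
        [['F11', 'F12', 'F13', 'F14'], ['F21', 'F22', 'F23', 'F24'], ['F31', 'F32', 'F33', 'F34']],
        [['L11', 'L12'], ['L21', 'L22'], ['L31', 'L32']]
    ],
    # Batch 2: the leftover sample
    [
        [['F41', 'F42', 'F43', 'F44']],
        [['L41', 'L42']]
    ]
]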

Reposted from blog.csdn.net/weixin_41770169/article/details/80195823