TensorFlow in Practice, Chapter 6 -- Understanding tf.nn.dropout

tf.nn.dropout is a TensorFlow function used to prevent or mitigate overfitting; it is typically applied to fully connected layers.


  • tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None)
  • x: the input tensor
  • keep_prob: a float, the probability that each element is kept
  • noise_shape: a 1-D int32 tensor giving the shape of the randomly generated keep/drop flags
  • If a neuron is dropped, its output y is 0;
  • If a neuron is kept, its output is scaled up to y = x / keep_prob, so the expected value of the output matches the input
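The keep/drop rule above can be sketched in plain NumPy (a hypothetical re-implementation of inverted dropout for illustration, not TensorFlow's actual code):

```python
import numpy as np

def dropout_sketch(x, keep_prob, rng=None):
    """Hypothetical NumPy sketch of inverted dropout."""
    rng = rng or np.random.default_rng(0)
    # Keep each element independently with probability keep_prob
    mask = rng.random(x.shape) < keep_prob
    # Kept elements are scaled by 1/keep_prob; dropped elements become 0
    return np.where(mask, x / keep_prob, 0.0)

x = np.arange(25, dtype=np.float32).reshape(5, 5)
y = dropout_sketch(x, keep_prob=0.5)
# Every element of y is either 0 or exactly 2x its input value
```

The 1/keep_prob scaling is why the surviving values in the output below are doubled: it keeps the expected sum of activations unchanged between training and inference.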
# -*- coding: utf-8 -*-
"""
Created on Tue Jul  3 19:18:50 2018

@author: muli
"""

"""
测试Tensor经过dropout()的效果:
    1.输入与输出的Tensor的shape相同;
    2.随机使某些元素值为0,非零元素为:对应值/keep_prob
"""

import tensorflow as tf
import numpy as np

# Clear the default graph stack and set a fresh graph as the default
tf.reset_default_graph()

dropout = tf.placeholder(tf.float32)
x = tf.reshape(np.array(range(25), dtype=np.float32), [5, 5])
y = tf.nn.dropout(x, dropout)
#print(x, y)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(sess.run(x))
    print("------------------")
    print(sess.run(y, feed_dict={dropout: 0.5}))
  • Output:
[[ 0.  1.  2.  3.  4.]
 [ 5.  6.  7.  8.  9.]
 [10. 11. 12. 13. 14.]
 [15. 16. 17. 18. 19.]
 [20. 21. 22. 23. 24.]]
------------------
[[ 0.  2.  0.  6.  0.]
 [10.  0. 14. 16.  0.]
 [ 0. 22.  0.  0.  0.]
 [ 0.  0.  0. 36. 38.]
 [ 0.  0. 44.  0. 48.]]
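The noise_shape parameter was listed above but not exercised in the example. Its effect can also be sketched in NumPy (again a hypothetical re-implementation for illustration): dimensions of size 1 in noise_shape share a single keep/drop decision, which the mask then broadcasts across.

```python
import numpy as np

def dropout_with_noise_shape(x, keep_prob, noise_shape, rng=None):
    """Hypothetical sketch of dropout with a broadcast keep/drop mask."""
    rng = rng or np.random.default_rng(1)
    # One random flag per entry of noise_shape, not per entry of x
    mask = rng.random(noise_shape) < keep_prob
    # Broadcasting the mask over x makes whole slices share one decision
    return np.where(mask, x / keep_prob, 0.0)

x = np.arange(25, dtype=np.float32).reshape(5, 5)
# noise_shape=(5, 1): one flag per row, so entire rows are kept or dropped
y = dropout_with_noise_shape(x, 0.5, noise_shape=(5, 1))
```

With noise_shape=(5, 1), each output row is either all zeros or the whole input row scaled by 1/keep_prob, which is how tf.nn.dropout drops entire rows, columns, or channels at once.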

Reposted from blog.csdn.net/mr_muli/article/details/80903330