TensorFlow Function Usage Notes (continuously expanded)
1. tf.clip_by_value
tf.clip_by_value(v, min, max):
Given a tensor v, clips every element of v into the range [min, max]: elements smaller than min are set to min, and elements larger than max are set to max.
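The clipping behavior has the same semantics as NumPy's np.clip, so it can be sketched without a TensorFlow session (a sketch of the semantics, not TensorFlow itself):

```python
import numpy as np

# Mirror tf.clip_by_value(v, 1.0, 4.0): values below 1.0 become 1.0,
# values above 4.0 become 4.0, everything else passes through unchanged.
v = np.array([0.5, 2.0, 5.0, 3.5])
clipped = np.clip(v, 1.0, 4.0)
print(clipped)  # [1.  2.  4.  3.5]
```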
2. tf.reduce_mean
Computes the mean of a tensor's elements along a specified axis (dimension).
#Computes the mean of elements across dimensions of a tensor
def reduce_mean(input_tensor,
                axis=None,
                keepdims=None,
                name=None,
                reduction_indices=None,
                keep_dims=None):
Args:
- input_tensor: The tensor to reduce. Should have numeric type.
- axis: The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)].
- keepdims: If true, retains reduced dimensions with length 1.
- name: A name for the operation (optional).
- reduction_indices: The old (deprecated) name for axis.
- keep_dims: Deprecated alias for keepdims.
Returns:
The reduced tensor.
x = tf.constant([1, 0, 1, 0])
tf.reduce_mean(x) # 0
y = tf.constant([1., 0., 1., 0.])
tf.reduce_mean(y) # 0.5
x = tf.constant([[1., 1.], [2., 2.]])
tf.reduce_mean(x) # 1.5
tf.reduce_mean(x, 0) # [1.5, 1.5]; averages down each column, i.e. collapses the rows (dimension 0).
tf.reduce_mean(x, 1) # [1., 2.]; collapses the columns (dimension 1).
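The reductions above can be reproduced with NumPy's mean (a sketch of the axis semantics). One caveat: tf.reduce_mean keeps the input dtype, which is why the integer example above yields 0, whereas np.mean always promotes to float.

```python
import numpy as np

x = np.array([[1., 1.], [2., 2.]])

# No axis: mean over every element (tf.reduce_mean(x) -> 1.5).
total_mean = x.mean()
# axis=0 collapses the rows: column-wise means (tf.reduce_mean(x, 0)).
col_means = x.mean(axis=0)
# axis=1 collapses the columns: row-wise means (tf.reduce_mean(x, 1)).
row_means = x.mean(axis=1)
print(total_mean, col_means, row_means)  # 1.5 [1.5 1.5] [1. 2.]
```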
3. cross_entropy (cross entropy)
- tf.nn.sigmoid_cross_entropy_with_logits
- tf.nn.softmax_cross_entropy_with_logits
- tf.nn.sparse_softmax_cross_entropy_with_logits
- tf.nn.weighted_cross_entropy_with_logits
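As an illustration of the first variant, the numerically stable formula documented for tf.nn.sigmoid_cross_entropy_with_logits, max(x, 0) - x*z + log(1 + exp(-|x|)) for logits x and labels z, can be sketched in NumPy (the helper name is mine, not TensorFlow's):

```python
import numpy as np

def sigmoid_xent(logits, labels):
    # Numerically stable form from the TF docs:
    # max(x, 0) - x*z + log(1 + exp(-|x|)).
    # Using -|x| inside exp avoids overflow for large logits.
    x, z = np.asarray(logits, dtype=float), np.asarray(labels, dtype=float)
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

# A logit of 0 means sigmoid(0) = 0.5, so the loss is log(2) for either label.
loss = sigmoid_xent([0.0], [1.0])
print(loss)  # [0.69314718]
```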
4. tf.matmul(v1, v2):
Matrix multiplication, which differs from *:
* produces the element-wise product (each pair of corresponding elements is multiplied);
matmul performs true matrix multiplication.
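The difference is easy to see with NumPy, where * is element-wise and @ plays the role of tf.matmul (a sketch of the semantics):

```python
import numpy as np

a = np.array([[1., 2.], [3., 4.]])
b = np.array([[5., 6.], [7., 8.]])

elementwise = a * b  # like * on tensors: multiply matching positions
matrix = a @ b       # like tf.matmul(a, b): rows of a dotted with columns of b
print(elementwise)   # [[ 5. 12.] [21. 32.]]
print(matrix)        # [[19. 22.] [43. 50.]]
```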
5. tf.where/tf.greater
import tensorflow as tf
v1 = tf.constant([1., 2., 3.])
v2 = tf.constant([3., 1., 4.])
with tf.Session() as sess:
    Great = tf.greater(v1, v2)
    print(sess.run(Great))
    # [False  True False]
    Where = tf.where(Great, v1, v2)
    print(Where)
    # Tensor("Select:0", shape=(3,), dtype=float32)
    print(sess.run(Where))
    # [3. 2. 4.]
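The element-wise select performed by tf.greater plus tf.where matches NumPy's comparison operators and np.where, so the same result can be sketched without a session:

```python
import numpy as np

v1 = np.array([1., 2., 3.])
v2 = np.array([3., 1., 4.])

greater = v1 > v2                  # like tf.greater(v1, v2)
picked = np.where(greater, v1, v2) # where True take v1, else take v2
print(greater)  # [False  True False]
print(picked)   # [3. 2. 4.]
```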
6. tf.train.exponential_decay:
#Applies exponential decay to the learning rate.
def exponential_decay(learning_rate,
                      global_step,
                      decay_steps,
                      decay_rate,
                      staircase=False,
                      name=None):
Returns:
The function returns the decayed learning rate. It is computed as:
decayed_learning_rate = learning_rate *
                        decay_rate ^ (global_step / decay_steps)
Args:
- learning_rate: A scalar float32 or float64 Tensor or a Python number. The initial learning rate.
- global_step: A scalar int32 or int64 Tensor or a Python number. Global step to use for the decay computation. Must not be negative.
- decay_steps: A scalar int32 or int64 Tensor or a Python number. Must be positive. See the decay computation above.
- decay_rate: A scalar float32 or float64 Tensor or a Python number. The decay rate.
- staircase: Boolean. If True, decay the learning rate at discrete intervals.
- name: String. Optional name of the operation. Defaults to 'ExponentialDecay'.
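The decay formula, including the staircase behavior, can be sketched in plain Python (a sketch of the computation the op performs, not the TF implementation):

```python
import math

def exponential_decay(learning_rate, global_step, decay_steps,
                      decay_rate, staircase=False):
    # decayed_lr = learning_rate * decay_rate ** (global_step / decay_steps).
    # With staircase=True the exponent is floored, so the rate drops in
    # discrete jumps every decay_steps steps instead of decaying smoothly.
    exponent = global_step / decay_steps
    if staircase:
        exponent = math.floor(exponent)
    return learning_rate * decay_rate ** exponent

lr_smooth = exponential_decay(0.1, 50, 100, 0.96)        # 0.1 * 0.96 ** 0.5
lr_stair = exponential_decay(0.1, 50, 100, 0.96, True)   # 0.1 * 0.96 ** 0 == 0.1
```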
7. tf.argmax:
#Returns the index with the largest value across dimensions of a tensor
def argmax(input,
           axis=None,
           name=None,
           dimension=None,
           output_type=dtypes.int64):
Explanation: tf.argmax(V, 1):
V is a tensor; the 1 means the arg-max is taken only along dimension 1, i.e. within each row the index of the maximum value is returned.
Example:
import tensorflow as tf
V=tf.constant([[1,2,3],[2,3,4]])
Max=tf.argmax(V,1)
print(Max.eval(session=tf.Session()))
#[2 2]; each entry is the index of the maximum value in the corresponding row
Max2=tf.argmax(V,0)
print(Max2.eval(session=tf.Session()))
#[1 1 1]; each entry is the index of the maximum value in the corresponding column
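np.argmax follows the same axis convention, so the example above can be checked without a session (a sketch of the semantics; note np.argmax returns platform ints rather than int64 tensors):

```python
import numpy as np

V = np.array([[1, 2, 3], [2, 3, 4]])

row_max = np.argmax(V, axis=1)  # per row: index of each row's maximum
col_max = np.argmax(V, axis=0)  # per column: index of each column's maximum
print(row_max)  # [2 2]
print(col_max)  # [1 1 1]
```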