Copyright notice: this is an original article by the blogger ([email protected]); reproduction without permission is prohibited. https://blog.csdn.net/z_feng12489/article/details/90033434
6.4 Backpropagation for a Single-Output Perceptron
Single-output perceptron
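Before the code, a quick sketch of the gradients this example computes. With MSE loss (no 1/2 factor, matching tf.losses.MSE) and a sigmoid output, the chain rule gives:

```latex
L = (o - t)^2, \qquad o = \sigma(z), \qquad z = \sum_j x_j w_j + b
```

```latex
\frac{\partial L}{\partial b} = 2(o - t)\,\sigma'(z) = 2(o - t)\,o(1 - o),
\qquad
\frac{\partial L}{\partial w_j} = 2(o - t)\,o(1 - o)\,x_j
```

These are exactly the values GradientTape returns below.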
import tensorflow as tf

x = tf.random.normal([1, 3])
w = tf.ones([3, 1])
b = tf.ones([1])
y = tf.constant([1.])

with tf.GradientTape() as tape:
    tape.watch([w, b])         # w, b are plain tensors, so watch them explicitly
    o = tf.sigmoid(x @ w + b)  # sigmoid activation (the original's name "logits" is a misnomer)
    loss = tf.reduce_mean(tf.losses.MSE(y, o))

grads = tape.gradient(loss, [w, b])
print('w grad:', grads[0])
print('b grad:', grads[1])
w grad: tf.Tensor(
[[-0.10537013]
[ 0.11153423]
[ 0.32909304]], shape=(3, 1), dtype=float32)
b grad: tf.Tensor([-0.27427486], shape=(1,), dtype=float32)
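To sanity-check autodiff against the chain-rule formulas, the same gradients can be computed by hand; a minimal sketch (the random seed is an assumption added only for reproducibility, not part of the original):

```python
import tensorflow as tf

tf.random.set_seed(42)  # assumed seed, for reproducibility only

x = tf.random.normal([1, 3])
w = tf.ones([3, 1])
b = tf.ones([1])
t = tf.constant([1.])  # target

with tf.GradientTape() as tape:
    tape.watch([w, b])
    o = tf.sigmoid(x @ w + b)                   # shape (1, 1)
    loss = tf.reduce_mean(tf.losses.MSE(t, o))

grads = tape.gradient(loss, [w, b])

# Chain rule for L = (o - t)^2 with o = sigmoid(x @ w + b):
#   dL/db   = 2*(o - t) * o * (1 - o)
#   dL/dw_j = dL/db * x_j
db_manual = 2.0 * (o - t) * o * (1.0 - o)       # shape (1, 1)
dw_manual = tf.transpose(x) * db_manual         # shape (3, 1)

print(tf.reduce_max(tf.abs(grads[0] - dw_manual)))                   # ~0
print(tf.reduce_max(tf.abs(grads[1] - tf.reshape(db_manual, [1]))))  # ~0
```

The manual and autodiff gradients agree to floating-point precision, confirming the derivation above.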