TensorFlow Learning Notes, Demo003

TensorFlow's most important job is building neural networks. The basic theory of neural networks is the foundation for all the code below, but it is not covered again here.

Today we build a shallow neural network with two hidden layers: 300 nodes in the first and 100 in the second, plus a 10-node output layer. Every layer is fully connected, and the data set is MNIST.
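As a quick sanity check on the architecture above, the total number of trainable parameters can be computed directly in plain Python (each layer contributes n_in * n_out weights plus n_out biases):

```python
# Parameter count for the 784-300-100-10 fully connected network
layer_sizes = [28 * 28, 300, 100, 10]
n_params = sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
print(n_params)  # 235500 + 30100 + 1010 = 266610
```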

One option for simple networks is the high-level API (its fit/predict usage feels much like sklearn):

import tensorflow as tf
from sklearn.model_selection import train_test_split

# X and y are the feature matrix and label vector prepared beforehand
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=20)
feature_columns = tf.contrib.learn.infer_real_valued_columns_from_input(X_train)
dnn_clf = tf.contrib.learn.DNNClassifier(hidden_units=[10, 5], n_classes=2,
                                         feature_columns=feature_columns)
dnn_clf.fit(x=X_train, y=y_train, batch_size=10, steps=50)

More common, and far more flexible, is building every layer yourself with plain TensorFlow:

# train a DNN using plain TensorFlow
# define the hidden layers
import numpy as np
import tensorflow as tf

n_inputs = 28 * 28
n_hidden1s = 300
n_hidden2s = 100
n_outputs = 10
x = tf.placeholder(tf.float32, shape=(None, n_inputs), name="x")
y = tf.placeholder(tf.int64, shape=(None), name="y")

# helper that builds one fully connected layer; pass activate_function="relu"
# to apply the relu activation, otherwise the raw linear output is returned
def add_layer(X, n_input, n_output, activate_function=None):
    stddev = 2 / np.sqrt(n_input)  # scale the initial weights by the fan-in
    weights = tf.Variable(tf.truncated_normal((n_input, n_output), stddev=stddev), name="weight")
    bias = tf.Variable(tf.zeros(n_output), name="bias")  # one bias per node, not per sample: [0, 0, 0, ...]
    z = tf.matmul(X, weights) + bias
    if activate_function == "relu":
        return tf.nn.relu(z)
    else:
        return z
with tf.name_scope("dnn"):
    lay_out1 = add_layer(x, n_inputs, n_hidden1s, "relu")
    lay_out2 = add_layer(lay_out1, n_hidden1s, n_hidden2s, "relu")
    logits = add_layer(lay_out2, n_hidden2s, n_outputs)  # no activation on the output: raw logits
with tf.name_scope("loss"):
    # cross entropy, averaged over the batch
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
    loss = tf.reduce_mean(xentropy, name="loss")
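For intuition, what sparse_softmax_cross_entropy_with_logits computes for a single example can be sketched in plain Python (the logit values below are made up for illustration):

```python
import math

# Softmax over the logits, then the negative log-probability of the true class.
logits = [2.0, 1.0, 0.1]
label = 0
exps = [math.exp(v) for v in logits]
prob_true = exps[label] / sum(exps)
xentropy = -math.log(prob_true)  # small when the true class gets high probability
```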

learning_rate = 0.01
with tf.name_scope("train"):
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    training_op = optimizer.minimize(loss)

# the evaluation part
with tf.name_scope("eval"):
    correct = tf.nn.in_top_k(logits, y, 1)  # is the top logit the true class?
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

# set the number of epochs and the batch size
n_epochs = 400
batch_size = 50
init = tf.global_variables_initializer()

# load MNIST with the bundled helper
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")

with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        for iteration in range(mnist.train.num_examples // batch_size):
            X_batch, y_batch = mnist.train.next_batch(batch_size)
            sess.run(training_op, feed_dict={x: X_batch, y: y_batch})
        acc_train = accuracy.eval(feed_dict={x: X_batch, y: y_batch})
        print(epoch, "train accuracy:", acc_train)
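The GradientDescentOptimizer used above repeatedly nudges each variable against the gradient of the loss. The idea in one dimension, on a made-up quadratic loss (w - 3)^2 whose minimum is at w = 3:

```python
# Toy gradient descent: the loss (w - 3)**2 has its minimum at w = 3.
learning_rate = 0.1
w = 0.0
for _ in range(100):
    grad = 2 * (w - 3)       # derivative of (w - 3)**2
    w -= learning_rate * grad
print(round(w, 6))  # converges to 3.0
```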

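The eval step (tf.nn.in_top_k with k=1) amounts to checking whether the arg-max logit equals the label, and accuracy is the mean of those 0/1 results. A plain-Python sketch on made-up logits:

```python
logits_batch = [[0.1, 2.0, 0.3],   # predicts class 1
                [1.5, 0.2, 0.1],   # predicts class 0
                [0.3, 0.1, 2.2]]   # predicts class 2
labels = [1, 0, 1]                 # the last prediction is wrong
correct = [max(range(3), key=row.__getitem__) == label
           for row, label in zip(logits_batch, labels)]
accuracy = sum(correct) / len(correct)
print(accuracy)  # 2 of 3 correct
```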

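The epoch/mini-batch loop relies on splitting the training set into batch_size chunks each epoch. The bookkeeping can be sketched in plain Python (toy sizes, with random.shuffle standing in for MNIST's batch sampling):

```python
import random

n_examples = 10   # toy value; MNIST has 55000 training images
batch_size = 4
indices = list(range(n_examples))
random.seed(0)
batches = []
for epoch in range(2):
    random.shuffle(indices)                      # reshuffle every epoch
    for start in range(0, n_examples, batch_size):
        batches.append(indices[start:start + batch_size])
# 3 batches per epoch (4 + 4 + 2 examples), 2 epochs -> 6 batches
```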


Reposted from blog.csdn.net/hufanglei007/article/details/79702933