About Optimizers
#tensorflow
- tf.train.GradientDescentOptimizer
- tf.train.AdadeltaOptimizer
- tf.train.AdagradOptimizer
- tf.train.AdagradDAOptimizer
- tf.train.MomentumOptimizer
- tf.train.AdamOptimizer
- tf.train.FtrlOptimizer
- tf.train.ProximalGradientDescentOptimizer
- tf.train.ProximalAdagradOptimizer
- tf.train.RMSPropOptimizer
```python
import tensorflow as tf  # TensorFlow 1.x API

# `batch` is the global-step variable; BATCH_SIZE and train_size come
# from the surrounding training script (as in the TF MNIST example).
learning_rate = tf.train.exponential_decay(
    0.01,                # Base learning rate.
    batch * BATCH_SIZE,  # Current index into the dataset.
    train_size,          # Decay step.
    0.95,                # Decay rate.
    staircase=True)
# With staircase=True this evaluates to
#   lr = 0.01 * 0.95 ** floor((batch * BATCH_SIZE) / train_size),
# i.e. the rate drops by 5% after each full pass over the data.
optimizer = tf.train.AdadeltaOptimizer(
    learning_rate,
    rho=0.9,  # decay of Adadelta's running averages
).minimize(loss, global_step=batch)
```
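All of the optimizers listed above implement the same `Optimizer` interface, so swapping one for another only changes the constructor call. A minimal self-contained sketch, assuming TensorFlow 1.x; the toy regression problem here is made up purely for illustration:

```python
import tensorflow as tf  # assumes TensorFlow 1.x

# Toy problem (hypothetical): fit y = 2x with a single weight, just to
# show that every optimizer exposes the same minimize() interface.
x = tf.placeholder(tf.float32, shape=[None])
y = tf.placeholder(tf.float32, shape=[None])
w = tf.Variable(0.0)
loss = tf.reduce_mean(tf.square(w * x - y))

# Any optimizer from the list can be dropped in here; only the
# constructor arguments differ between them.
train_op = tf.train.MomentumOptimizer(learning_rate=0.1,
                                      momentum=0.9).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op, feed_dict={x: [1., 2., 3.], y: [2., 4., 6.]})
    print(sess.run(w))  # converges toward 2.0
```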
Test error on MNIST with each optimizer ("-" = no result recorded):

| Optimizer | MNIST test error |
| --- | --- |
| AdamOptimizer | 2.3% |
| MomentumOptimizer | 0.7% |
| GradientDescentOptimizer | - |
| AdadeltaOptimizer | 10.2% |
| AdagradOptimizer | 1.8% |
| AdagradDAOptimizer | - |
| FtrlOptimizer | - |
| ProximalGradientDescentOptimizer | 90.2% |
| ProximalAdagradOptimizer | 1.8% |
| RMSPropOptimizer | 1.4% |
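A comparison like the one above can be produced with a harness along the following lines. This is only a sketch: it trains a plain softmax regression for a fixed number of steps rather than the model behind the numbers in the table, so the exact percentages will differ; `OPTIMIZERS`, `mnist_test_error`, and all hyperparameters are illustrative choices, not from the original experiment.

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x

# Load MNIST and flatten images to 784-dim float vectors in [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype(np.float32) / 255.0
x_test = x_test.reshape(-1, 784).astype(np.float32) / 255.0
y_train = y_train.astype(np.int64)
y_test = y_test.astype(np.int64)

# Constructors under test; hyperparameters here are illustrative.
OPTIMIZERS = {
    'MomentumOptimizer': lambda lr: tf.train.MomentumOptimizer(lr, 0.9),
    'AdamOptimizer': lambda lr: tf.train.AdamOptimizer(lr),
    'RMSPropOptimizer': lambda lr: tf.train.RMSPropOptimizer(lr),
    'AdadeltaOptimizer': lambda lr: tf.train.AdadeltaOptimizer(lr, 0.9),
}

def mnist_test_error(make_optimizer, steps=2000, batch_size=100, lr=0.01):
    """Train a softmax regression with the given optimizer; return test error."""
    tf.reset_default_graph()
    x = tf.placeholder(tf.float32, [None, 784])
    y = tf.placeholder(tf.int64, [None])
    logits = tf.layers.dense(x, 10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)
    train_op = make_optimizer(lr).minimize(loss)
    accuracy = tf.reduce_mean(
        tf.cast(tf.equal(tf.argmax(logits, 1), y), tf.float32))
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for i in range(steps):
            lo = (i * batch_size) % len(x_train)
            sess.run(train_op, {x: x_train[lo:lo + batch_size],
                                y: y_train[lo:lo + batch_size]})
        return 1.0 - sess.run(accuracy, {x: x_test, y: y_test})

for name, make_opt in OPTIMIZERS.items():
    print('%s | %.1f%%' % (name, 100 * mnist_test_error(make_opt)))
```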