1. Environment
OS: Ubuntu 18.04
Environment: PyTorch & Anaconda3
Editor: Spyder
2. Code
Updated neural-network code: Demo 2, a visualized univariate linear regression in PyTorch.
Spyder cannot input Chinese, so please forgive the Chinglish in my comments.
There are plenty of univariate linear regression examples online; this demo adds the following:
- a switch for printing the loss
- matplotlib visualizations of three stages, output together for easy comparison
- changed from an interactive demo into one explained all the way through, reducing the learning workload
- my sweat of progress and my lost hair
Learning is hard; please credit the source when reposting.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
@ubuntu 18.04
@spyder editor
@author: ftimes
"""
import torch
import numpy as np
from torch.autograd import Variable
import matplotlib.pyplot as plt
def getNpArray():  # helper that generates random data (unused in this demo)
    return np.random.randn(15, 1)
X = np.array([[3.3], [4.4], [5.5], [6.71], [6.93], [4.168],
[9.779], [6.182], [7.59], [2.167], [7.042],
[10.791], [5.313], [7.997], [3.1]], dtype=np.float32)
Y = np.array([[1.7], [2.76], [2.09], [3.19], [1.694], [1.573],
[3.366], [2.596], [2.53], [1.221], [2.827],
[3.465], [1.65], [2.904], [1.3]], dtype=np.float32)
plt.figure(1)
plt.title('initial value')
plt.plot(X, Y, 'bo')
# convert numpy arrays to tensors
X = torch.from_numpy(X)
Y = torch.from_numpy(Y)
# define parameters a, b
a = Variable(torch.randn(1), requires_grad=True)
b = Variable(torch.zeros(1), requires_grad=True)
# wrap in Variables
X = Variable(X)
Y = Variable(Y)
def linear_model(x):
    return a * x + b
y_ = linear_model(X)
plt.figure(2)
plt.title('append estimated value')
plt.plot(X.data.numpy(), Y.data.numpy(), 'bo', label='real')
plt.plot(X.data.numpy(), y_.data.numpy(), 'ro', label='estimated')
plt.legend()
# define mean-squared-error loss
def computeloss(y_, Y):
    return torch.mean((y_ - Y) ** 2)
learningRate = 1e-2  # learning rate controls the update step size
# initialize: one backward pass and one manual gradient-descent step
loss = computeloss(y_, Y)
loss.backward()
a.data = a.data - learningRate * a.grad.data
b.data = b.data - learningRate * b.grad.data
print('Need u print loss? [y/n]')
order2 = input()
lossflag = False
if order2 == 'y':
    lossflag = True
times = 100
for i in range(times):
    y_ = linear_model(X)
    loss = computeloss(y_, Y)
    a.grad.zero_()  # must zero the grads, otherwise they accumulate
    b.grad.zero_()
    loss.backward()
    # grads of a, b are now computed; take one gradient-descent step
    a.data = a.data - learningRate * a.grad.data
    b.data = b.data - learningRate * b.grad.data
    if lossflag:
        print('epoch: {}, loss: {}'.format(i, loss.data))
plt.figure(3)
plt.title('after update')
plt.plot(X.data.numpy(), Y.data.numpy(), 'bo', label='real')
plt.plot(X.data.numpy(), y_.data.numpy(), 'ro', label='estimated')
plt.legend()
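For comparison, the same regression can be written with the torch.nn / torch.optim API, which replaces the manual parameter updates and the Variable wrapper (deprecated since PyTorch 0.4). This is only a sketch of an alternative, not part of the original demo; the names model, criterion, and optimizer are mine.

```python
import torch
import torch.nn as nn

# same data as the demo above
X = torch.tensor([[3.3], [4.4], [5.5], [6.71], [6.93], [4.168],
                  [9.779], [6.182], [7.59], [2.167], [7.042],
                  [10.791], [5.313], [7.997], [3.1]])
Y = torch.tensor([[1.7], [2.76], [2.09], [3.19], [1.694], [1.573],
                  [3.366], [2.596], [2.53], [1.221], [2.827],
                  [3.465], [1.65], [2.904], [1.3]])

model = nn.Linear(1, 1)        # y = a*x + b, parameters managed for us
criterion = nn.MSELoss()       # same mean-squared-error loss as computeloss
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

with torch.no_grad():
    initial_loss = criterion(model(X), Y).item()

for epoch in range(100):
    optimizer.zero_grad()      # replaces manual a.grad.zero_() / b.grad.zero_()
    loss = criterion(model(X), Y)
    loss.backward()
    optimizer.step()           # replaces the manual a.data / b.data updates
```

The optimizer hides the update rule `param = param - lr * grad` that the demo writes out by hand, so the loop body stays the same if you later switch to Adam or another optimizer.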
3. Example
Need u print loss? [y/n]
y
epoch: 0, loss: 0.964177709961411
epoch: 1, loss: 0.9637709491045978
epoch: 2, loss: 0.9633792086085372
epoch: 3, loss: 0.963001928222742
epoch: 4, loss: 0.9626385689124767
epoch: 5, loss: 0.9622886116576249
epoch: 6, loss: 0.9619515570811104
epoch: 7, loss: 0.9616269249924927
epoch: 8, loss: 0.961314252842035
epoch: 9, loss: 0.9610130955541414
epoch: 10, loss: 0.9607230246097402
epoch: 11, loss: 0.9604436282531694
epoch: 12, loss: 0.9601745095184536
epoch: 13, loss: 0.9599152864984009
epoch: 14, loss: 0.9596655917972626
epoch: 15, loss: 0.9594250718811523
epoch: 16, loss: 0.9591933860391304
epoch: 17, loss: 0.9589702067954244
epoch: 18, loss: 0.9587552187163422
epoch: 19, loss: 0.9585481182398691
epoch: 20, loss: 0.958348612958298
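As a sanity check (my addition, not part of the original demo): ordinary least squares has a closed-form solution, so the line that gradient descent should converge toward can be computed directly with numpy and compared against the learned a and b.

```python
import numpy as np

X = np.array([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182,
              7.59, 2.167, 7.042, 10.791, 5.313, 7.997, 3.1])
Y = np.array([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596,
              2.53, 1.221, 2.827, 3.465, 1.65, 2.904, 1.3])

A = np.stack([X, np.ones_like(X)], axis=1)       # design matrix [x, 1]
(a, b), *_ = np.linalg.lstsq(A, Y, rcond=None)   # minimizes mean((a*x + b - y)^2)
print('optimal a = {:.4f}, b = {:.4f}'.format(a, b))
```

After enough epochs the loss printed by the demo should approach the MSE of this closed-form line; if it does not, the learning rate or epoch count needs adjusting.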