enumerate() yields both the index and the element value; its optional second argument sets the starting index. For example:
langs = ["Python", "Java", "C"]  # avoid shadowing the built-in name `list`
for index, key in enumerate(langs):
    print(index, key)
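A short sketch of the optional second argument mentioned above (the variable name `langs` is just illustrative):

```python
langs = ["Python", "Java", "C"]
# Start counting from 1 instead of the default 0
for index, key in enumerate(langs, 1):
    print(index, key)
# prints:
# 1 Python
# 2 Java
# 3 C
```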
torch.randn() returns random numbers drawn from a standard normal distribution.
example1:
import torch
input = torch.randn(1, 1, 3, 4)
print(input)
Output:
tensor([[[[ 0.5028, -0.7468, 1.8858, 0.1745],
[ 0.8540, 0.0401, 1.4751, 0.9010],
[-0.3230, -0.4141, -0.4215, 0.1705]]]])
example2:
import torch
input = torch.randn(1, 2, 3, 4)
print(input)
Output:
tensor([[[[ 1.5492, 0.9120, 0.9391, -0.2901],
[-0.3356, -0.3431, 1.0347, -1.6674],
[-0.1109, -0.1498, -1.2600, 0.1818]],
[[ 1.8994, 0.0805, -2.7722, -1.1939],
[ 1.4740, -0.2008, -0.1438, -1.1926],
[-0.6315, 0.8516, 1.9624, -1.2148]]]])
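As a quick sanity check (a sketch, not part of the original notes): with a large sample, the mean and standard deviation of torch.randn() should come out close to 0 and 1.

```python
import torch

torch.manual_seed(0)      # make the draw reproducible
x = torch.randn(100000)   # 100k samples from N(0, 1)
print(x.mean().item())    # close to 0
print(x.std().item())     # close to 1
```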
The first three arguments of nn.Conv2d() are (in_channels, out_channels, kernel_size).
Convolution: note in particular that padding=N pads the input with N rows/columns on each side (filled with zeros by default).
example1:
import torch
import torch.nn as nn
input = torch.randn(1, 1, 3, 4)
print(input)
m = nn.Conv2d(1,1,1,stride=1,bias=False)
print(m(input))
m1 = nn.Conv2d(1,1,1,stride=1,bias=False,padding=1)
print(m1(input))
m2 = nn.Conv2d(1,1,1,stride=1,bias=False,padding=2)
print(m2(input))
Output:
tensor([[[[ 1.2922, -1.6056, -0.2292, -1.1778],
[-1.1310, -1.9764, -1.2235, -0.5288],
[ 1.5305, -0.1229, -1.3054, 1.3235]]]])
tensor([[[[ 0.3306, -0.4108, -0.0586, -0.3014],
[-0.2894, -0.5057, -0.3131, -0.1353],
[ 0.3916, -0.0314, -0.3340, 0.3386]]]],
grad_fn=<ThnnConv2DBackward>)
tensor([[[[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.7538, -0.9366, -0.1337, -0.6871, 0.0000],
[ 0.0000, -0.6598, -1.1529, -0.7137, -0.3085, 0.0000],
[ 0.0000, 0.8928, -0.0717, -0.7615, 0.7720, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]]],
grad_fn=<ThnnConv2DBackward>)
tensor([[[[ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
[ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
[ 0.0000,  0.0000, -0.3415,  0.4243,  0.0606,  0.3113,  0.0000,  0.0000],
[ 0.0000,  0.0000,  0.2989,  0.5223,  0.3233,  0.1397,  0.0000,  0.0000],
[ 0.0000,  0.0000, -0.4045,  0.0325,  0.3450, -0.3498,  0.0000,  0.0000],
[ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
[ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000]]]],
grad_fn=<ThnnConv2DBackward>)
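The spatial sizes above (3×4 → 3×4, 5×6, 7×8) follow the standard convolution output formula H_out = (H + 2·padding - kernel_size) // stride + 1. A short sketch verifying it for the three layers in the example:

```python
import torch
import torch.nn as nn

def conv_out(size, kernel_size, stride=1, padding=0):
    # Standard Conv2d output-size formula (dilation = 1)
    return (size + 2 * padding - kernel_size) // stride + 1

x = torch.randn(1, 1, 3, 4)
for pad in (0, 1, 2):
    m = nn.Conv2d(1, 1, 1, stride=1, bias=False, padding=pad)
    out = m(x)
    h = conv_out(3, 1, padding=pad)  # 3, 5, 7
    w = conv_out(4, 1, padding=pad)  # 4, 6, 8
    assert out.shape == (1, 1, h, w)
    print(pad, tuple(out.shape))
```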
permute() reorders the dimensions of a Tensor.
example1:
import torch
x = torch.empty([6, 7, 8, 9])
print(x.size())
x = x.permute([0, 1, 3, 2])
print(x.size())
Output:
torch.Size([6, 7, 8, 9])
torch.Size([6, 7, 9, 8])
Before the permute, dimension 2 of the tensor has 8 elements and dimension 3 has 9; permuting with [0, 1, 3, 2] swaps these two dimensions, so afterwards dimension 2 has 9 elements and dimension 3 has 8.
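One point worth noting (a sketch): permute() only rearranges the stride metadata rather than moving data, so the result is usually no longer contiguous in memory. This is why view() after a permute typically needs .contiguous() first:

```python
import torch

x = torch.empty(6, 7, 8, 9)
y = x.permute(0, 1, 3, 2)   # swap the last two dimensions
print(y.size())             # torch.Size([6, 7, 9, 8])
print(y.is_contiguous())    # False: only strides changed, not the data
y = y.contiguous()          # materialize the new memory layout
print(y.is_contiguous())    # True: y can now be passed to view()
```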
view() does the job of reshape(); it requires the tensor to be contiguous, so a common idiom is .contiguous().view(n, -1, …).
example1:
import torch

a = torch.arange(1, 17)  # a's shape is (16,)
print(a.view(4, 4))      # output below
print(a.view(2, 2, 4))   # output below
Output:
tensor([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 11, 12],
[13, 14, 15, 16]])
tensor([[[ 1, 2, 3, 4],
[ 5, 6, 7, 8]],
[[ 9, 10, 11, 12],
[13, 14, 15, 16]]])
a.view(2, 2, 4) reshapes a into two 2×4 matrices; a.view(n, -1, 4) reshapes it into n matrices with 4 columns each, where -1 lets PyTorch infer the number of rows automatically.
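A short sketch of how -1 is inferred: the missing size is the total number of elements divided by the product of the sizes you do specify.

```python
import torch

a = torch.arange(1, 17)  # 16 elements
b = a.view(2, -1, 4)     # -1 is inferred as 16 / (2 * 4) = 2
print(b.shape)           # torch.Size([2, 2, 4])
c = a.view(-1, 8)        # -1 is inferred as 16 / 8 = 2
print(c.shape)           # torch.Size([2, 8])
```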