1. Install PyTorch version 0.1.11
## Version 0.1.11
## Python 2.7 and CUDA 8.0
pip install http://download.pytorch.org/whl/cu80/torch-0.1.11.post5-cp27-none-linux_x86_64.whl
pip install torchvision
2. What happened when the following error occurs?

Traceback (most recent call last):
File "examples/triplet_loss.py", line 221, in <module>
File "examples/triplet_loss.py", line 150, in main
File "build/bdist.linux-x86_64/egg/reid/evaluators.py", line 118, in evaluate
File "build/bdist.linux-x86_64/egg/reid/evaluators.py", line 21, in extract_features
File "/usr/local/lib/python2.7/dist-packages/torch/utils_v2/data/dataloader.py", line 301, in __iter__
File "/usr/local/lib/python2.7/dist-packages/torch/utils_v2/data/dataloader.py", line 163, in __init__
File "/usr/local/lib/python2.7/dist-packages/torch/utils_v2/data/dataloader.py", line 226, in _put_indices
File "/usr/lib/python2.7/multiprocessing/queues.py", line 390, in put
File "/usr/local/lib/python2.7/dist-packages/torch/multiprocessing/queue.py", line 17, in send
File "/usr/lib/python2.7/pickle.py", line 224, in dump
File "/usr/lib/python2.7/pickle.py", line 286, in save
File "/usr/lib/python2.7/pickle.py", line 548, in save_tuple
File "/usr/lib/python2.7/pickle.py", line 286, in save
File "/usr/lib/python2.7/pickle.py", line 600, in save_list
File "/usr/lib/python2.7/pickle.py", line 633, in _batch_appends
File "/usr/lib/python2.7/pickle.py", line 286, in save
File "/usr/lib/python2.7/pickle.py", line 600, in save_list
File "/usr/lib/python2.7/pickle.py", line 633, in _batch_appends
File "/usr/lib/python2.7/pickle.py", line 286, in save
File "/usr/lib/python2.7/pickle.py", line 562, in save_tuple
File "/usr/lib/python2.7/pickle.py", line 286, in save
File "/usr/lib/python2.7/multiprocessing/forking.py", line 67, in dispatcher
File "/usr/lib/python2.7/pickle.py", line 401, in save_reduce
File "/usr/lib/python2.7/pickle.py", line 286, in save
File "/usr/lib/python2.7/pickle.py", line 548, in save_tuple
File "/usr/lib/python2.7/pickle.py", line 286, in save
File "/usr/lib/python2.7/multiprocessing/forking.py", line 66, in dispatcher
File "/usr/local/lib/python2.7/dist-packages/torch/multiprocessing/reductions.py", line 113, in reduce_storage
RuntimeError: unable to open shared memory object </torch_29419_2971992535> in read-write mode at /b/wheel/pytorch-src/torch/lib/TH/THAllocator.c:226
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/util.py", line 274, in _run_finalizers
File "/usr/lib/python2.7/multiprocessing/util.py", line 207, in __call__
File "/usr/lib/python2.7/shutil.py", line 239, in rmtree
File "/usr/lib/python2.7/shutil.py", line 237, in rmtree
OSError: [Errno 24] Too many open files: '/tmp/pymp-QoKm2p'
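The two tracebacks above (the failed shared-memory object and "Too many open files") usually mean the DataLoader worker processes exhausted the file-descriptor / shared-memory limit. A few common workarounds, as a hedged sketch (assuming the crash really comes from DataLoader workers): raise the open-file limit in the shell (e.g. ulimit -n), reduce num_workers, or switch PyTorch's multiprocessing to the file-system sharing strategy:
import torch.multiprocessing
# Share tensors through the file system instead of file descriptors,
# so each shared tensor no longer keeps a descriptor open.
torch.multiprocessing.set_sharing_strategy('file_system')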
3. Converting between GPU and CPU data:
(1) CPU ---> GPU: a.cuda()
(2) GPU ---> CPU: a.cpu()
(3) torch.Tensor ---> numpy array:
a_numpy_style = a.numpy()
(4) numpy array ---> torch.Tensor:
>>> import numpy as np
>>> a = np.ones(5)
>>> b = torch.from_numpy(a)
>>> np.add(a, 1, out=a)
array([ 2.,  2.,  2.,  2.,  2.])
>>> print(a)
[ 2.  2.  2.  2.  2.]
>>> print(b)

 2
 2
 2
 2
 2
[torch.DoubleTensor of size 5]

>>> c = b.numpy()
>>> c
array([ 2.,  2.,  2.,  2.,  2.])
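A small sketch putting the four conversions together (assuming a CUDA device is available; a GPU tensor must be moved back to the CPU before calling .numpy()):
import numpy as np
import torch

a = torch.ones(5)                  # CPU tensor
b = a.cuda()                       # (1) CPU -> GPU
c = b.cpu()                        # (2) GPU -> CPU
d = c.numpy()                      # (3) torch.Tensor -> numpy array (CPU tensors only)
e = torch.from_numpy(np.ones(5))   # (4) numpy array -> torch.Tensor (shares memory)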
4. Variable and Tensor:
==>> The program raised an error like:
expected a Variable, but got a FloatTensor
==>> This can be solved by adding:
from torch.autograd import Variable
hard_neg_differ_ = Variable(hard_neg_differ_)
==>> This wraps hard_neg_differ_ in a Variable, so it is no longer a plain FloatTensor.
We can read this reference: http://blog.csdn.net/shudaqi2010/article/details/54880748
It tells us:

>>> import torch
>>> x = torch.Tensor(2,3,4)
>>> x
(0 ,.,.) =
1.00000e-37 *
2.4168 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
(1 ,.,.) =
1.00000e-37 *
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
[torch.FloatTensor of size 2x3x4]
>>> from torch.autograd import Variable
>>> x = Variable(x)
>>> x
Variable containing:
(0 ,.,.) =
1.00000e-37 *
2.4168 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
(1 ,.,.) =
1.00000e-37 *
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
[torch.FloatTensor of size 2x3x4]
However, you cannot directly convert a Variable to numpy() or anything else. You have to take the values out of the Variable via its .data field and convert those:
value = variable.data.numpy()
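A minimal sketch of the wrap/unwrap cycle (the names are placeholders):
import torch
from torch.autograd import Variable

t = torch.rand(2, 3)        # a plain FloatTensor
v = Variable(t)             # wrap it so autograd-based code accepts it
arr = v.data.numpy()        # unwrap: Variable -> underlying Tensor -> numpy array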
5. Some operations on arrays/tensors, obtained from this blog: http://www.cnblogs.com/huangshiyu13/p/6672828.html
============ Changing the dimensions of an array ==================
reshape can turn a one-dimensional array into a multi-dimensional one.
ravel flattens an array:
b.ravel()
flatten() achieves the same result.
Difference: ravel only returns a view, while flatten allocates new memory.
Reshaping:
Set the dimensions with a tuple:
>>> b.shape = (4, 2, 3)
>>> b
array([[[ 0,  1,  2],
        [ 3,  4,  5]],

       [[ 6,  7,  8],
        [ 9, 10, 11]],

       [[12, 13, 14],
        [15, 16, 17]],

       [[18, 19, 20],
        [21, 22, 23]]])
Transpose:
>>> b
array([[0, 1],
       [2, 3]])
>>> b.transpose()
array([[0, 2],
       [1, 3]])
============= Combining arrays ==============
>>> a
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
>>> b = a*2
>>> b
array([[ 0,  2,  4],
       [ 6,  8, 10],
       [12, 14, 16]])
1. Horizontal stacking
>>> np.hstack((a,b))
array([[ 0,  1,  2,  0,  2,  4],
       [ 3,  4,  5,  6,  8, 10],
       [ 6,  7,  8, 12, 14, 16]])
>>> np.concatenate((a,b), axis=1)
array([[ 0,  1,  2,  0,  2,  4],
       [ 3,  4,  5,  6,  8, 10],
       [ 6,  7,  8, 12, 14, 16]])
2. Vertical stacking
>>> np.vstack((a,b))
array([[ 0,  1,  2],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 0,  2,  4],
       [ 6,  8, 10],
       [12, 14, 16]])
>>> np.concatenate((a,b), axis=0)
array([[ 0,  1,  2],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 0,  2,  4],
       [ 6,  8, 10],
       [12, 14, 16]])
3. Depth stacking: combines along the depth (third) axis
>>> np.dstack((a,b))
array([[[ 0,  0],
        [ 1,  2],
        [ 2,  4]],

       [[ 3,  6],
        [ 4,  8],
        [ 5, 10]],

       [[ 6, 12],
        [ 7, 14],
        [ 8, 16]]])
4. Column stacking: column_stack()
For 1-D arrays: stacks them as columns.
For 2-D arrays: same as hstack.
5. Row stacking: row_stack()
For 1-D arrays: stacks them as rows.
For 2-D arrays: same as vstack.
(See the sketch after this list for a 1-D example.)
6. == compares two arrays element-wise
>>> a == b
array([[ True, False, False],
       [False, False, False],
       [False, False, False]], dtype=bool)
# the single True is where both entries are 0
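A short sketch of column_stack / row_stack on 1-D arrays (the variable names are just for illustration):
import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5, 6])

# 1-D inputs become the columns of a 2-D result
print(np.column_stack((x, y)))   # [[1 4] [2 5] [3 6]]

# 1-D inputs become the rows of a 2-D result
print(np.row_stack((x, y)))      # [[1 2 3] [4 5 6]]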
================== Splitting arrays ===============
>>> a
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
>>> b = a*2
>>> b
array([[ 0,  2,  4],
       [ 6,  8, 10],
       [12, 14, 16]])
1. Horizontal splitting (isn't this really a vertical split???)
>>> np.hsplit(a, 3)
[array([[0],
       [3],
       [6]]), array([[1],
       [4],
       [7]]), array([[2],
       [5],
       [8]])]
split(a, 3, axis=1) achieves the same result.
2. Vertical splitting
>>> np.vsplit(a, 3)
[array([[0, 1, 2]]), array([[3, 4, 5]]), array([[6, 7, 8]])]
split(a, 3, axis=0) achieves the same result.
3. Depth splitting
A 3-D array:
>>> d = np.arange(27).reshape(3, 3, 3)
>>> d
array([[[ 0,  1,  2],
        [ 3,  4,  5],
        [ 6,  7,  8]],

       [[ 9, 10, 11],
        [12, 13, 14],
        [15, 16, 17]],

       [[18, 19, 20],
        [21, 22, 23],
        [24, 25, 26]]])
Depth splitting splits along the depth (third) axis.
Note: dsplit only works on arrays with 3 or more dimensions; otherwise it raises:
ValueError: dsplit only works on arrays of 3 or more dimensions
>>> np.dsplit(d, 3)
[array([[[ 0],
        [ 3],
        [ 6]],

       [[ 9],
        [12],
        [15]],

       [[18],
        [21],
        [24]]]), array([[[ 1],
        [ 4],
        [ 7]],

       [[10],
        [13],
        [16]],

       [[19],
        [22],
        [25]]]), array([[[ 2],
        [ 5],
        [ 8]],

       [[11],
        [14],
        [17]],

       [[20],
        [23],
        [26]]])]
=================== Array attributes =================
>>> a.shape     # array dimensions
(3, 3)
>>> a.dtype     # element type
dtype('int32')
>>> a.size      # number of elements
9
>>> a.itemsize  # bytes per element
4
>>> a.nbytes    # total storage in bytes = itemsize * size
36
>>> a.T         # transpose, same as transpose()
array([[0, 3, 6],
       [1, 4, 7],
       [2, 5, 8]])
6. Image paste using Python (PIL):
from PIL import Image

im = Image.open('/home/wangxiao/Pictures/9c1147d3gy1fjuyywz23sj20dl09u3yw.jpg')
box = (100, 100, 500, 500)
region = im.crop(box)        # crop the region defined by box
im.paste(region, (100, 70))  # paste the cropped region back at a new position
im.show()

7. PyTorch: save checkpoints
torch.save(model.state_dict(), filename)
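A minimal sketch of the matching load step (a toy nn.Linear stands in for the real model; the same architecture must be rebuilt before restoring the weights):
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                          # any nn.Module works the same way
torch.save(model.state_dict(), 'checkpoint.pth')  # save only the parameters

restored = nn.Linear(10, 2)                       # rebuild the same architecture...
restored.load_state_dict(torch.load('checkpoint.pth'))  # ...then load the weights back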
8. Install Python 3.5 on an Ubuntu system:
sudo add-apt-repository ppa:fkrull/deadsnakes
sudo apt-get update
sudo apt-get install python3.5
To test it, just type: python3.5
9. Load an image into a tensor & save tensor data back to an image file.
from PIL import Image
import numpy as np
import torch

def tensor_load_rgbimage(filename, size=None, scale=None):
    img = Image.open(filename)
    if size is not None:
        img = img.resize((size, size), Image.ANTIALIAS)
    elif scale is not None:
        img = img.resize((int(img.size[0] / scale), int(img.size[1] / scale)), Image.ANTIALIAS)
    img = np.array(img).transpose(2, 0, 1)   # HWC -> CHW
    img = torch.from_numpy(img).float()
    return img

def tensor_save_rgbimage(tensor, filename, cuda=False):
    if cuda:
        img = tensor.clone().cpu().clamp(0, 255).numpy()
    else:
        img = tensor.clone().clamp(0, 255).numpy()
    img = img.transpose(1, 2, 0).astype('uint8')  # CHW -> HWC
    img = Image.fromarray(img)
    img.save(filename)
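A short usage sketch (the file names are placeholders):
img = tensor_load_rgbimage('input.jpg', size=256)   # CHW float tensor
tensor_save_rgbimage(img, 'output.jpg')             # write it back to disk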
10. Often used operations in PyTorch:
########################## save log files #############################################
logfile_path = './log_files_AAE_2017.10.08.16:20.txt'
fobj = open(logfile_path, 'a')
fobj.writelines(['Epoch: %d Niter:%d Loss_VAE: %.4f Loss_D: %.4f Loss_D_noise: %.4f Loss_G: %.4f D(x): %.4f D(G(z)): %.4f / %.4f \n'
                 % (EEEPoch, total_epoch, VAEerr.data[0], errD_noise.data[0], errD.data[0], total_errG.data[0], D_x, D_G_z1, D_G_z2)])
fobj.close()
# print('==>> saving txt files ... Done!')
########################### save checkpoints ###########################
if epoch % opt.saveInt == 0 and epoch != 0:
    torch.save(netG.state_dict(), '%s/netG_epoch_%d.pth' % (opt.outf, epoch))
    # torch.save(netD.state_dict(), '%s/netD_epoch_%d.pth' % (opt.outf, epoch))
    # torch.save(netD_gaussian.state_dict(), '%s/netD_Z_epoch_%d.pth' % (opt.outf, epoch))
# ########################### save middle images into folders ###########################
# img_index = EEEPoch + index_batch + epoch
# if epoch % 10 == 0:
#     vutils.save_image(real_cpu, '%s/real_samples.png' % img_index,
#                       normalize=True)
#     fake = netG.decoder(fixed_noise)
#     vutils.save_image(fake.data,
#                       '%s/fake_samples_epoch_%03d.png' % (img_index, img_index),
#                       normalize=True)
11. Error: RuntimeError: tensors are on different GPUs
==>> This is caused by moving the data onto the GPU without also moving the pre-defined model (or vice versa); both must live on the same device, as in the sketch below.
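A minimal sketch (toy model, single CUDA device assumed): put both the model and the inputs on the same GPU before the forward pass.
import torch
import torch.nn as nn
from torch.autograd import Variable

model = nn.Linear(10, 2).cuda()              # model parameters on the GPU
inputs = Variable(torch.rand(4, 10).cuda())  # inputs on the same GPU
outputs = model(inputs)                      # no "different GPUs" error now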