Set the default tensor dtype: `torch.set_default_tensor_type(torch.DoubleTensor)`
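A quick check of the effect (a minimal sketch; on newer PyTorch versions `torch.set_default_dtype(torch.float64)` is the preferred way to change the default floating-point dtype):

```python
import torch

torch.set_default_tensor_type(torch.DoubleTensor)
x = torch.zeros(3)     # newly created floating-point tensors now default to float64
print(x.dtype)         # torch.float64
```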
PyTorch has eight tensor types:

| Data type | dtype | Tensor types |
|---|---|---|
| 32-bit floating point | torch.float32 or torch.float | torch.\*.FloatTensor |
| 64-bit floating point | torch.float64 or torch.double | torch.\*.DoubleTensor |
| 16-bit floating point | torch.float16 or torch.half | torch.\*.HalfTensor |
| 8-bit integer (unsigned) | torch.uint8 | torch.\*.ByteTensor |
| 8-bit integer (signed) | torch.int8 | torch.\*.CharTensor |
| 16-bit integer (signed) | torch.int16 or torch.short | torch.\*.ShortTensor |
| 32-bit integer (signed) | torch.int32 or torch.int | torch.\*.IntTensor |
| 64-bit integer (signed) | torch.int64 or torch.long | torch.\*.LongTensor |
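For reference, the dtype can also be set per tensor at creation time or converted afterwards; a minimal sketch:

```python
import torch

a = torch.tensor([1, 2, 3], dtype=torch.float32)   # explicit dtype at creation
b = a.to(torch.int64)                               # convert with .to()
c = a.double()                                      # shorthand casts: .float(), .double(), .half(), .long(), ...
print(a.dtype, b.dtype, c.dtype)                    # torch.float32 torch.int64 torch.float64
```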
Saving a model:

```python
def save_checkpoint(model, optimizer, scheduler, save_path):
    # Other variables can be added here if they also need to be saved
    torch.save({
        'model_state_dict': model.state_dict(),
        'optimizer_state_dict': optimizer.state_dict(),
        'scheduler_state_dict': scheduler.state_dict(),
    }, save_path)

# Load a model
checkpoint = torch.load(pretrain_model_path)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
scheduler.load_state_dict(checkpoint['scheduler_state_dict'])
...
```
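If the checkpoint was saved on one device (e.g. a GPU) and must be loaded on another, `torch.load` accepts a `map_location` argument; a minimal sketch, reusing `pretrain_model_path` and `model` from above:

```python
import torch

# Load the checkpoint onto the CPU (a specific device string such as 'cuda:0' also works)
checkpoint = torch.load(pretrain_model_path, map_location='cpu')
model.load_state_dict(checkpoint['model_state_dict'])
```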
Printing the model's gradients:

```python
# Print gradients
for name, parameters in model.named_parameters():
    print('{}\'s grad is:\n{}\n'.format(name, parameters.grad))
```
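Note that `parameters.grad` is `None` until a backward pass has been run; a minimal sketch with a hypothetical tiny model and random data:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                        # hypothetical model
x, y = torch.randn(8, 4), torch.randn(8, 2)    # random input/target
loss = nn.functional.mse_loss(model(x), y)
loss.backward()                                # gradients exist only after backward()

for name, parameters in model.named_parameters():
    print('{}\'s grad is:\n{}\n'.format(name, parameters.grad))
```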
Using a learning rate decay strategy:

```python
# Exponential decay
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
# Step decay
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.5)
# Decay at custom milestones
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[400], gamma=0.5)
```
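Whichever scheduler is chosen, `scheduler.step()` must be called to advance the decay, typically once per epoch; a minimal sketch with hypothetical `model`, `criterion`, `train_loader`, and `num_epochs`:

```python
for epoch in range(num_epochs):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()   # decay the learning rate once per epoch
    print('epoch {}: lr = {}'.format(epoch, optimizer.param_groups[0]['lr']))
```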
Gradient clipping:

```python
def clip_gradient(optimizer, grad_clip):
    """
    Clips gradients computed during backpropagation to avoid exploding gradients.
    :param optimizer: optimizer holding the gradients to be clipped
    :param grad_clip: clip value
    """
    for group in optimizer.param_groups:
        for param in group["params"]:
            if param.grad is not None:
                param.grad.data.clamp_(-grad_clip, grad_clip)
```
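Clipping is applied between `loss.backward()` and `optimizer.step()`; PyTorch also ships a built-in norm-based alternative, `torch.nn.utils.clip_grad_norm_`. A minimal sketch with hypothetical `loss` and `model`:

```python
optimizer.zero_grad()
loss.backward()
clip_gradient(optimizer, grad_clip=5.0)  # element-wise clipping as defined above
# Alternatively, clip by global norm using the built-in utility:
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```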
Example of a custom activation function:

```python
class OutExp(nn.Module):
    def __init__(self):
        super(OutExp, self).__init__()

    def forward(self, x):
        x = -torch.exp(x)
        return x
```
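The module can then be used like any built-in activation; a minimal sketch:

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(10, 5),
    OutExp(),           # the custom activation defined above
    nn.Linear(5, 1),
)
out = net(torch.randn(2, 10))
print(out.shape)        # torch.Size([2, 1])
```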
Modifying the parameters of a specific layer with nn.Parameter():

```python
# Modify the bias of layer 2 (`layer` is the name given when the model was defined)
model.layer[2].bias = nn.Parameter(torch.tensor([-0.01, -0.4], device=device, requires_grad=True))
```
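For context, a hypothetical model in which `layer` is an `nn.Sequential` attribute, so `model.layer[2]` indexes its third submodule (`device=` is dropped here since no device is defined in the sketch):

```python
import torch
import torch.nn as nn

class Net(nn.Module):                      # hypothetical model
    def __init__(self):
        super(Net, self).__init__()
        self.layer = nn.Sequential(
            nn.Linear(4, 3),
            nn.ReLU(),
            nn.Linear(3, 2),               # model.layer[2]; its bias has 2 elements
        )

    def forward(self, x):
        return self.layer(x)

model = Net()
model.layer[2].bias = nn.Parameter(torch.tensor([-0.01, -0.4], requires_grad=True))
print(model.layer[2].bias)
```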
Model parameter initialization:

```python
# Custom weight initialization
def weight_init(m):
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight, gain=0.1)
        nn.init.constant_(m.bias, 0)
    # The same idea applies to other layer types, e.g. Conv2d
    elif isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
    # Batch normalization layers
    elif isinstance(m, nn.BatchNorm2d):
        nn.init.constant_(m.weight, 1)
        nn.init.constant_(m.bias, 0)

# Apply the function to the model
model.apply(weight_init)
```
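A minimal sketch of applying it to a hypothetical model; `apply()` visits every submodule recursively and passes each one to `weight_init`:

```python
import torch.nn as nn

net = nn.Sequential(                 # hypothetical model
    nn.Conv2d(3, 16, kernel_size=3),
    nn.BatchNorm2d(16),
    nn.Flatten(),
    nn.Linear(16 * 30 * 30, 10),
)
net.apply(weight_init)               # recursively applies weight_init to every submodule
print(net[3].weight.abs().max())     # small values, consistent with gain=0.1
```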