Based on the development of X2Paddle, this project organizes the model conversion between PyTorch (v1.8.1) and PaddlePaddle 2.0.0, along with a mapping and difference analysis of commonly used APIs. It aims to help developers quickly transfer their PyTorch experience to PaddlePaddle for model development and tuning.
X2Paddle
X2Paddle converts models trained in other deep learning frameworks, including TensorFlow, Caffe, ONNX, and PyTorch, into PaddlePaddle models.
Installation
```bash
pip install x2paddle==1.0.0rc0 --index-url https://pypi.python.org/simple/
```
PyTorch2Paddle
PyTorch2Paddle supports two conversion modes, trace and script. Both convert a PyTorch dynamic graph into a Paddle dynamic graph, and the resulting Paddle dynamic graph can then be converted into a static-graph model via dynamic-to-static conversion. Trace mode generates fairly readable code that stays close to the structure of the original PyTorch code. Script mode does not need to know the type and size of the input data, which makes it more convenient to use, but PyTorch currently restricts which code can be scripted, so the code that can be converted this way is correspondingly limited. Choose whichever mode fits your needs.
Trace mode requires two additional dependencies: pandas and treelib.
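They can be installed with pip, for example:

```bash
pip install pandas treelib
```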
Usage
```python
from x2paddle.convert import pytorch2paddle

pytorch2paddle(module=torch_module,
               save_dir="./pd_model",
               jit_type="trace",
               input_examples=[torch_input])
# module (torch.nn.Module): the PyTorch Module.
# save_dir (str): the directory in which to save the converted model.
# jit_type (str): the conversion mode; defaults to "trace".
# input_examples (list[torch.tensor]): example inputs for the torch.nn.Module.
#     The length of the list must match the number of model inputs. Defaults to None.
```
Note: when jit_type is "trace", input_examples must not be None, and dynamic-to-static conversion is performed automatically after the model conversion; when jit_type is "script", dynamic-to-static conversion is performed only if input_examples is not None.
Example
```python
import torch
import numpy as np
from torchvision.models import AlexNet
from torchvision.models.utils import load_state_dict_from_url

# Build an example input
input_data = np.random.rand(1, 3, 224, 224).astype("float32")

# Get the PyTorch Module
torch_module = AlexNet()
torch_state_dict = load_state_dict_from_url('https://download.pytorch.org/models/alexnet-owt-4df8aa71.pth')
torch_module.load_state_dict(torch_state_dict)

# Switch to eval mode
torch_module.eval()

# Run the conversion
from x2paddle.convert import pytorch2paddle
pytorch2paddle(torch_module,
               save_dir="pd_model_trace",
               jit_type="trace",
               input_examples=[torch.tensor(input_data)])
```
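For comparison, a script-mode conversion of the same model is sketched below. This is a minimal sketch: the save_dir name pd_model_script is an illustrative choice, and, per the note above, input_examples may be omitted in script mode at the cost of skipping the automatic dynamic-to-static step.

```python
from x2paddle.convert import pytorch2paddle

# Script-mode conversion; input_examples is optional here, but supplying it
# enables dynamic-to-static conversion of the resulting Paddle model.
pytorch2paddle(torch_module,
               save_dir="pd_model_script",  # illustrative output directory
               jit_type="script",
               input_examples=[torch.tensor(input_data)])
```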
PyTorch-PaddlePaddle API mapping tables
The mapping tables cover the correspondence between commonly used PyTorch (v1.8.1) APIs and PaddlePaddle 2.0.0 APIs, together with an analysis of their differences.
API mapping table index
| Category | Description |
| --- | --- |
| Basic operation API mapping table | mainly torch.XX APIs |
| Network-building API mapping table | mainly the network-building APIs under torch.nn.XX |
| Loss API mapping table | mainly the loss-related APIs under torch.nn.XX |
| Utility API mapping table | mainly the distributed APIs under torch.nn.XX and the torch.utils.XX APIs |
| Vision API mapping table | mainly torchvision.XX APIs |
Note: all of the API mapping tables are continuously being updated…
A simple PyTorch-to-PaddlePaddle example
PyTorch code (from the official documentation)
```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda, Compose

# Download training data from open datasets.
training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
)

# Download test data from open datasets.
test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor(),
)

batch_size = 64

# Create data loaders.
train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)

for X, y in test_dataloader:
    print("Shape of X [N, C, H, W]: ", X.shape)
    print("Shape of y: ", y.shape, y.dtype)
    break

# Get cpu or gpu device for training.
device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using {} device".format(device))

# Define model
class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
            nn.ReLU()
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork().to(device)
print(model)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)

        # Compute prediction error
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    model.eval()
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")

epochs = 5
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train(train_dataloader, model, loss_fn, optimizer)
    test(test_dataloader, model, loss_fn)
```
PaddlePaddle code
1. Import the required libraries
```python
import paddle
from paddle import nn
from paddle.io import DataLoader
from paddle.vision import datasets
from paddle.vision.transforms import ToTensor, Compose
from visualdl import LogWriter
```
2. Get the FashionMNIST dataset
```python
training_data = datasets.FashionMNIST(
    mode='train',
    transform=ToTensor(),
)
test_data = datasets.FashionMNIST(
    mode='test',
    transform=ToTensor(),
)
```
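Compared with the PyTorch version above, Paddle's FashionMNIST takes a single mode argument in place of root/train/download, and it downloads the data automatically by default. Side by side (both calls are taken from the code in this document):

```python
# PyTorch: explicit data root, split flag, and download flag
datasets.FashionMNIST(root="data", train=True, download=True, transform=ToTensor())

# PaddlePaddle: a single mode argument; the data is downloaded automatically
datasets.FashionMNIST(mode='train', transform=ToTensor())
```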
3. Set up the DataLoader
```python
batch_size = 64
train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)

for X, y in test_dataloader:
    print("Shape of X [N, C, H, W]: ", X.shape)
    print("Shape of y: ", y.shape, y.dtype)
    break
```
```
Shape of X [N, C, H, W]:  [64, 1, 28, 28]
Shape of y:  [64, 1] paddle.int64
```
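Note that the labels come back with shape [64, 1] here, whereas PyTorch's DataLoader yields shape [64]; this is why the test loop in step 7 squeezes y before comparing it with the argmax predictions.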
4. Define the network
```python
# Select the compute device globally
place = paddle.set_device('gpu' if paddle.is_compiled_with_cuda() else 'cpu')

# nn.Layer is Paddle's counterpart of torch.nn.Module
class NeuralNetwork(nn.Layer):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
            nn.ReLU()
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork()
paddle.summary(model, input_size=(1, 28*28), dtypes='float32')
```
```
---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #
===========================================================================
   Flatten-1        [[1, 784]]           [1, 784]             0
   Linear-1         [[1, 784]]           [1, 512]             401,920
    ReLU-1          [[1, 512]]           [1, 512]             0
   Linear-2         [[1, 512]]           [1, 512]             262,656
    ReLU-2          [[1, 512]]           [1, 512]             0
   Linear-3         [[1, 512]]           [1, 10]              5,130
    ReLU-3          [[1, 10]]            [1, 10]              0
===========================================================================
Total params: 669,706
Trainable params: 669,706
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.02
Params size (MB): 2.55
Estimated Total Size (MB): 2.58
---------------------------------------------------------------------------

{'total_params': 669706, 'trainable_params': 669706}
```
5. Set up the loss function and optimizer
```python
loss_fn = nn.CrossEntropyLoss()
optimizer = paddle.optimizer.SGD(parameters=model.parameters(), learning_rate=1e-3)
```
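This line shows a typical argument-name difference between the two frameworks; comparing it with the PyTorch version above:

```python
# PyTorch: parameters passed positionally, learning rate as `lr`
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# PaddlePaddle: keyword `parameters`, learning rate as `learning_rate`
optimizer = paddle.optimizer.SGD(parameters=model.parameters(), learning_rate=1e-3)
```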
6. Define the training loop
```python
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    for batch, (X, y) in enumerate(dataloader):
        # Compute prediction error
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        optimizer.clear_grad()
        loss.backward()
        optimizer.step()

        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
```
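Two mapping details in this loop are worth calling out: gradients are cleared with a differently named method, and the explicit X.to(device) / y.to(device) calls from the PyTorch version are unnecessary because paddle.set_device in step 4 already selects the device globally.

```python
optimizer.zero_grad()   # PyTorch
optimizer.clear_grad()  # PaddlePaddle equivalent
```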
7. Define the test loop
```python
def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    model.eval()
    test_loss, correct = 0, 0
    with paddle.no_grad():
        for X, y in dataloader:
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            # Labels have shape [N, 1], so squeeze before comparing with argmax
            correct += (pred.argmax(1).numpy() == y.squeeze().numpy()).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
    return test_loss, correct
```
8. Train and evaluate
```python
epochs = 5
log_writer = LogWriter(logdir="./log")
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train(train_dataloader, model, loss_fn, optimizer)
    loss, acc = test(test_dataloader, model, loss_fn)
    log_writer.add_scalar(tag="test/loss", step=t, value=loss)
    log_writer.add_scalar(tag="test/acc", step=t, value=acc)
print("Done!")
```
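The scalars logged above can be inspected in VisualDL's web UI; assuming the visualdl package imported earlier is installed with its command-line tool, something like:

```bash
visualdl --logdir ./log
```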
Figure: gradient-descent (loss) curve on the test set.