Transfer Learning with PyTorch


What is transfer learning?
Transfer learning is a machine learning technique that creates a new model from a pretrained one. Because the new model inherits previously learned features, it needs less training time and typically reaches higher accuracy.
But how much of a difference does transfer learning actually make?
To find out, we will create two models in PyTorch: one with pretrained weights and biases, and one with randomly initialized weights and biases. PyTorch provides models pretrained on the ImageNet dataset, which contains millions of images. To train our models we will use the Imagewoof dataset, which contains images of ten different dog breeds. Our goal is to compare the accuracy of the two models after training them for the same amount of time.
So let's begin.
Getting started
First, we will import the modules and libraries needed to deploy our model.

    import os
    import torch
    import torchvision
    import tarfile
    import torch.nn as nn
    import numpy as np
    import torch.nn.functional as F
    from torchvision.datasets.utils import download_url
    from torchvision.datasets import ImageFolder
    from torch.utils.data import DataLoader
    import torchvision.transforms as tt
    from torchvision.utils import make_grid
    import torchvision.models as models
    import matplotlib.pyplot as plt
    %matplotlib inline

Preparing the data
Let's start by downloading the dataset and creating a PyTorch dataset to load the data. The code below downloads the imagewoof2-160.tgz archive and extracts it.


    # Download the dataset
    dataset_url = "https://s3.amazonaws.com/fast-ai-imageclas/imagewoof2-160.tgz"
    download_url(dataset_url, '.')
    # Extract from archive
    with tarfile.open('./imagewoof2-160.tgz', 'r:gz') as tar:
        tar.extractall(path='./data')
    # Look into the data directory
    data_dir = './data/imagewoof2-160'
    print(os.listdir(data_dir))
    classes = os.listdir(data_dir + "/train")
    print(classes)

The output of the code above looks like this:

      Downloading https://s3.amazonaws.com/fast-ai-imageclas/imagewoof2-160.tgz to ./imagewoof2-160.tgz
      HBox(children=(FloatProgress(value=1.0, bar_style='info', max=1.0), HTML(value='')))
      ['train', 'val']
      ['n02115641', 'n02089973', 'n02099601', 'n02088364', 'n02086240', 'n02105641', 'n02111889', 'n02087394', 'n02093754', 'n02096294']

This shows that our data folder contains a "train" and a "val" folder. Each of these has 10 folders, one per class, holding the corresponding dog images. The images are in color, so each has 3 channels (RGB), with a minimum dimension of 160 pixels. To build a dataset from these folders we will use PyTorch's built-in dataset class, ImageFolder. Before that, we will make some important modifications when creating the PyTorch dataset:
1. Channel-wise data normalization: we normalize the image tensors by subtracting the mean and dividing by the standard deviation of each channel. As a result, the data in each channel has mean 0 and standard deviation 1. Normalizing the data prevents the values from any one channel from disproportionately affecting the loss and gradients during training.
2. Random data augmentation: we apply randomly chosen transformations while loading images from the training dataset. Specifically, we pad each image by 4 pixels, take a random 160 × 160 crop, and then flip the image horizontally with 50% probability. Because the transformations are applied randomly and dynamically each time a particular image is loaded, the model sees a slightly different image in every training epoch, which helps it generalize better.
Note: we do not apply random data augmentation to the validation set, because we want it to stay static throughout training. We only normalize and resize the validation set so that it matches the training data.
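Channel-wise normalization (step 1) is easy to verify in isolation. The `stats` tuple in the snippet below appears to reuse the widely known CIFAR-10 per-channel statistics rather than values computed from Imagewoof; computing your own would look like this minimal NumPy sketch (the random array is a stand-in for real image tensors):

```python
import numpy as np

# Hypothetical mini-batch of 8 RGB images, 160x160, values in [0, 1]
# (stand-in data; in practice you would iterate over the real training set).
rng = np.random.default_rng(0)
images = rng.random((8, 3, 160, 160))

# Per-channel mean/std over the batch and spatial dimensions
mean = images.mean(axis=(0, 2, 3))
std = images.std(axis=(0, 2, 3))

# Normalize: afterwards each channel has mean ~0 and std ~1
normalized = (images - mean[None, :, None, None]) / std[None, :, None, None]
print(normalized.mean(axis=(0, 2, 3)).round(6))  # ~[0, 0, 0]
print(normalized.std(axis=(0, 2, 3)).round(6))   # ~[1, 1, 1]
```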

    # Data transforms (normalization & data augmentation)
    stats = ((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))
    train_tfms = tt.Compose([tt.RandomCrop(160, padding=4, padding_mode='reflect'),
                             tt.RandomHorizontalFlip(),
                             tt.ToTensor(),
                             tt.Normalize(*stats, inplace=True)])
    valid_tfms = tt.Compose([tt.Resize([160, 160]), tt.ToTensor(), tt.Normalize(*stats)])

    train_ds = ImageFolder(data_dir+'/train', train_tfms)
    valid_ds = ImageFolder(data_dir+'/val', valid_tfms)
    print(f'Length of training dataset = {len(train_ds)}')
    print(f'Length of validation dataset = {len(valid_ds)}')

Creating data loaders
Now we will choose a batch size and load the datasets into data loaders.


    batch_size = 64

    # PyTorch data loaders
    train_dl = DataLoader(train_ds, batch_size, shuffle=True, num_workers=3, pin_memory=True)
    valid_dl = DataLoader(valid_ds, batch_size*2, num_workers=3, pin_memory=True)

Now let's look at some sample images from the training data loader. To do this, we define a helper function and call it.


    def show_batch(dl):
        for images, labels in dl:
            fig, ax = plt.subplots(figsize=(12, 12))
            ax.set_xticks([]); ax.set_yticks([])
            ax.imshow(make_grid(images[:64], nrow=8).permute(1, 2, 0))
            break
    show_batch(train_dl)
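Because `show_batch` plots normalized tensors, the displayed colors will look unnatural. To recover the original colors one can invert the normalization before plotting — a minimal NumPy sketch with stand-in data (the `denorm` helper is illustrative, not part of the original code):

```python
import numpy as np

# Stand-in for a normalized image tensor of shape (C, H, W)
mean = np.array([0.4914, 0.4822, 0.4465])
std = np.array([0.2023, 0.1994, 0.2010])
rng = np.random.default_rng(1)
img = rng.random((3, 160, 160))
normalized = (img - mean[:, None, None]) / std[:, None, None]

def denorm(x, mean, std):
    """Undo channel-wise normalization: x * std + mean."""
    return x * std[:, None, None] + mean[:, None, None]

restored = denorm(normalized, mean, std)
print(np.allclose(restored, img))  # True
```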

The colors look off because of the normalization. Note that normalization is also applied during inference. If you look closely, you can spot the cropping and reflection padding in some images; the horizontal flips are harder to see.

Using a GPU
To use a GPU seamlessly (if one is available), we define two helper functions (get_default_device and to_device) and a helper class DeviceDataLoader that move our model and data to the GPU as needed.


    def get_default_device():
        """Pick GPU if available, else CPU"""
        if torch.cuda.is_available():
            return torch.device('cuda')
        else:
            return torch.device('cpu')
    def to_device(data, device):
        """Move tensor(s) to chosen device"""
        if isinstance(data, (list, tuple)):
            return [to_device(x, device) for x in data]
        return data.to(device, non_blocking=True)
    class DeviceDataLoader():
        """Wrap a dataloader to move data to a device"""
        def __init__(self, dl, device):
            self.dl = dl
            self.device = device
        def __iter__(self):
            """Yield a batch of data after moving it to device"""
            for b in self.dl:
                yield to_device(b, self.device)
        def __len__(self):
            """Number of batches"""
            return len(self.dl)

Depending on where you run the notebook, your default device will be the CPU (torch.device('cpu')) or a GPU (torch.device('cuda')).

    device = get_default_device()
    device

      device(type='cuda')

We can now wrap our training and validation data loaders with DeviceDataLoader so that batches of data are automatically transferred to the GPU (if one is available).

    train_dl = DeviceDataLoader(train_dl, device)
    valid_dl = DeviceDataLoader(valid_dl, device)

Defining the pretrained ResNet model
We will use ResNet50, one of PyTorch's built-in model architectures: a convolutional neural network with residual blocks. The model's final layer outputs 1000 features, so we replace it with one that outputs 10, since our dataset has 10 classes. We set pretrained=True, so that when the model is initialized it downloads the pretrained parameters and loads them into our model.

    def accuracy(outputs, labels):
        _, preds = torch.max(outputs, dim=1)
        return torch.tensor(torch.sum(preds == labels).item() / len(preds))
    class ImageClassificationBase(nn.Module):
        def training_step(self, batch):
            images, labels = batch
            out = self(images)                  # Generate predictions
            loss = F.cross_entropy(out, labels) # Calculate loss
            return loss
        def validation_step(self, batch):
            images, labels = batch
            out = self(images)                    # Generate predictions
            loss = F.cross_entropy(out, labels)   # Calculate loss
            acc = accuracy(out, labels)           # Calculate accuracy
            return {'val_loss': loss.detach(), 'val_acc': acc}
        def validation_epoch_end(self, outputs):
            batch_losses = [x['val_loss'] for x in outputs]
            epoch_loss = torch.stack(batch_losses).mean()   # Combine losses
            batch_accs = [x['val_acc'] for x in outputs]
            epoch_acc = torch.stack(batch_accs).mean()      # Combine accuracies
            return {'val_loss': epoch_loss.item(), 'val_acc': epoch_acc.item()}
        def epoch_end(self, epoch, result):
            print("Epoch [{}], last_lr: {:.5f}, train_loss: {:.4f}, val_loss: {:.4f}, val_acc: {:.4f}".format(
                epoch, result['lrs'][-1], result['train_loss'], result['val_loss'], result['val_acc']))
    class Resnet50(ImageClassificationBase):
        def __init__(self):
            super().__init__()
            # Use a pretrained model
            self.network = models.resnet50(pretrained=True)
            # Replace last layer
            num_ftrs = self.network.fc.in_features
            self.network.fc = nn.Linear(num_ftrs, 10)
        def forward(self, xb):
            # Return raw logits; F.cross_entropy applies log-softmax internally
            return self.network(xb)
        def freeze(self):
            # Freeze the residual layers
            for param in self.network.parameters():
                param.requires_grad = False
            for param in self.network.fc.parameters():
                param.requires_grad = True
        def unfreeze(self):
            # Unfreeze all layers
            for param in self.network.parameters():
                param.requires_grad = True

    model = to_device(Resnet50(), device)
    model
                    Resnet50(
                      (network): ResNet(
                        (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
                        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                        (relu): ReLU(inplace=True)
                        (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
                        (layer1): Sequential(
                          (0): Bottleneck(
                            (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                            (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (relu): ReLU(inplace=True)
                            (downsample): Sequential(
                              (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                              (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            )
                          )
                          (1): Bottleneck(
                            (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                            (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (relu): ReLU(inplace=True)
                          )
                          (2): Bottleneck(
                            (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                            (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (relu): ReLU(inplace=True)
                          )
                        )
                        (layer2): Sequential(
                          (0): Bottleneck(
                            (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
                            (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (relu): ReLU(inplace=True)
                            (downsample): Sequential(
                              (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
                              (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            )
                          )
                          (1): Bottleneck(
                            (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                            (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (relu): ReLU(inplace=True)
                          )
                          (2): Bottleneck(
                            (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                            (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (relu): ReLU(inplace=True)
                          )
                          (3): Bottleneck(
                            (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                            (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (relu): ReLU(inplace=True)
                          )
                        )
                        (layer3): Sequential(
                          (0): Bottleneck(
                            (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
                            (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (relu): ReLU(inplace=True)
                            (downsample): Sequential(
                              (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
                              (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            )
                          )
                          (1): Bottleneck(
                            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                            (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (relu): ReLU(inplace=True)
                          )
                          (2): Bottleneck(
                            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                            (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (relu): ReLU(inplace=True)
                          )
                          (3): Bottleneck(
                            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                            (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (relu): ReLU(inplace=True)
                          )
                          (4): Bottleneck(
                            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                            (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (relu): ReLU(inplace=True)
                          )
                          (5): Bottleneck(
                            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                            (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (relu): ReLU(inplace=True)
                          )
                        )
                        (layer4): Sequential(
                          (0): Bottleneck(
                            (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
                            (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (relu): ReLU(inplace=True)
                            (downsample): Sequential(
                              (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
                              (1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            )
                          )
                          (1): Bottleneck(
                            (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                            (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (relu): ReLU(inplace=True)
                          )
                          (2): Bottleneck(
                            (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                            (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
                            (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                            (relu): ReLU(inplace=True)
                          )
                        )
                        (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
                        (fc): Linear(in_features=2048, out_features=10, bias=True)
                      )
                    )

Training functions for the model


Before we train the model, we will make some small but important improvements to our fit function:
Learning rate scheduling: instead of using a fixed learning rate, we use a learning rate scheduler that changes the learning rate after every batch of training. There are many strategies for varying the learning rate during training; the one we will use is called the "one cycle" policy.
One-cycle policy: start with a low learning rate, increase it batch by batch to a high value for roughly the first 30% of the steps, then gradually decrease it to a very low value for the remaining time.
Weight decay: we also use weight decay, another regularization technique that prevents the weights from becoming too large by adding an extra term to the loss function.
Gradient clipping: apart from regularizing the weights and outputs, it also helps to limit the gradient values to a small range, so that large gradients cannot cause harmful changes to the parameters. This simple yet effective technique is called gradient clipping.
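The "low, then high, then very low" shape of the one-cycle policy can be sketched in a few lines of plain Python. This is an illustrative linear warm-up/cool-down approximation, not PyTorch's exact OneCycleLR implementation (which anneals with a cosine curve by default); the `pct_start` and `div` parameters are hypothetical names for this sketch:

```python
def one_cycle_lr(step, total_steps, max_lr, pct_start=0.3, div=25.0):
    """Illustrative one-cycle schedule: warm up linearly to max_lr for
    the first pct_start of the steps, then anneal linearly toward zero."""
    warmup_steps = int(total_steps * pct_start)
    start_lr = max_lr / div
    if step < warmup_steps:
        # Rising phase: start_lr -> max_lr
        return start_lr + (max_lr - start_lr) * step / warmup_steps
    # Falling phase: max_lr -> ~0
    frac = (step - warmup_steps) / (total_steps - warmup_steps)
    return max_lr * (1 - frac)

lrs = [one_cycle_lr(s, 100, 0.01) for s in range(100)]
# The schedule starts low, peaks ~30% of the way through, then decays
print(round(lrs[0], 5), round(max(lrs), 5), round(lrs[-1], 6))
```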
Let's define a fit_one_cycle function that incorporates these changes. We will also record the learning rate used for every batch.

    @torch.no_grad()
    def evaluate(model, val_loader):
        model.eval()
        outputs = [model.validation_step(batch) for batch in val_loader]
        return model.validation_epoch_end(outputs)
    def get_lr(optimizer):
        for param_group in optimizer.param_groups:
            return param_group['lr']
    def fit_one_cycle(epochs, max_lr, model, train_loader, val_loader,
                      weight_decay=0, grad_clip=None, opt_func=torch.optim.SGD):
        torch.cuda.empty_cache()
        history = []
        # Set up custom optimizer with weight decay
        optimizer = opt_func(model.parameters(), max_lr, weight_decay=weight_decay)
        # Set up one-cycle learning rate scheduler
        sched = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr, epochs=epochs,
                                                    steps_per_epoch=len(train_loader))
        for epoch in range(epochs):
            # Training phase
            model.train()
            train_losses = []
            lrs = []
            for batch in train_loader:
                loss = model.training_step(batch)
                train_losses.append(loss)
                loss.backward()
                # Gradient clipping
                if grad_clip:
                    nn.utils.clip_grad_value_(model.parameters(), grad_clip)
                optimizer.step()
                optimizer.zero_grad()
                # Record & update learning rate
                lrs.append(get_lr(optimizer))
                sched.step()
            # Validation phase
            result = evaluate(model, val_loader)
            result['train_loss'] = torch.stack(train_losses).mean().item()
            result['lrs'] = lrs
            model.epoch_end(epoch, result)
            history.append(result)
        return history

    history = [evaluate(model, valid_dl)]
    history

      [{'val_loss': 2.296131134033203, 'val_acc': 0.12421280145645142}]

Training


First, we freeze the ResNet layers and train for a few epochs; this trains only the last layer to start classifying the images.


    model.freeze()
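To sanity-check that freezing works as intended, one can count the trainable parameters before and after. A minimal sketch using a tiny stand-in network (not the article's Resnet50, so it runs instantly and needs no downloads):

```python
import torch.nn as nn

# Tiny stand-in network: two linear layers, the last acting as the "head"
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 10))
for p in net.parameters():
    p.requires_grad = False          # freeze everything
for p in net[-1].parameters():
    p.requires_grad = True           # unfreeze only the head

trainable = sum(p.numel() for p in net.parameters() if p.requires_grad)
total = sum(p.numel() for p in net.parameters())
print(trainable, total)  # only the head's 16*10 + 10 = 170 params stay trainable
```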

We are now ready to train our model.

    epochs = 10
    max_lr = 0.0001
    grad_clip = 0.1
    weight_decay = 1e-4
    opt_func = torch.optim.Adam

    %%time
    history += fit_one_cycle(epochs, max_lr, model, train_dl, valid_dl,
                             grad_clip=grad_clip,
                             weight_decay=weight_decay,
                             opt_func=opt_func)

After 6 minutes and 17 seconds of training, our model reached 90% accuracy. We can plot the accuracy and loss to see how the model evolved over the epochs.


    def plot_accuracies(history):
        accuracies = [x['val_acc'] for x in history]
        plt.plot(accuracies, '-x')
        plt.xlabel('epoch')
        plt.ylabel('accuracy')
        plt.title('Accuracy vs. No. of epochs');
    plot_accuracies(history)


    def plot_losses(history):
        train_losses = [x.get('train_loss') for x in history]
        val_losses = [x['val_loss'] for x in history]
        plt.plot(train_losses, '-bx')
        plt.plot(val_losses, '-rx')
        plt.xlabel('epoch')
        plt.ylabel('loss')
        plt.legend(['Training', 'Validation'])
        plt.title('Loss vs. No. of epochs');
    plot_losses(history)

Now let's see how much of a difference it makes if we don't use a pretrained model.


Defining a ResNet model with randomly initialized parameters
We will again use PyTorch's built-in ResNet50 architecture and reduce its output features from 1000 to 10, but this time we set pretrained=False.


    class Resnet50_np(ImageClassificationBase):
        def __init__(self):
            super().__init__()
            # Use the same architecture, without pretrained weights
            self.network = models.resnet50(pretrained=False)
            # Replace last layer
            num_ftrs = self.network.fc.in_features
            self.network.fc = nn.Linear(num_ftrs, 10)
        def forward(self, xb):
            # Return raw logits; F.cross_entropy applies log-softmax internally
            return self.network(xb)

    model = to_device(Resnet50_np(), device)
    model
                                Resnet50_np(
                                  (network): ResNet(
                                    (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
                                    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                    (relu): ReLU(inplace=True)
                                    (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
                                    (layer1): Sequential(
                                      (0): Bottleneck(
                                        (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                                        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (relu): ReLU(inplace=True)
                                        (downsample): Sequential(
                                          (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        )
                                      )
                                      (1): Bottleneck(
                                        (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                                        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (relu): ReLU(inplace=True)
                                      )
                                      (2): Bottleneck(
                                        (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                                        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (relu): ReLU(inplace=True)
                                      )
                                    )
                                    (layer2): Sequential(
                                      (0): Bottleneck(
                                        (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
                                        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (relu): ReLU(inplace=True)
                                        (downsample): Sequential(
                                          (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
                                          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        )
                                      )
                                      (1): Bottleneck(
                                        (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                                        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (relu): ReLU(inplace=True)
                                      )
                                      (2): Bottleneck(
                                        (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                                        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (relu): ReLU(inplace=True)
                                      )
                                      (3): Bottleneck(
                                        (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                                        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (relu): ReLU(inplace=True)
                                      )
                                    )
                                    (layer3): Sequential(
                                      (0): Bottleneck(
                                        (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
                                        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (relu): ReLU(inplace=True)
                                        (downsample): Sequential(
                                          (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
                                          (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        )
                                      )
                                      (1): Bottleneck(
                                        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                                        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (relu): ReLU(inplace=True)
                                      )
                                      (2): Bottleneck(
                                        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                                        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (relu): ReLU(inplace=True)
                                      )
                                      (3): Bottleneck(
                                        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                                        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (relu): ReLU(inplace=True)
                                      )
                                      (4): Bottleneck(
                                        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                                        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (relu): ReLU(inplace=True)
                                      )
                                      (5): Bottleneck(
                                        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                                        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (relu): ReLU(inplace=True)
                                      )
                                    )
                                    (layer4): Sequential(
                                      (0): Bottleneck(
                                        (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
                                        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (relu): ReLU(inplace=True)
                                        (downsample): Sequential(
                                          (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
                                          (1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        )
                                      )
                                      (1): Bottleneck(
                                        (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                                        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (relu): ReLU(inplace=True)
                                      )
                                      (2): Bottleneck(
                                        (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                                        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
                                        (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
                                        (relu): ReLU(inplace=True)
                                      )
                                    )
                                    (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
                                    (fc): Linear(in_features=2048, out_features=10, bias=True)
                                  )
                                )


                                history = [evaluate(model, valid_dl)]
                                history

                                [{'val_loss': 2.390498638153076, 'val_acc': 0.1010584682226181}]
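The ~10% baseline accuracy is what we expect from an untrained 10-class model. The `evaluate` helper is defined earlier in the article; a plausible sketch of it (names and exact structure here are assumptions) averages cross-entropy loss and accuracy over the validation batches:

```python
import torch
import torch.nn.functional as F

def accuracy(outputs, labels):
    # Fraction of predictions whose argmax matches the label.
    _, preds = torch.max(outputs, dim=1)
    return torch.tensor(torch.sum(preds == labels).item() / len(preds))

@torch.no_grad()
def evaluate(model, valid_dl):
    model.eval()
    losses, accs = [], []
    for images, labels in valid_dl:
        out = model(images)
        losses.append(F.cross_entropy(out, labels))
        accs.append(accuracy(out, labels))
    return {'val_loss': torch.stack(losses).mean().item(),
            'val_acc': torch.stack(accs).mean().item()}
```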

Training our new model


                                epochs = 10
                                max_lr = 0.0001
                                grad_clip = 0.1
                                weight_decay = 1e-4
                                opt_func = torch.optim.Adam

                                  %%time
                                  history += fit_one_cycle(epochs, max_lr, model, train_dl, valid_dl, 
                                                               grad_clip=grad_clip, 
                                                               weight_decay=weight_decay, 
                                                               opt_func=opt_func)
                                    Epoch [0], last_lr: 0.00003, train_loss: 2.2902, val_loss: 2.2740, val_acc: 0.1371
                                    Epoch [1], last_lr: 0.00008, train_loss: 2.2404, val_loss: 2.2038, val_acc: 0.1733
                                    Epoch [2], last_lr: 0.00010, train_loss: 2.1558, val_loss: 2.1367, val_acc: 0.2148
                                    Epoch [3], last_lr: 0.00010, train_loss: 2.1181, val_loss: 2.1076, val_acc: 0.2314
                                    Epoch [4], last_lr: 0.00008, train_loss: 2.0963, val_loss: 2.1132, val_acc: 0.2310
                                    Epoch [5], last_lr: 0.00006, train_loss: 2.0660, val_loss: 2.0577, val_acc: 0.2906
                                    Epoch [6], last_lr: 0.00004, train_loss: 2.0395, val_loss: 2.0279, val_acc: 0.3014
                                    Epoch [7], last_lr: 0.00002, train_loss: 2.0136, val_loss: 2.0171, val_acc: 0.3147
                                    Epoch [8], last_lr: 0.00000, train_loss: 1.9977, val_loss: 2.0056, val_acc: 0.3239
                                    Epoch [9], last_lr: 0.00000, train_loss: 1.9896, val_loss: 2.0015, val_acc: 0.3328
                                    CPU times: user 2min 42s, sys: 52.8 s, total: 3min 35s
                                    Wall time: 6min 16s
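Notice the `last_lr` column: it warms up to `max_lr` and then anneals back towards zero. That is the one-cycle policy implemented by the article's `fit_one_cycle` helper (defined earlier). A minimal sketch of that schedule, with the article's gradient clipping and weight decay settings (the stand-in model and empty inner loop are assumptions, not the article's training loop):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in model, just to drive the schedule
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)

# OneCycleLR raises the LR to max_lr and then decays it towards zero
# over epochs * steps_per_epoch optimizer steps.
sched = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-4, epochs=10, steps_per_epoch=50)

lrs = []
for epoch in range(10):
    for step in range(50):
        # ...forward pass and loss.backward() would go here...
        nn.utils.clip_grad_value_(model.parameters(), 0.1)  # grad_clip=0.1
        optimizer.step()
        sched.step()
    lrs.append(sched.get_last_lr()[0])
# lrs rises towards 1e-4 in the first few epochs, then decays
# towards zero, matching the printed last_lr values.
```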

With the same number of epochs and the same training time, our new model reaches only about 33% accuracy. We can plot the accuracy and loss to see how the model evolves over each epoch.


                                    plot_accuracies(history)


                                    plot_losses(history)
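`plot_accuracies` and `plot_losses` are defined earlier in the article; a plausible sketch is that they read the per-epoch metrics out of `history` (a list of dicts like the one printed above) and plot them with matplotlib (the exact labels and line styles here are assumptions):

```python
import matplotlib.pyplot as plt

def plot_accuracies(history):
    # Validation accuracy per epoch.
    accs = [x['val_acc'] for x in history]
    plt.plot(accs, '-x')
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.title('Accuracy vs. No. of epochs')

def plot_losses(history):
    # The initial evaluate() entry has no train_loss, so filter for it.
    train = [x['train_loss'] for x in history if 'train_loss' in x]
    val = [x['val_loss'] for x in history]
    plt.plot(train, '-bx')
    plt.plot(val, '-rx')
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.legend(['Training', 'Validation'])
    plt.title('Loss vs. No. of epochs')
```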


Conclusion


Comparing the accuracy of the two models shows that transfer learning significantly boosted the results of the first model. Within the same training time, our pretrained model reached 90% accuracy, while the model with randomly initialized weights and biases reached only a little over a third of that. As training continues the accuracy gains diminish, so the randomly initialized model would need far longer to reach 90% accuracy. This shows that, used correctly, transfer learning can make training a new model very efficient.
