Flower Recognition Model Inference Deployment with PaddlePaddle and OpenVINO (Part 1)

Summary: Flower recognition model inference deployment with PaddlePaddle and OpenVINO (Part 1).

I. Flower Recognition


1. Dataset Overview


Oxford 102 Flowers Dataset is a flower image dataset used mainly for image classification. It covers 102 flower categories, with 40 to 258 images per category.

The dataset was released in 2008 by the Department of Engineering Science at the University of Oxford; the accompanying paper is "Automated flower classification over a large number of classes".


Three .txt files for training and evaluation are already provided in the dataset folder: train.txt (training set, 1020 images), valid.txt (validation set, 1020 images), and test.txt (test set, 6149 images). Each line has the format: image relative path, then the image's label_id, separated by a single space.
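As a quick sanity check of this format, the snippet below (a minimal sketch, assuming the dataset has been extracted to the path used later in this notebook) parses train.txt into (image path, label) pairs:

import os

data_root = "/home/aistudio/data/oxford-102-flowers/oxford-102-flowers/"

samples = []
with open(os.path.join(data_root, "train.txt")) as f:
    for line in f:
        # each line: "<relative image path> <label_id>"
        rel_path, label_id = line.strip().split(" ")
        samples.append((os.path.join(data_root, rel_path), int(label_id)))

print(len(samples))   # expected: 1020 training images
print(samples[0])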


2. PaddleClas Overview


PaddleClas is now at release 2.3, which is a major departure from earlier versions, so it is worth getting reacquainted with the project.

Repository: gitee.com/paddlepaddl…

The configs have been moved into the ppcls directory, and deployment code now lives in a separate deploy directory.




3. OpenVINO 2022.1 Deployment Support


OpenVINO™ is an open-source AI inference and deployment toolkit. It supports multiple model formats and works well with PaddlePaddle: Paddle models can currently be loaded directly, with no conversion required.
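For reference, this is roughly what "no conversion required" looks like with the OpenVINO 2022.1 Python API: Core.read_model() can load a Paddle inference model (.pdmodel) directly. A minimal sketch; the model path is a placeholder for the inference model exported later in this series:

import numpy as np
from openvino.runtime import Core

core = Core()
# read the Paddle inference model directly, no intermediate IR conversion
model = core.read_model("inference/inference.pdmodel")
compiled_model = core.compile_model(model, "CPU")

# run a dummy 1x3x224x224 input through the network
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled_model([dummy])[compiled_model.output(0)]
print(result.shape)   # expected: (1, 102) class scores for the flower model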



4. OpenVINO 2022.1 Workflow



# Extract the dataset
!tar -xvf  data/data19852/flowers102.tar -C ./data/ >log.log


II. Preparing PaddleClas


# Clone the latest version
!git clone https://gitee.com/paddlepaddle/PaddleClas/ --depth=1
%cd PaddleClas/
!pip install -r requirements.txt >log.log
/home/aistudio/PaddleClas


III. Model Training


1. Modify imagenet_dataset.py


Path: ppcls/data/dataloader/imagenet_dataset.py

The change is needed because the path handling here is buggy. Comment out:

  • assert os.path.exists(self._cls_path)
  • assert os.path.exists(self._img_root)

and add:

  • self._cls_path=os.path.join(self._img_root,self._cls_path)

Otherwise a relative path cannot be used for cls_label_path.

class ImageNetDataset(CommonDataset):
    def _load_anno(self, seed=None):
        # The original code asserts on these paths; when cls_label_path is a
        # relative path the assert fails, so comment the asserts out and
        # resolve the label file against image_root instead:
        # assert os.path.exists(self._cls_path)
        # assert os.path.exists(self._img_root)
        self._cls_path=os.path.join(self._img_root,self._cls_path)
        print('self._cls_path',self._cls_path)
        self.images = []
        self.labels = []
        with open(self._cls_path) as fd:
            lines = fd.readlines()
            if seed is not None:
                np.random.RandomState(seed).shuffle(lines)
            for l in lines:
                l = l.strip().split(" ")
                self.images.append(os.path.join(self._img_root, l[0]))
                self.labels.append(int(l[1]))
                assert os.path.exists(self.images[-1])
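With this patch in place, cls_label_path can be passed relative to image_root. A minimal usage sketch, assuming the standard ImageNetDataset constructor signature (image_root, cls_label_path, transform_ops):

from ppcls.data.dataloader.imagenet_dataset import ImageNetDataset

dataset = ImageNetDataset(
    image_root="/home/aistudio/data/oxford-102-flowers/oxford-102-flowers/",
    cls_label_path="train.txt")  # resolved to <image_root>/train.txt by the patch

print(len(dataset.images), dataset.labels[:5])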


2. Modify the Configuration File


# global configs
Global:
  checkpoints: null
  pretrained_model: null
  output_dir: ./output/
  # device: gpu or cpu
  device: gpu
  # number of classes
  class_num: 102
  # checkpoint save interval (epochs)
  save_interval: 5
  # whether to run eval during training
  eval_during_train: True
  # eval interval (epochs)
  eval_interval: 5
  # number of training epochs
  epochs: 20
  # print log every N batch steps
  print_batch_step: 10
  # whether to use VisualDL
  use_visualdl: False
  # used for static mode and model export
  image_shape: [3, 224, 224]
  # inference model save path
  save_inference_dir: ./inference
# model architecture
Arch:
  name: ResNet50_vd
# loss function config for training/eval process
Loss:
  Train:
    - CELoss:
        weight: 1.0
  Eval:
    - CELoss:
        weight: 1.0
Optimizer:
  name: Momentum
  momentum: 0.9
  lr:
    name: Cosine
    learning_rate: 0.0125
    warmup_epoch: 5
  regularizer:
    name: 'L2'
    coeff: 0.00001
# data loader for train and eval
DataLoader:
  Train:
    dataset:
      name: ImageNetDataset
      image_root: /home/aistudio/data/oxford-102-flowers/oxford-102-flowers/
      cls_label_path: train.txt
      transform_ops:
        - DecodeImage:
            to_rgb: True
            channel_first: False
        - RandCropImage:
            size: 224
        - RandFlipImage:
            flip_code: 1
        - NormalizeImage:
            scale: 1.0/255.0
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
            order: ''
    sampler:
      name: DistributedBatchSampler
      batch_size: 256
      drop_last: False
      shuffle: True
    loader:
      num_workers: 4
      use_shared_memory: True
  Eval:
    dataset: 
      name: ImageNetDataset
      image_root: /home/aistudio/data/oxford-102-flowers/oxford-102-flowers/
      cls_label_path: valid.txt
      transform_ops:
        - DecodeImage:
            to_rgb: True
            channel_first: False
        - ResizeImage:
            resize_short: 256
        - CropImage:
            size: 224
        - NormalizeImage:
            scale: 1.0/255.0
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
            order: ''
    sampler:
      name: DistributedBatchSampler
      batch_size: 256
      drop_last: False
      shuffle: False
    loader:
      num_workers: 4
      use_shared_memory: True
Infer:
  infer_imgs: /home/aistudio/data/oxford-102-flowers/oxford-102-flowers/
  batch_size: 10
  transforms:
    - DecodeImage:
        to_rgb: True
        channel_first: False
    - ResizeImage:
        resize_short: 256
    - CropImage:
        size: 224
    - NormalizeImage:
        scale: 1.0/255.0
        mean: [0.485, 0.456, 0.406]
        std: [0.229, 0.224, 0.225]
        order: ''
    - ToCHWImage:
  PostProcess:
    name: Topk
    topk: 5
    class_id_map_file: /home/aistudio/data/oxford-102-flowers/oxford-102-flowers/jpg/image_00030.jpg
Metric:
  Train:
    - TopkAcc:
        topk: [1, 5]
  Eval:
    - TopkAcc:
        topk: [1, 5]
  • The -c flag specifies the path of the training configuration file; the detailed training hyperparameters can be inspected in that yaml file.


  • The Global.device parameter in the yaml file selects the training device. It is set to gpu here; change it to cpu to train on the CPU instead.


  • The epochs parameter in the yaml file is set to 20, meaning the whole dataset is iterated for 20 epochs. Training takes roughly 20 minutes (times vary with the hardware), so the model is not yet fully trained. To improve accuracy, increase this value, e.g. to 40; training time grows accordingly. A command-line override example is shown below.
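These settings can also be overridden from the command line with the -o flag (used again in the training step below) instead of editing the yaml file. A sketch using the config path from this notebook:

# train on CPU for 40 epochs by overriding Global.device and Global.epochs
!python tools/train.py -c ./ppcls/configs/quick_start/ResNet50_vd.yaml -o Arch.pretrained=True -o Global.device=cpu -o Global.epochs=40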


3. Configuration Reference


3.1 Global Configuration (Global)


| Parameter | Meaning | Default | Options |
| --- | --- | --- | --- |
| checkpoints | Checkpoint path used to resume training | null | str |
| pretrained_model | Path of a pretrained model | null | str |
| output_dir | Directory where models are saved | "./output/" | str |
| save_interval | Save the model every N epochs | 1 | int |
| eval_during_train | Whether to evaluate during training | True | bool |
| eval_interval | Evaluate the model every N epochs | 1 | int |
| epochs | Total number of training epochs | | int |
| print_batch_step | Print a log line every N mini-batches | 10 | int |
| use_visualdl | Whether to use VisualDL to visualize training | False | bool |
| image_shape | Image shape | [3, 224, 224] | list, shape: (3,) |
| save_inference_dir | Directory where the inference model is saved | "./inference" | str |
| eval_mode | Evaluation mode | "classification" | "retrieval" |


3.2 Architecture (Arch)


| Parameter | Meaning | Default | Options |
| --- | --- | --- | --- |
| name | Model architecture name | ResNet50 | model architectures provided by PaddleClas |
| class_num | Number of classes | 1000 | int |
| pretrained | Pretrained model | False | bool, str |


3.3 Loss Function (Loss)


| Parameter | Meaning | Default | Options |
| --- | --- | --- | --- |
| CELoss | Cross-entropy loss | —— | —— |
| CELoss.weight | Weight of CELoss within the total loss | 1.0 | float |
| CELoss.epsilon | Epsilon value for label smoothing in CELoss | 0.1 | float, between 0 and 1 |
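When label smoothing is enabled, epsilon acts as the smoothing factor of the standard label-smoothing formulation. A minimal sketch of the idea (not PaddleClas's exact implementation):

import numpy as np

def smoothed_target(label_id, class_num=102, epsilon=0.1):
    # soft target: epsilon is spread uniformly over all classes,
    # the remaining (1 - epsilon) mass goes to the true class
    target = np.full(class_num, epsilon / class_num)
    target[label_id] += 1.0 - epsilon
    return target

print(smoothed_target(3)[:5])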


3.4 Optimizer (Optimizer)


| Parameter | Meaning | Default | Options |
| --- | --- | --- | --- |
| name | Optimizer name | "Momentum" | "RmsProp" and other optimizers |
| momentum | Momentum value | 0.9 | float |
| lr.name | Learning rate schedule | "Cosine" | "Linear", "Piecewise" and other schedules |
| lr.learning_rate | Initial learning rate | 0.1 | float |
| lr.warmup_epoch | Number of warmup epochs | 0 | int, e.g. 5 |
| regularizer.name | Regularization method | "L2" | ["L1", "L2"] |
| regularizer.coeff | Regularization coefficient | 0.00007 | float |
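To make the Cosine schedule with warmup concrete, here is a rough per-epoch sketch using the values from the training config above (base learning rate 0.0125, 5 warmup epochs, 20 epochs in total). PaddleClas applies the schedule per step rather than per epoch, so the logged learning rates differ slightly:

import math

def cosine_warmup_lr(epoch, base_lr=0.0125, warmup_epoch=5, epochs=20):
    # linear warmup for the first warmup_epoch epochs, then cosine decay to 0
    if epoch < warmup_epoch:
        return base_lr * epoch / warmup_epoch
    progress = (epoch - warmup_epoch) / (epochs - warmup_epoch)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

for e in range(0, 21, 5):
    print(e, round(cosine_warmup_lr(e), 5))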


4. Training


!pwd
!cp ~/ResNet50_vd.yaml  ./ppcls/configs/quick_start/ResNet50_vd.yaml 
!cp ~/imagenet_dataset.py ./ppcls/data/dataloader/imagenet_dataset.py
/home/aistudio/PaddleClas
# GPU selection
!export CUDA_VISIBLE_DEVICES=0
# -o Arch.pretrained=True loads a pretrained model; when set to True the pretrained weights are downloaded automatically
!python tools/train.py -c ./ppcls/configs/quick_start/ResNet50_vd.yaml -o Arch.pretrained=True
A new filed (pretrained) detected!
[2022/04/04 17:51:03] root INFO: 
===========================================================
==        PaddleClas is powered by PaddlePaddle !        ==
===========================================================
==                                                       ==
==   For more info please go to the following website.   ==
==                                                       ==
==       https://github.com/PaddlePaddle/PaddleClas      ==
===========================================================
[2022/04/04 17:51:03] root INFO: Arch : 
[2022/04/04 17:51:03] root INFO:     name : ResNet50_vd
[2022/04/04 17:51:03] root INFO:     pretrained : True
[2022/04/04 17:51:03] root INFO: DataLoader : 
[2022/04/04 17:51:03] root INFO:     Eval : 
[2022/04/04 17:51:03] root INFO:         dataset : 
[2022/04/04 17:51:03] root INFO:             cls_label_path : valid.txt
[2022/04/04 17:51:03] root INFO:             image_root : /home/aistudio/data/oxford-102-flowers/oxford-102-flowers/
[2022/04/04 17:51:03] root INFO:             name : ImageNetDataset
[2022/04/04 17:51:03] root INFO:             transform_ops : 
[2022/04/04 17:51:03] root INFO:                 DecodeImage : 
[2022/04/04 17:51:03] root INFO:                     channel_first : False
[2022/04/04 17:51:03] root INFO:                     to_rgb : True
[2022/04/04 17:51:03] root INFO:                 ResizeImage : 
[2022/04/04 17:51:03] root INFO:                     resize_short : 256
[2022/04/04 17:51:03] root INFO:                 CropImage : 
[2022/04/04 17:51:03] root INFO:                     size : 224
[2022/04/04 17:51:03] root INFO:                 NormalizeImage : 
[2022/04/04 17:51:03] root INFO:                     mean : [0.485, 0.456, 0.406]
[2022/04/04 17:51:03] root INFO:                     order : 
[2022/04/04 17:51:03] root INFO:                     scale : 1.0/255.0
[2022/04/04 17:51:03] root INFO:                     std : [0.229, 0.224, 0.225]
[2022/04/04 17:51:03] root INFO:         loader : 
[2022/04/04 17:51:03] root INFO:             num_workers : 4
[2022/04/04 17:51:03] root INFO:             use_shared_memory : True
[2022/04/04 17:51:03] root INFO:         sampler : 
[2022/04/04 17:51:03] root INFO:             batch_size : 128
[2022/04/04 17:51:03] root INFO:             drop_last : False
[2022/04/04 17:51:03] root INFO:             name : DistributedBatchSampler
[2022/04/04 17:51:03] root INFO:             shuffle : False
[2022/04/04 17:51:03] root INFO:     Train : 
[2022/04/04 17:51:03] root INFO:         dataset : 
[2022/04/04 17:51:03] root INFO:             cls_label_path : train.txt
[2022/04/04 17:51:03] root INFO:             image_root : /home/aistudio/data/oxford-102-flowers/oxford-102-flowers/
[2022/04/04 17:51:03] root INFO:             name : ImageNetDataset
[2022/04/04 17:51:03] root INFO:             transform_ops : 
[2022/04/04 17:51:03] root INFO:                 DecodeImage : 
[2022/04/04 17:51:03] root INFO:                     channel_first : False
[2022/04/04 17:51:03] root INFO:                     to_rgb : True
[2022/04/04 17:51:03] root INFO:                 RandCropImage : 
[2022/04/04 17:51:03] root INFO:                     size : 224
[2022/04/04 17:51:03] root INFO:                 RandFlipImage : 
[2022/04/04 17:51:03] root INFO:                     flip_code : 1
[2022/04/04 17:51:03] root INFO:                 NormalizeImage : 
[2022/04/04 17:51:03] root INFO:                     mean : [0.485, 0.456, 0.406]
[2022/04/04 17:51:03] root INFO:                     order : 
[2022/04/04 17:51:03] root INFO:                     scale : 1.0/255.0
[2022/04/04 17:51:03] root INFO:                     std : [0.229, 0.224, 0.225]
[2022/04/04 17:51:03] root INFO:         loader : 
[2022/04/04 17:51:03] root INFO:             num_workers : 4
[2022/04/04 17:51:03] root INFO:             use_shared_memory : True
[2022/04/04 17:51:03] root INFO:         sampler : 
[2022/04/04 17:51:03] root INFO:             batch_size : 128
[2022/04/04 17:51:03] root INFO:             drop_last : False
[2022/04/04 17:51:03] root INFO:             name : DistributedBatchSampler
[2022/04/04 17:51:03] root INFO:             shuffle : True
[2022/04/04 17:51:03] root INFO: Global : 
[2022/04/04 17:51:03] root INFO:     checkpoints : None
[2022/04/04 17:51:03] root INFO:     class_num : 102
[2022/04/04 17:51:03] root INFO:     device : gpu
[2022/04/04 17:51:03] root INFO:     epochs : 20
[2022/04/04 17:51:03] root INFO:     eval_during_train : True
[2022/04/04 17:51:03] root INFO:     eval_interval : 5
[2022/04/04 17:51:03] root INFO:     image_shape : [3, 224, 224]
[2022/04/04 17:51:03] root INFO:     output_dir : ./output/
[2022/04/04 17:51:03] root INFO:     pretrained_model : None
[2022/04/04 17:51:03] root INFO:     print_batch_step : 10
[2022/04/04 17:51:03] root INFO:     save_inference_dir : ./inference
[2022/04/04 17:51:03] root INFO:     save_interval : 5
[2022/04/04 17:51:03] root INFO:     use_visualdl : False
[2022/04/04 17:51:03] root INFO: Infer : 
[2022/04/04 17:51:03] root INFO:     PostProcess : 
[2022/04/04 17:51:03] root INFO:         class_id_map_file : /home/aistudio/data/oxford-102-flowers/oxford-102-flowers/jpg/image_00030.jpg
[2022/04/04 17:51:03] root INFO:         name : Topk
[2022/04/04 17:51:03] root INFO:         topk : 5
[2022/04/04 17:51:03] root INFO:     batch_size : 10
[2022/04/04 17:51:03] root INFO:     infer_imgs : /home/aistudio/data/oxford-102-flowers/oxford-102-flowers/
[2022/04/04 17:51:03] root INFO:     transforms : 
[2022/04/04 17:51:03] root INFO:         DecodeImage : 
[2022/04/04 17:51:03] root INFO:             channel_first : False
[2022/04/04 17:51:03] root INFO:             to_rgb : True
[2022/04/04 17:51:03] root INFO:         ResizeImage : 
[2022/04/04 17:51:03] root INFO:             resize_short : 256
[2022/04/04 17:51:03] root INFO:         CropImage : 
[2022/04/04 17:51:03] root INFO:             size : 224
[2022/04/04 17:51:03] root INFO:         NormalizeImage : 
[2022/04/04 17:51:03] root INFO:             mean : [0.485, 0.456, 0.406]
[2022/04/04 17:51:03] root INFO:             order : 
[2022/04/04 17:51:03] root INFO:             scale : 1.0/255.0
[2022/04/04 17:51:03] root INFO:             std : [0.229, 0.224, 0.225]
[2022/04/04 17:51:03] root INFO:         ToCHWImage : None
[2022/04/04 17:51:03] root INFO: Loss : 
[2022/04/04 17:51:03] root INFO:     Eval : 
[2022/04/04 17:51:03] root INFO:         CELoss : 
[2022/04/04 17:51:03] root INFO:             weight : 1.0
[2022/04/04 17:51:03] root INFO:     Train : 
[2022/04/04 17:51:03] root INFO:         CELoss : 
[2022/04/04 17:51:03] root INFO:             weight : 1.0
[2022/04/04 17:51:03] root INFO: Metric : 
[2022/04/04 17:51:03] root INFO:     Eval : 
[2022/04/04 17:51:03] root INFO:         TopkAcc : 
[2022/04/04 17:51:03] root INFO:             topk : [1, 5]
[2022/04/04 17:51:03] root INFO:     Train : 
[2022/04/04 17:51:03] root INFO:         TopkAcc : 
[2022/04/04 17:51:03] root INFO:             topk : [1, 5]
[2022/04/04 17:51:03] root INFO: Optimizer : 
[2022/04/04 17:51:03] root INFO:     lr : 
[2022/04/04 17:51:03] root INFO:         learning_rate : 0.0125
[2022/04/04 17:51:03] root INFO:         name : Cosine
[2022/04/04 17:51:03] root INFO:         warmup_epoch : 5
[2022/04/04 17:51:03] root INFO:     momentum : 0.9
[2022/04/04 17:51:03] root INFO:     name : Momentum
[2022/04/04 17:51:03] root INFO:     regularizer : 
[2022/04/04 17:51:03] root INFO:         coeff : 1e-05
[2022/04/04 17:51:03] root INFO:         name : L2
[2022/04/04 17:51:03] root INFO: profiler_options : None
[2022/04/04 17:51:03] root INFO: train with paddle 2.1.2 and device CUDAPlace(0)
[2022/04/04 17:51:03] root WARNING: The Global.class_num will be deprecated. Please use Arch.class_num instead. Arch.class_num has been set to 102.
self._cls_path /home/aistudio/data/oxford-102-flowers/oxford-102-flowers/train.txt
self._cls_path /home/aistudio/data/oxford-102-flowers/oxford-102-flowers/valid.txt
[2022/04/04 17:51:03] root WARNING: 'TopkAcc' metric can not be used when setting 'batch_transform_ops' in config. The 'TopkAcc' metric has been removed.
W0404 17:51:03.718078   846 device_context.cc:404] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W0404 17:51:03.723338   846 device_context.cc:422] device: 0, cuDNN Version: 7.6.
[2022/04/04 17:51:08] root INFO: unique_endpoints {''}
[2022/04/04 17:51:08] root INFO: Found /home/aistudio/.paddleclas/weights/ResNet50_vd_pretrained.pdparams
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py:1301: UserWarning: Skip loading for fc.weight. fc.weight receives a shape [2048, 1000], but the expected shape is [2048, 102].
  warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py:1301: UserWarning: Skip loading for fc.bias. fc.bias receives a shape [1000], but the expected shape is [102].
  warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
[2022/04/04 17:51:09] root WARNING: The training strategy in config files provided by PaddleClas is based on 4 gpus. But the number of gpus is 1 in current training. Please modify the stategy (learning rate, batch size and so on) if use config files in PaddleClas to train.
[2022/04/04 17:51:12] root INFO: [Train][Epoch 1/20][Iter: 0/8]lr: 0.00031, CELoss: 4.64081, loss: 4.64081, batch_cost: 3.14893s, reader_cost: 2.57331, ips: 40.64869 images/sec, eta: 0:08:23
[2022/04/04 17:51:16] root INFO: [Train][Epoch 1/20][Avg]CELoss: 4.64772, loss: 4.64772
[2022/04/04 17:51:16] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:51:19] root INFO: [Train][Epoch 2/20][Iter: 0/8]lr: 0.00281, CELoss: 4.61437, loss: 4.61437, batch_cost: 1.05037s, reader_cost: 0.63829, ips: 121.86160 images/sec, eta: 0:02:39
[2022/04/04 17:51:22] root INFO: [Train][Epoch 2/20][Avg]CELoss: 4.58869, loss: 4.58869
[2022/04/04 17:51:23] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:51:26] root INFO: [Train][Epoch 3/20][Iter: 0/8]lr: 0.00531, CELoss: 4.55260, loss: 4.55260, batch_cost: 1.19191s, reader_cost: 0.78459, ips: 107.39076 images/sec, eta: 0:02:51
[2022/04/04 17:51:30] root INFO: [Train][Epoch 3/20][Avg]CELoss: 4.45869, loss: 4.45869
[2022/04/04 17:51:30] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:51:33] root INFO: [Train][Epoch 4/20][Iter: 0/8]lr: 0.00781, CELoss: 4.32414, loss: 4.32414, batch_cost: 0.95354s, reader_cost: 0.56087, ips: 134.23697 images/sec, eta: 0:02:09
[2022/04/04 17:51:36] root INFO: [Train][Epoch 4/20][Avg]CELoss: 4.16781, loss: 4.16781
[2022/04/04 17:51:37] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:51:39] root INFO: [Train][Epoch 5/20][Iter: 0/8]lr: 0.01031, CELoss: 3.81278, loss: 3.81278, batch_cost: 0.97704s, reader_cost: 0.57012, ips: 131.00846 images/sec, eta: 0:02:05
[2022/04/04 17:51:43] root INFO: [Train][Epoch 5/20][Avg]CELoss: 3.64121, loss: 3.64121
[2022/04/04 17:51:46] root INFO: [Eval][Epoch 5][Iter: 0/8]CELoss: 3.10458, loss: 3.10458, top1: 0.62500, top5: 0.84375, batch_cost: 2.88314s, reader_cost: 2.69916, ips: 44.39597 images/sec
[2022/04/04 17:51:49] root INFO: [Eval][Epoch 5][Avg]CELoss: 3.06763, loss: 3.06763, top1: 0.58627, top5: 0.83039
[2022/04/04 17:51:49] root INFO: Already save model in ./output/ResNet50_vd/best_model
[2022/04/04 17:51:49] root INFO: [Eval][Epoch 5][best metric: 0.586274507933972]
[2022/04/04 17:51:50] root INFO: Already save model in ./output/ResNet50_vd/epoch_5
[2022/04/04 17:51:50] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:51:53] root INFO: [Train][Epoch 6/20][Iter: 0/8]lr: 0.01250, CELoss: 3.23097, loss: 3.23097, batch_cost: 1.12798s, reader_cost: 0.72043, ips: 113.47702 images/sec, eta: 0:02:15
[2022/04/04 17:51:56] root INFO: [Train][Epoch 6/20][Avg]CELoss: 2.82732, loss: 2.82732
[2022/04/04 17:51:57] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:52:00] root INFO: [Train][Epoch 7/20][Iter: 0/8]lr: 0.01233, CELoss: 2.31577, loss: 2.31577, batch_cost: 0.97315s, reader_cost: 0.58090, ips: 131.53180 images/sec, eta: 0:01:48
[2022/04/04 17:52:03] root INFO: [Train][Epoch 7/20][Avg]CELoss: 2.02123, loss: 2.02123
[2022/04/04 17:52:04] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:52:06] root INFO: [Train][Epoch 8/20][Iter: 0/8]lr: 0.01189, CELoss: 1.60169, loss: 1.60169, batch_cost: 0.96181s, reader_cost: 0.56401, ips: 133.08299 images/sec, eta: 0:01:40
[2022/04/04 17:52:10] root INFO: [Train][Epoch 8/20][Avg]CELoss: 1.38111, loss: 1.38111
[2022/04/04 17:52:10] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:52:13] root INFO: [Train][Epoch 9/20][Iter: 0/8]lr: 0.01121, CELoss: 1.10794, loss: 1.10794, batch_cost: 1.00229s, reader_cost: 0.59248, ips: 127.70792 images/sec, eta: 0:01:36
[2022/04/04 17:52:16] root INFO: [Train][Epoch 9/20][Avg]CELoss: 0.93859, loss: 0.93859
[2022/04/04 17:52:17] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:52:20] root INFO: [Train][Epoch 10/20][Iter: 0/8]lr: 0.01031, CELoss: 0.78641, loss: 0.78641, batch_cost: 1.16687s, reader_cost: 0.75940, ips: 109.69474 images/sec, eta: 0:01:42
[2022/04/04 17:52:23] root INFO: [Train][Epoch 10/20][Avg]CELoss: 0.66623, loss: 0.66623
[2022/04/04 17:52:27] root INFO: [Eval][Epoch 10][Iter: 0/8]CELoss: 0.75807, loss: 0.75807, top1: 0.85938, top5: 0.95312, batch_cost: 3.28170s, reader_cost: 3.07438, ips: 39.00414 images/sec
[2022/04/04 17:52:29] root INFO: [Eval][Epoch 10][Avg]CELoss: 0.66149, loss: 0.66149, top1: 0.89118, top5: 0.97647
[2022/04/04 17:52:30] root INFO: Already save model in ./output/ResNet50_vd/best_model
[2022/04/04 17:52:30] root INFO: [Eval][Epoch 10][best metric: 0.8911764738606471]
[2022/04/04 17:52:30] root INFO: Already save model in ./output/ResNet50_vd/epoch_10
[2022/04/04 17:52:31] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:52:33] root INFO: [Train][Epoch 11/20][Iter: 0/8]lr: 0.00923, CELoss: 0.42259, loss: 0.42259, batch_cost: 0.98574s, reader_cost: 0.57787, ips: 129.85227 images/sec, eta: 0:01:18
[2022/04/04 17:52:37] root INFO: [Train][Epoch 11/20][Avg]CELoss: 0.48286, loss: 0.48286
[2022/04/04 17:52:37] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:52:40] root INFO: [Train][Epoch 12/20][Iter: 0/8]lr: 0.00803, CELoss: 0.42716, loss: 0.42716, batch_cost: 1.01280s, reader_cost: 0.57595, ips: 126.38195 images/sec, eta: 0:01:12
[2022/04/04 17:52:44] root INFO: [Train][Epoch 12/20][Avg]CELoss: 0.35100, loss: 0.35100
[2022/04/04 17:52:44] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:52:47] root INFO: [Train][Epoch 13/20][Iter: 0/8]lr: 0.00674, CELoss: 0.27117, loss: 0.27117, batch_cost: 0.95948s, reader_cost: 0.56412, ips: 133.40572 images/sec, eta: 0:01:01
[2022/04/04 17:52:50] root INFO: [Train][Epoch 13/20][Avg]CELoss: 0.31605, loss: 0.31605
[2022/04/04 17:52:51] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:52:54] root INFO: [Train][Epoch 14/20][Iter: 0/8]lr: 0.00543, CELoss: 0.28394, loss: 0.28394, batch_cost: 1.13772s, reader_cost: 0.72433, ips: 112.50533 images/sec, eta: 0:01:03
[2022/04/04 17:52:57] root INFO: [Train][Epoch 14/20][Avg]CELoss: 0.26122, loss: 0.26122
[2022/04/04 17:52:58] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:53:01] root INFO: [Train][Epoch 15/20][Iter: 0/8]lr: 0.00416, CELoss: 0.16402, loss: 0.16402, batch_cost: 1.09271s, reader_cost: 0.68246, ips: 117.14036 images/sec, eta: 0:00:52
[2022/04/04 17:53:04] root INFO: [Train][Epoch 15/20][Avg]CELoss: 0.21305, loss: 0.21305
[2022/04/04 17:53:08] root INFO: [Eval][Epoch 15][Iter: 0/8]CELoss: 0.53277, loss: 0.53277, top1: 0.88281, top5: 0.94531, batch_cost: 3.47772s, reader_cost: 3.29045, ips: 36.80568 images/sec
[2022/04/04 17:53:10] root INFO: [Eval][Epoch 15][Avg]CELoss: 0.42379, loss: 0.42379, top1: 0.91863, top5: 0.98039
[2022/04/04 17:53:11] root INFO: Already save model in ./output/ResNet50_vd/best_model
[2022/04/04 17:53:11] root INFO: [Eval][Epoch 15][best metric: 0.9186274477079803]
[2022/04/04 17:53:11] root INFO: Already save model in ./output/ResNet50_vd/epoch_15
[2022/04/04 17:53:11] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:53:14] root INFO: [Train][Epoch 16/20][Iter: 0/8]lr: 0.00298, CELoss: 0.30324, loss: 0.30324, batch_cost: 0.98888s, reader_cost: 0.57842, ips: 129.43957 images/sec, eta: 0:00:39
[2022/04/04 17:53:18] root INFO: [Train][Epoch 16/20][Avg]CELoss: 0.22449, loss: 0.22449
[2022/04/04 17:53:18] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:53:21] root INFO: [Train][Epoch 17/20][Iter: 0/8]lr: 0.00195, CELoss: 0.19888, loss: 0.19888, batch_cost: 0.95664s, reader_cost: 0.56364, ips: 133.80189 images/sec, eta: 0:00:30
[2022/04/04 17:53:24] root INFO: [Train][Epoch 17/20][Avg]CELoss: 0.24154, loss: 0.24154
[2022/04/04 17:53:25] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:53:28] root INFO: [Train][Epoch 18/20][Iter: 0/8]lr: 0.00110, CELoss: 0.26926, loss: 0.26926, batch_cost: 1.14401s, reader_cost: 0.73260, ips: 111.88671 images/sec, eta: 0:00:27
[2022/04/04 17:53:31] root INFO: [Train][Epoch 18/20][Avg]CELoss: 0.23758, loss: 0.23758
[2022/04/04 17:53:32] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:53:35] root INFO: [Train][Epoch 19/20][Iter: 0/8]lr: 0.00048, CELoss: 0.31946, loss: 0.31946, batch_cost: 0.94231s, reader_cost: 0.54846, ips: 135.83640 images/sec, eta: 0:00:15
[2022/04/04 17:53:38] root INFO: [Train][Epoch 19/20][Avg]CELoss: 0.20997, loss: 0.20997
[2022/04/04 17:53:39] root INFO: Already save model in ./output/ResNet50_vd/latest
[2022/04/04 17:53:42] root INFO: [Train][Epoch 20/20][Iter: 0/8]lr: 0.00010, CELoss: 0.16348, loss: 0.16348, batch_cost: 1.05688s, reader_cost: 0.64513, ips: 121.11092 images/sec, eta: 0:00:08
[2022/04/04 17:53:45] root INFO: [Train][Epoch 20/20][Avg]CELoss: 0.20710, loss: 0.20710
[2022/04/04 17:53:48] root INFO: [Eval][Epoch 20][Iter: 0/8]CELoss: 0.51250, loss: 0.51250, top1: 0.89062, top5: 0.95312, batch_cost: 3.19902s, reader_cost: 2.99184, ips: 40.01228 images/sec
[2022/04/04 17:53:51] root INFO: [Eval][Epoch 20][Avg]CELoss: 0.40260, loss: 0.40260, top1: 0.92059, top5: 0.98235
[2022/04/04 17:53:52] root INFO: Already save model in ./output/ResNet50_vd/best_model
[2022/04/04 17:53:52] root INFO: [Eval][Epoch 20][best metric: 0.9205882329566806]
[2022/04/04 17:53:52] root INFO: Already save model in ./output/ResNet50_vd/epoch_20
[2022/04/04 17:53:52] root INFO: Already save model in ./output/ResNet50_vd/latest

