[NVIDIA Jetson Xavier] DeepStream YOLOX, YOLOv4, YOLOv5 Model Deployment


DeepStream YOLOX model deployment

https://github.com/Megvii-BaseDetection/YOLOX

https://github.com/nanmi/YOLOX-deepstream

DeepStream YOLOv4 model deployment

https://github.com/NVIDIA-AI-IOT/yolov4_deepstream

DeepStream YOLOv5 model deployment

https://github.com/DanaHan/Yolov5-in-Deepstream-5.0

https://blog.csdn.net/zong596568821xp/article/details/109444343

This guide uses YOLOv5 as the working example; the other two follow the same workflow.

Generate the yolov5 engine model

1. Install the YOLOv5 environment on the Jetson platform

https://www.runoob.com/w3cnote/python-pip-install-usage.html

Install pip:

sudo apt-get install python-pip

Upgrade pip:

pip install -U pip

[Error] Python 2.7 does not satisfy the package versions required by requirements.txt.

Change Ubuntu's default Python version system-wide:

https://blog.csdn.net/White_Idiot/article/details/78240298

Via symlink:

First remove the default Python symlink:

sudo rm /usr/bin/python

Then create a new symlink pointing to the desired Python version:

sudo ln -s /usr/bin/python3.5 /usr/bin/python
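Before touching /usr/bin (which affects every tool on the system), the rm-then-ln sequence can be rehearsed safely in a throwaway directory; the file names below are illustrative stand-ins for the real interpreters:

```shell
# Rehearse the symlink swap in a sandbox instead of /usr/bin.
sandbox=$(mktemp -d)
touch "$sandbox/python3.5"                    # stand-in for the real interpreter
ln -s "$sandbox/python3.5" "$sandbox/python"  # 'python' now points at it
readlink "$sandbox/python"                    # inspect where the link points
rm "$sandbox/python"                          # remove the old link...
ln -s "$sandbox/python3.5" "$sandbox/python"  # ...and recreate it at the desired target
rm -rf "$sandbox"
```

On Ubuntu, `update-alternatives` is a more reversible way to manage the default `python`, but the symlink approach above is what this walkthrough used.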

Installing requirements.txt failed, so pip was reinstalled:

sudo apt-get install python-pip

Remove leftover packages as suggested by the installer:

sudo apt autoremove

The retry still failed, so pip was removed for a clean reinstall:

sudo apt remove python-pip

Still failing with:

ImportError: No module named pip

Next attempt:

sudo apt-get install python3-pip

pip3 installed successfully; then run:

pip3 install -r requirements.txt

Set up the YOLOv5 runtime environment

[Error] matplotlib cannot be installed.

Switch to an alternative YOLOv5 setup guide:

https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/YOLOv5-5.0.md

Install the YOLOv5 requirements

Matplotlib (for the Jetson platform)

sudo apt-get install python3-matplotlib

PyTorch (for the Jetson platform)

wget https://nvidia.box.com/shared/static/p57jwntv436lfrd78inwl7iml6p13fzh.whl -O torch-1.8.0-cp36-cp36m-linux_aarch64.whl 
#If the download fails due to network issues, fetch torch-1.8.0-cp36-cp36m-linux_aarch64.whl on another machine and copy it over
sudo apt-get install python3-pip libopenblas-base libopenmpi-dev
pip3 install Cython
pip3 install numpy torch-1.8.0-cp36-cp36m-linux_aarch64.whl

TorchVision (for Jetson platform)

sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev
git clone --branch v0.9.0 https://github.com/pytorch/vision torchvision
cd torchvision
export BUILD_VERSION=0.9.0
python3 setup.py install --user

Convert PyTorch model to wts file

  1. Download the repositories
git clone https://github.com/wang-xinyu/tensorrtx.git
git clone https://github.com/ultralytics/yolov5.git
  2. Download the latest YOLOv5 (YOLOv5s, YOLOv5m, YOLOv5l or YOLOv5x) weights to the yolov5 folder (example for YOLOv5s)
wget https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5s.pt -P yolov5/
  3. Copy the gen_wts.py file (from the tensorrtx/yolov5 folder) to the yolov5 (ultralytics) folder
cp tensorrtx/yolov5/gen_wts.py yolov5/gen_wts.py
  4. Generate the wts file
cd yolov5
python3 gen_wts.py yolov5s.pt

The yolov5s.wts file will be generated in the yolov5 folder.
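The .wts file produced by gen_wts.py is a plain-text dump of the model's state dict: a count line, then one line per tensor holding its name, element count, and float32 values hex-encoded big-endian. A minimal round-trip sketch of that layout, from memory of the tensorrtx format (the tensor names below are made up, not real YOLOv5 keys):

```python
import struct

def write_wts(path, tensors):
    """Write tensors (dict: name -> list of floats) in the tensorrtx .wts text layout."""
    with open(path, "w") as f:
        f.write(f"{len(tensors)}\n")
        for name, values in tensors.items():
            # each value becomes 8 hex chars of a big-endian float32
            hexvals = " ".join(struct.pack(">f", v).hex() for v in values)
            f.write(f"{name} {len(values)} {hexvals}\n")

def read_wts(path):
    """Parse a .wts file back into a dict of float lists."""
    tensors = {}
    with open(path) as f:
        count = int(f.readline())
        for _ in range(count):
            parts = f.readline().split()
            name, n = parts[0], int(parts[1])
            tensors[name] = [struct.unpack(">f", bytes.fromhex(h))[0]
                             for h in parts[2:2 + n]]
    return tensors

# Round-trip a toy "layer" (names are illustrative only).
write_wts("demo.wts", {"conv1.weight": [0.5, -1.25], "conv1.bias": [0.0]})
print(read_wts("demo.wts")["conv1.weight"])  # [0.5, -1.25]
```

Being plain text, a generated .wts can be sanity-checked in any editor before conversion: the first line should equal the number of tensor lines that follow.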

[Error] No module named tqdm

pip install tqdm

[Error] No module named seaborn

pip3 install seaborn

seaborn could not be installed because matplotlib had failed to install. Attempted:

python --version
python -m pip install seaborn

Failed again because of matplotlib.

Next attempt:

https://toptechboy.com/

https://blog.csdn.net/LYiiiiiii/article/details/119052823

sudo apt-get install python3-seaborn

Run again:

python3 gen_wts.py yolov5s.pt

Success!

Convert wts file to TensorRT model

Per the instructions at https://github.com/DanaHan/Yolov5-in-Deepstream-5.0, one more step is needed before building tensorrtx/yolov5:

Important Note:

Replace the yololayer.cu and hardswish.cu files in tensorrtx/yolov5

  1. Build tensorrtx/yolov5
cd tensorrtx/yolov5
mkdir build
cd build
cmake ..
make
  2. Move the generated yolov5s.wts file to the tensorrtx/yolov5 folder (example for YOLOv5s)
cp yolov5/yolov5s.wts tensorrtx/yolov5/build/yolov5s.wts
  3. Convert to a TensorRT model (the yolov5s.engine file will be generated in the tensorrtx/yolov5/build folder)
sudo ./yolov5 -s yolov5s.wts yolov5s.engine s
  4. Create a custom yolo folder and copy the generated file (example for YOLOv5s)
mkdir /opt/nvidia/deepstream/deepstream-5.1/sources/yolo
cp yolov5s.engine /opt/nvidia/deepstream/deepstream-5.1/sources/yolo/yolov5s.engine

Note: by default, the yolov5 script generates the model with batch size = 1 and FP16 mode.

#define USE_FP16  // set USE_INT8 or USE_FP16 or USE_FP32
#define DEVICE 0  // GPU id
#define NMS_THRESH 0.4
#define CONF_THRESH 0.5
#define BATCH_SIZE 1

Edit the yolov5.cpp file before compiling if you want to change these parameters.
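To make the two thresholds above concrete: CONF_THRESH drops low-confidence detections outright, and NMS_THRESH is the IoU above which a lower-scoring box overlapping a kept box is suppressed. A minimal sketch of that standard post-processing step (not the repo's CUDA implementation; boxes are [x1, y1, x2, y2]):

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def postprocess(detections, conf_thresh=0.5, nms_thresh=0.4):
    """detections: list of (box, score). Confidence filter, then greedy NMS."""
    dets = sorted((d for d in detections if d[1] >= conf_thresh),
                  key=lambda d: -d[1])
    kept = []
    for box, score in dets:
        # keep a box only if it does not overlap any kept box too much
        if all(iou(box, k) <= nms_thresh for k, _ in kept):
            kept.append((box, score))
    return kept

dets = [([0, 0, 10, 10], 0.9),    # kept: highest score
        ([1, 1, 11, 11], 0.8),    # suppressed: IoU with the first box > 0.4
        ([20, 20, 30, 30], 0.7),  # kept: no overlap
        ([0, 0, 10, 10], 0.3)]    # dropped: below conf_thresh
print(len(postprocess(dets)))  # 2
```

Raising NMS_THRESH keeps more overlapping boxes; raising CONF_THRESH trades recall for fewer false positives.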

This produces 'yolov5s.engine' and 'libmyplugin.so' for later use.

[!] Switch to the DeepStream configuration from https://github.com/DanaHan/Yolov5-in-Deepstream-5.0 for the subsequent steps, such as building the nvdsinfer_custom_impl_Yolo plugin (you can also continue with https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/YOLOv5-5.0.md).

In Yolov5-in-Deepstream-5.0/Deepstream 5.0, run:

CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo

Then replace every occurrence of deepstream-5.0 with deepstream-5.1 in the deepstream_app_config_yoloV5.txt file.
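The path substitution can be done in one pass with sed rather than by hand; the guard makes the command a no-op if run outside the config directory:

```shell
# Replace every deepstream-5.0 reference with deepstream-5.1 in place.
# Run from the Yolov5-in-Deepstream-5.0/Deepstream 5.0 directory.
cfg=deepstream_app_config_yoloV5.txt
if [ -f "$cfg" ]; then
  sed -i 's/deepstream-5\.0/deepstream-5\.1/g' "$cfg"
fi
```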

Testing model

[Error] labels.txt file is missing.

Copy the labels.txt file from /opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo to Yolov5-in-Deepstream-5.0/Deepstream 5.0.

[Error] Caused by a wrong path in config_infer_primary_yoloV5.txt.

Edit the custom-lib-path in config_infer_primary_yoloV5.txt and remove 'objectDetector_Yolo_V5/'.

Run:

deepstream-app -c deepstream_app_config_yoloV5.txt

[Error] Related to the engine file.

Copy the previously generated yolov5s.engine file to Yolov5-in-Deepstream-5.0/Deepstream 5.0.

Still failing…

[Error] In the config_infer_primary_yoloV5.txt file: NVDSINFER_CONFIG_FAILED

The cause is unclear, so switch to a different deployment method:

[Alternative deployment method] https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/YOLOv5-5.0.md

Compile nvdsinfer_custom_impl_Yolo

  1. Run command
sudo chmod -R 777 /opt/nvidia/deepstream/deepstream-5.1/sources/
  2. Download the external/yolov5-5.0 folder from that repository and move its files to the created yolo folder
  3. Compile the lib
  • x86 platform
cd /opt/nvidia/deepstream/deepstream-5.1/sources/yolo
CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo
  • Jetson platform
cd /opt/nvidia/deepstream/deepstream-5.1/sources/yolo
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
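CUDA_VER must match the CUDA toolkit actually installed (10.2 ships with JetPack 4.x on Jetson; 11.1 is just the x86 example above). One way to derive it is parsing the `nvcc --version` banner; `parse_cuda_ver` is a hypothetical helper, and the sample string mimics nvcc's usual output:

```shell
# Extract "major.minor" from an nvcc version banner.
parse_cuda_ver() {
  echo "$1" | sed -n 's/.*release \([0-9][0-9]*\.[0-9][0-9]*\).*/\1/p'
}

# On a real board you would feed it: "$(nvcc --version | grep release)"
sample="Cuda compilation tools, release 10.2, V10.2.89"
CUDA_VER=$(parse_cuda_ver "$sample")
echo "$CUDA_VER"
# then: CUDA_VER=$CUDA_VER make -C nvdsinfer_custom_impl_Yolo
```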

Testing model

Use the edited deepstream_app_config.txt and config_infer_primary.txt files available in that repository's external/yolov5-5.0 folder.
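For orientation, the entries in config_infer_primary.txt that usually need checking are the engine path, label file, precision mode, parser function, and plugin library. The fragment below is an illustrative sketch from memory of DeepStream's nvinfer config keys, not a copy of the repo's file; in particular the parse-bbox-func-name may differ, so use the values shipped in external/yolov5-5.0:

```ini
[property]
# engine built with tensorrtx earlier
model-engine-file=yolov5s.engine
labelfile-path=labels.txt
# 0=FP32, 1=INT8, 2=FP16 (matches the USE_FP16 define)
network-mode=2
num-detected-classes=80
# custom output parser compiled above (function name may differ per repo version)
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```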

Run command

deepstream-app -c deepstream_app_config.txt

YOLOv5 deployed successfully!

