YOLOv3 Darknet to TVM Inference Output: Explained in One Read

Summary: explained in one read
🥇 Copyright: an original, first-published article by [墨理学AI]; thanks for reading.
🎉 Statement: as one of the bloggers sharing the most hands-on AI content on the web, ❤️ making every moment count ❤️
  • 🍊 Computer vision: the YOLO column, explained in one read
  • 🍊 Recommended from the YOLO series: YOLOv3 Darknet to TVM Python inference
  • 📆 Last updated: January 10, 2022
  • 🍊 Likes 👍, bookmarks ⭐, and comments 📝 are the biggest motivation for keeping these posts high quality and up to date!

📕 Installing TVM from source

For installing TVM from source, see the post:

[Getting Started with TVM] | Building LLVM | Installing TVM from source | Testing "deploy ONNX models with Relay" [explained in one read]

git clone --recursive https://github.com/apache/tvm.git

cd tvm

mkdir build

cp cmake/config.cmake build

cd build

cmake ..

make -j8
  • The cmake .. step ends with output like:
-- Found Threads: TRUE  
-- Configuring done
-- Generating done
-- Build files have been written to: /home/moli/project/project21/modelTrans/tvm/build
  • The make -j8 step ends with output like:
[100%] Built target tvm_objs
[100%] Linking CXX shared library libtvm.so
[100%] Built target tvm
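
After the build finishes, a quick sanity check that the freshly built Python bindings load correctly (a minimal sketch, assuming the repo's tvm/python directory has been added to PYTHONPATH, e.g. export PYTHONPATH=/path/to/tvm/python:$PYTHONPATH):

# Confirm that the locally built TVM package is importable.
import tvm
from tvm import relay

print(tvm.__version__)   # version string of the local build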

📕 The Python code used in this run

The code converts YOLO-V2 and YOLO-V3 DarkNet models to TVM and produces inference output; it covers:
  1. Model download (the script downloads the files automatically; if your connection is slow, you can also download them manually)
  2. Model conversion (DarkNet model to TVM)
  3. Model inference (a TVM inference example)

"""
Compile YOLO-V2 and YOLO-V3 in DarkNet Models
=============================================
**Author**: `Siju Samuel <https://siju-samuel.github.io/>`_

This article is an introductory tutorial to deploy darknet models with TVM.
All the required models and libraries will be downloaded from the internet by the script.
This script runs the YOLO-V2 and YOLO-V3 models and draws the bounding boxes.
Darknet parsing has a dependency on the CFFI and CV2 libraries.
Please install CFFI and CV2 before executing this script.

.. code-block:: bash

  pip install cffi
  pip install opencv-python
"""

"""
**Second release Author**: 墨理学AI 
=============================================
CSDN blog homepage
<https://positive.blog.csdn.net/>

Computer vision discussion groups
<https://gitee.com/bravePatch/datasets/blob/master/jindachang.md>
"""

# numpy and matplotlib
import numpy as np
import matplotlib.pyplot as plt
import sys

# tvm, relay
import tvm
from tvm import te
from tvm import relay
from ctypes import *
from tvm.contrib.download import download_testdata
from tvm.relay.testing.darknet import __darknetffi__
import tvm.relay.testing.yolo_detection
import tvm.relay.testing.darknet

######################################################################
# Choose the model
# -----------------------
# Models are: 'yolov2', 'yolov3' or 'yolov3-tiny'

# Model name
MODEL_NAME = "yolov3"

######################################################################
# Download required files
# -----------------------
# Download cfg and weights file if first time.
CFG_NAME = MODEL_NAME + ".cfg"
WEIGHTS_NAME = MODEL_NAME + ".weights"
REPO_URL = "https://github.com/dmlc/web-data/blob/main/darknet/"
CFG_URL = REPO_URL + "cfg/" + CFG_NAME + "?raw=true"
WEIGHTS_URL = "https://pjreddie.com/media/files/" + WEIGHTS_NAME

cfg_path = download_testdata(CFG_URL, CFG_NAME, module="darknet")
weights_path = download_testdata(WEIGHTS_URL, WEIGHTS_NAME, module="darknet")

# Download and Load darknet library
if sys.platform in ["linux", "linux2"]:
    DARKNET_LIB = "libdarknet2.0.so"
    DARKNET_URL = REPO_URL + "lib/" + DARKNET_LIB + "?raw=true"
elif sys.platform == "darwin":
    DARKNET_LIB = "libdarknet_mac2.0.so"
    DARKNET_URL = REPO_URL + "lib_osx/" + DARKNET_LIB + "?raw=true"
else:
    err = "Darknet lib is not supported on {} platform".format(sys.platform)
    raise NotImplementedError(err)

lib_path = download_testdata(DARKNET_URL, DARKNET_LIB, module="darknet")

DARKNET_LIB = __darknetffi__.dlopen(lib_path)
net = DARKNET_LIB.load_network(cfg_path.encode("utf-8"), weights_path.encode("utf-8"), 0)
dtype = "float32"
batch_size = 1

data = np.empty([batch_size, net.c, net.h, net.w], dtype)
shape_dict = {"data": data.shape}
print("Converting darknet to relay functions...")
mod, params = relay.frontend.from_darknet(net, dtype=dtype, shape=data.shape)

######################################################################
# Import the graph to Relay
# -------------------------
# compile the model
target = tvm.target.Target("llvm", host="llvm")
dev = tvm.cpu(0)
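# Note: this tutorial targets the CPU; to run on an NVIDIA GPU instead (assuming
# USE_CUDA was enabled in config.cmake), one could use target = tvm.target.cuda()
# together with dev = tvm.cuda(0).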
data = np.empty([batch_size, net.c, net.h, net.w], dtype)
shape = {"data": data.shape}
print("Compiling the model...")
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

[neth, netw] = shape["data"][2:]  # Current image shape is 608x608
######################################################################
# Load a test image
# -----------------
test_image = "dog.jpg"
print("Loading the test image...")
img_url = REPO_URL + "data/" + test_image + "?raw=true"
img_path = download_testdata(img_url, test_image, "data")

data = tvm.relay.testing.darknet.load_image(img_path, netw, neth)
######################################################################
# Execute on TVM Runtime
# ----------------------
# The process is no different from other examples.
from tvm.contrib import graph_executor

m = graph_executor.GraphModule(lib["default"](dev))

# set inputs
m.set_input("data", tvm.nd.array(data.astype(dtype)))
# execute
print("Running the test image...")

# detection
# thresholds
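# thresh: minimum detection confidence for a box to be reported;
# nms_thresh: IoU threshold used later by do_nms_sort for non-maximum suppression.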
thresh = 0.5
nms_thresh = 0.45

m.run()
# get outputs
tvm_out = []
if MODEL_NAME == "yolov2":
    layer_out = {}
    layer_out["type"] = "Region"
    # Get the region layer attributes (n, out_c, out_h, out_w, classes, coords, background)
    layer_attr = m.get_output(2).numpy()
    layer_out["biases"] = m.get_output(1).numpy()
    out_shape = (layer_attr[0], layer_attr[1] // layer_attr[0], layer_attr[2], layer_attr[3])
    layer_out["output"] = m.get_output(0).numpy().reshape(out_shape)
    layer_out["classes"] = layer_attr[4]
    layer_out["coords"] = layer_attr[5]
    layer_out["background"] = layer_attr[6]
    tvm_out.append(layer_out)

elif MODEL_NAME == "yolov3":
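    # from_darknet exposes each of the three YOLO-V3 detection heads as four consecutive
    # outputs: i*4 -> raw feature map, i*4+1 -> anchor mask, i*4+2 -> anchor biases,
    # i*4+3 -> layer attributes (n, out_c, out_h, out_w, classes, total).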
    for i in range(3):
        layer_out = {}
        layer_out["type"] = "Yolo"
        # Get the yolo layer attributes (n, out_c, out_h, out_w, classes, total)
        layer_attr = m.get_output(i * 4 + 3).numpy()
        layer_out["biases"] = m.get_output(i * 4 + 2).numpy()
        layer_out["mask"] = m.get_output(i * 4 + 1).numpy()
        out_shape = (layer_attr[0], layer_attr[1] // layer_attr[0], layer_attr[2], layer_attr[3])
        layer_out["output"] = m.get_output(i * 4).numpy().reshape(out_shape)
        layer_out["classes"] = layer_attr[4]
        tvm_out.append(layer_out)

elif MODEL_NAME == "yolov3-tiny":
    for i in range(2):
        layer_out = {}
        layer_out["type"] = "Yolo"
        # Get the yolo layer attributes (n, out_c, out_h, out_w, classes, total)
        layer_attr = m.get_output(i * 4 + 3).numpy()
        layer_out["biases"] = m.get_output(i * 4 + 2).numpy()
        layer_out["mask"] = m.get_output(i * 4 + 1).numpy()
        out_shape = (layer_attr[0], layer_attr[1] // layer_attr[0], layer_attr[2], layer_attr[3])
        layer_out["output"] = m.get_output(i * 4).numpy().reshape(out_shape)
        layer_out["classes"] = layer_attr[4]
        tvm_out.append(layer_out)
        thresh = 0.560

# do the detection and bring up the bounding boxes
img = tvm.relay.testing.darknet.load_image_color(img_path)
_, im_h, im_w = img.shape
dets = tvm.relay.testing.yolo_detection.fill_network_boxes(
    (netw, neth), (im_w, im_h), thresh, 1, tvm_out
)
last_layer = net.layers[net.n - 1]
tvm.relay.testing.yolo_detection.do_nms_sort(dets, last_layer.classes, nms_thresh)

coco_name = "coco.names"
coco_url = REPO_URL + "data/" + coco_name + "?raw=true"
font_name = "arial.ttf"
font_url = REPO_URL + "data/" + font_name + "?raw=true"
coco_path = download_testdata(coco_url, coco_name, module="data")
font_path = download_testdata(font_url, font_name, module="data")

with open(coco_path) as f:
    content = f.readlines()

names = [x.strip() for x in content]

tvm.relay.testing.yolo_detection.show_detections(img, dets, thresh, names, last_layer.classes)
tvm.relay.testing.yolo_detection.draw_detections(
    font_path, img, dets, thresh, names, last_layer.classes
)

plt.imshow(img.transpose(1, 2, 0))
plt.savefig("yolov3_infer.png")  # save before plt.show(); saving afterwards can write a blank figure
plt.show()

"""
# The output of running the script:

python yolov3_darknet_infer.py

File /home/moli/.tvm_test_data/darknet/yolov3.cfg exists, skip.
File /home/moli/.tvm_test_data/darknet/yolov3.weights exists, skip.
File /home/moli/.tvm_test_data/darknet/libdarknet2.0.so exists, skip.
Converting darknet to relay functions...
Compiling the model...
One or more operators have not been tuned. Please tune your model for better performance. Use DEBUG logging level to see more details.
Loading the test image...
File /home/moli/.tvm_test_data/data/dog.jpg exists, skip.
Running the test image...
File /home/moli/.tvm_test_data/data/coco.names exists, skip.
File /home/moli/.tvm_test_data/data/arial.ttf exists, skip.
class:['dog 0.994'] left:127 right:227 top:316 bottom:533
class:['truck 0.9266'] left:471 right:83 top:689 bottom:169
class:['bicycle 0.9984'] left:111 right:113 top:577 bottom:447

"""

📕 YOLOv3 Darknet to TVM inference output

Dependencies

Setting up the environment is fairly routine; the main thing is getting TVM installed successfully (a quick import check of the extra dependencies is sketched below).

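A quick way to confirm the extra Python dependencies are in place (a minimal sketch):

# Verify that the libraries used by the darknet frontend and the plotting code import cleanly.
import cffi            # required by the darknet CFFI parser
import cv2             # opencv-python, used for image handling
import matplotlib
import numpy as np
import tvm

print("cffi", cffi.__version__, "| opencv", cv2.__version__, "| tvm", tvm.__version__)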

The script automatically downloads the required model files and the dog.jpg test image to the following path (see the download_testdata sketch below):
  • /home/moli/.tvm_test_data

[screenshot: downloaded files under /home/moli/.tvm_test_data]
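
The caching behaviour comes from tvm.contrib.download.download_testdata, which stores files under ~/.tvm_test_data/<module>/<filename> and reuses them on later runs (hence the "File ... exists, skip." lines in the output above). A minimal sketch:

from tvm.contrib.download import download_testdata

# A repeated call returns the cached copy instead of downloading again.
cfg_path = download_testdata(
    "https://github.com/dmlc/web-data/blob/main/darknet/cfg/yolov3.cfg?raw=true",
    "yolov3.cfg",
    module="darknet",
)
print(cfg_path)  # e.g. /home/<user>/.tvm_test_data/darknet/yolov3.cfg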

The inference result looks like this:

[image: detection result on dog.jpg with bounding boxes]


📗 Source code repository for this post

[image: link to the source code repository]


📙 The blogger's eight hands-on AI columns, honestly recommended


📙 Wishing everyone a bright 2022, with stars within reach

🎉 As one of the bloggers sharing the most hands-on AI content on the web, ❤️ making every moment count ❤️
❤️ If this article helped you, a like or a comment encourages every bit of careful writing
❤️ More important than seeking warmth is becoming a light yourself ❤️

