YOLOv7 backbone structure diagram (training phase)

Summary: notes

After analyzing the yaml file and converting the training-phase YOLOv7 model to ONNX for visualization, I drew the backbone and SPPCSPC structure diagrams in Visio; the other parts will be drawn in detail later when I have time. [Note that this is the training-phase structure, not the inference-phase one: the two differ because inference uses the re-parameterized structure — see my other article on RepVGG-style re-parameterization.]


The backbone structure is drawn from yolov7-main\cfg\training\yolov7.yaml in the project. The comment on each line below records the output-channel list accumulated after each iteration of the parsing for-loop; the lines marked with * are the ones picked up by the next Concat.

backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [32, 3, 1]],  # 0 conv1(3,32,3,s=1)
   [-1, 1, Conv, [64, 3, 2]],  # 1-P1/2 conv2(32,64,3,s=2)
   [-1, 1, Conv, [64, 3, 1]],  # conv3(64,64,3,1)
   [-1, 1, Conv, [128, 3, 2]],  # 3-P2/4  conv4(64,128,k=3,s=2)
   [-1, 1, Conv, [64, 1, 1]], # conv5(128,64,1,1) out_channels so far = [32,64,64,128,64]*
   [-2, 1, Conv, [64, 1, 1]], # conv6(128,64,1,1)* this conv is the 1x1 identity branch
   [-1, 1, Conv, [64, 3, 1]], # conv7(64,64,3,1)
   [-1, 1, Conv, [64, 3, 1]], # conv8(64,64,3,1)*
   [-1, 1, Conv, [64, 3, 1]], # conv9(64,64,3,1)
   [-1, 1, Conv, [64, 3, 1]], # conv10(64,64,3,1) output_channels so far = [32,64,64,128,64,64,64,64,64,64]*
   [[-1, -3, -5, -6], 1, Concat, [1]], # concatenates the four 64-channel maps -> 256 channels; output_channels = [32,64,64,128,64,64,64,64,64,64,256]
   [-1, 1, Conv, [256, 1, 1]],  # 11 conv11(256,256,1,1) output_channels = [32,64,64,128,64,64,64,64,64,64,256,256]
   [-1, 1, MP, []], # maxpooling(k=2,s=2), channel count stays 256
   [-1, 1, Conv, [128, 1, 1]], # conv12(256,128,1,1) output_channels = [32,64,64,128,64,64,64,64,64,64,256,256,256,128]*
   [-3, 1, Conv, [128, 1, 1]], # conv13(256,128,1,1) [32,64,64,128,64,64,64,64,64,64,256,256,256,128,128]
   [-1, 1, Conv, [128, 3, 2]], # conv14(128,128,3,2) [32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128]*
   [[-1, -3], 1, Concat, [1]],  # 16-P3/8  output 256 [32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256]
   [-1, 1, Conv, [128, 1, 1]], # conv15(256,128,1,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128]*
   [-2, 1, Conv, [128, 1, 1]], # conv16(256,128,1,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128]*
   [-1, 1, Conv, [128, 3, 1]], # conv17(128,128,3,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128]
   [-1, 1, Conv, [128, 3, 1]], # conv18(128,128,3,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128]*
   [-1, 1, Conv, [128, 3, 1]], # conv19(128,128,3,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128]
   [-1, 1, Conv, [128, 3, 1]], # conv20(128,128,3,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128]*
   [[-1, -3, -5, -6], 1, Concat, [1]],# 512 [32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512]
   [-1, 1, Conv, [512, 1, 1]],  # 24 conv21(512,512,1,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512]
   [-1, 1, MP, []],# [32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512]
   [-1, 1, Conv, [256, 1, 1]],# conv22(512,256,1,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256]*
   [-3, 1, Conv, [256, 1, 1]], # conv23(512,256,1,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256]
   [-1, 1, Conv, [256, 3, 2]], # conv24(256,256,3,2)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256]*
   [[-1, -3], 1, Concat, [1]],  # 29-P4/16 512[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512]
   [-1, 1, Conv, [256, 1, 1]],# conv25(512,256,1,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256]*
   [-2, 1, Conv, [256, 1, 1]],# conv26(512,256,1,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256]*
   [-1, 1, Conv, [256, 3, 1]],# conv27(256,256,3,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256]
   [-1, 1, Conv, [256, 3, 1]],# conv28(256,256,3,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256,256]*
   [-1, 1, Conv, [256, 3, 1]],# conv29(256,256,3,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256,256,256]
   [-1, 1, Conv, [256, 3, 1]],# conv30(256,256,3,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256,256,256,256]*
   [[-1, -3, -5, -6], 1, Concat, [1]],# 1024[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256,256,256,256,1024]
   [-1, 1, Conv, [1024, 1, 1]],  # 37 conv31(1024,1024,1,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256,256,256,256,1024,1024]
   [-1, 1, MP, []],# 1024 maxpooling(2,2)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256,256,256,256,1024,1024,1024]
   [-1, 1, Conv, [512, 1, 1]],# conv32(1024,512,1,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256,256,256,256,1024,1024,1024,512]*
   [-3, 1, Conv, [512, 1, 1]],# conv33(1024,512,1,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256,256,256,256,1024,1024,1024,512,512]
   [-1, 1, Conv, [512, 3, 2]],# conv34(512,512,3,2)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256,256,256,256,1024,1024,1024,512,512,512]*
   [[-1, -3], 1, Concat, [1]],  # 42-P5/32 1024 [32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256,256,256,256,1024,1024,1024,512,512,512,1024]
   [-1, 1, Conv, [256, 1, 1]],# conv35(1024,256,1,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256,256,256,256,1024,1024,1024,512,512,512,1024,256]*
   [-2, 1, Conv, [256, 1, 1]],# conv36(1024,256,1,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256,256,256,256,1024,1024,1024,512,512,512,1024,256,256]*
   [-1, 1, Conv, [256, 3, 1]],# conv37(256,256,3,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256,256,256,256,1024,1024,1024,512,512,512,1024,256,256,256]
   [-1, 1, Conv, [256, 3, 1]],# conv38(256,256,3,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256,256,256,256,1024,1024,1024,512,512,512,1024,256,256,256,256]*
   [-1, 1, Conv, [256, 3, 1]],# conv39(256,256,3,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256,256,256,256,1024,1024,1024,512,512,512,1024,256,256,256,256,256]
   [-1, 1, Conv, [256, 3, 1]],# conv40(256,256,3,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256,256,256,256,1024,1024,1024,512,512,512,1024,256,256,256,256,256,256]*
   [[-1, -3, -5, -6], 1, Concat, [1]],# 1024[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256,256,256,256,1024,1024,1024,512,512,512,1024,256,256,256,256,256,256,1024]
   [-1, 1, Conv, [1024, 1, 1]],  # 50conv41(1024,1024,1,1)[32,64,64,128,64,64,64,64,64,64,256,256,256,128,128,128,256,128,128,128,128,128,128,512,512,512,256,256,256,512,256,256,256,256,256,256,1024,1024,1024,512,512,512,1024,256,256,256,256,256,256,1024,1024]
  ]
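
The per-layer channel lists in the comments above can be reproduced with a short script. This is only a rough sketch, not the repo's parse_model: it handles just the three module types that appear in the backbone (Conv, MP, Concat), ignores width_multiple (which is 1.0 for yolov7 anyway), and assumes the repo's yaml path.

import yaml

# Rough sketch: walk the backbone list and track each layer's output channels,
# reproducing the lists written in the comments above.
cfg = yaml.safe_load(open('../cfg/training/yolov7.yaml'))

ch = []  # running list of output channels, one entry per layer
for i, (f, n, m, args) in enumerate(cfg['backbone']):
    if m == 'Conv':
        c2 = args[0]                      # Conv: out_channels is the first arg
    elif m == 'MP':
        c2 = ch[f]                        # MaxPool keeps the channel count
    elif m == 'Concat':
        c2 = sum(ch[x] for x in f)        # concat along the channel dimension
    ch.append(c2)
    print(i, m, c2, ch)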

The network structure diagram is shown below (I drew it myself, bit by bit, from the yaml file and the ONNX graph). In each stage there is a 1x1 "identity" branch, similar in spirit to ResNet, and four feature maps are concatenated along the channel dimension, so the channel count becomes 4x after the concat. Each concat is followed by a k=1, s=1 conv, and then the feature map splits into two branches: one goes through MaxPooling followed by a 1x1 conv, the other through a 1x1 conv followed by a 3x3 stride-2 conv, and the two branches are concatenated again (a PyTorch sketch of both building blocks is given after the figure).

[Figure 30.png: YOLOv7 training-phase backbone structure]
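
For readers who prefer code to diagrams, here is a minimal PyTorch sketch of the two recurring building blocks of each stage. This is an illustration under my own naming only: the official repo builds everything layer by layer from the yaml via parse_model rather than as dedicated classes, and ConvBnSiLU / ELANBlock / MPTransition are names I made up for this sketch.

import torch
import torch.nn as nn

class ConvBnSiLU(nn.Module):
    # the basic "Conv" unit in yolov7: Conv2d + BatchNorm + SiLU
    def __init__(self, c1, c2, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.SiLU()
    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ELANBlock(nn.Module):
    # stage body: two 1x1 branches, a chain of four 3x3 convs,
    # concat of four taps, then a 1x1 fuse conv (layers 4-11 of the yaml)
    def __init__(self, c_in, c_mid, c_out):
        super().__init__()
        self.cv1 = ConvBnSiLU(c_in, c_mid, 1, 1)   # the 1x1 "identity" branch
        self.cv2 = ConvBnSiLU(c_in, c_mid, 1, 1)   # entry of the 3x3 chain
        self.cv3 = ConvBnSiLU(c_mid, c_mid, 3, 1)
        self.cv4 = ConvBnSiLU(c_mid, c_mid, 3, 1)
        self.cv5 = ConvBnSiLU(c_mid, c_mid, 3, 1)
        self.cv6 = ConvBnSiLU(c_mid, c_mid, 3, 1)
        self.fuse = ConvBnSiLU(4 * c_mid, c_out, 1, 1)
    def forward(self, x):
        y1 = self.cv1(x)
        y2 = self.cv2(x)
        y3 = self.cv4(self.cv3(y2))
        y4 = self.cv6(self.cv5(y3))
        # Concat [-1, -3, -5, -6] in the yaml == [y4, y3, y2, y1]
        return self.fuse(torch.cat([y4, y3, y2, y1], dim=1))

class MPTransition(nn.Module):
    # downsampling transition: MaxPool branch + strided-conv branch, concatenated
    # (layers 12-16 of the yaml, e.g. 256 -> 128 + 128 = 256 at half resolution)
    def __init__(self, c_in, c_half):
        super().__init__()
        self.mp = nn.MaxPool2d(kernel_size=2, stride=2)
        self.cv1 = ConvBnSiLU(c_in, c_half, 1, 1)     # after the MaxPool
        self.cv2 = ConvBnSiLU(c_in, c_half, 1, 1)     # before the strided conv
        self.cv3 = ConvBnSiLU(c_half, c_half, 3, 2)   # 3x3, stride 2
    def forward(self, x):
        y1 = self.cv1(self.mp(x))
        y2 = self.cv3(self.cv2(x))
        return torch.cat([y2, y1], dim=1)

For the first stage, ELANBlock(128, 64, 256) corresponds to layers 4-11 and MPTransition(256, 128) to layers 12-16 of the yaml above.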

The SPPCSPC structure is shown below. You can see it is quite similar to SPP: besides the un-pooled (1x1) branch it has 5x5, 9x9 and 13x13 max-pooling branches, and as in YOLOv3/v4 there are convolutions both before and after the pooling. The difference is the extra 1x1 shortcut branch on the side (a code sketch follows the figure).


[Figure 31.png: SPPCSPC structure]
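
Here is a compact PyTorch sketch of the module, written to match the diagram; it follows my reading of the SPPCSPC class in models/common.py of the repo, with a simplified conv helper standing in for the repo's Conv.

import torch
import torch.nn as nn

def conv(c1, c2, k=1, s=1):
    # simplified Conv + BN + SiLU unit
    return nn.Sequential(nn.Conv2d(c1, c2, k, s, k // 2, bias=False),
                         nn.BatchNorm2d(c2), nn.SiLU())

class SPPCSPC(nn.Module):
    def __init__(self, c1, c2, e=0.5, k=(5, 9, 13)):
        super().__init__()
        c_ = int(2 * c2 * e)                 # hidden channels
        self.cv1 = conv(c1, c_, 1, 1)        # entry of the SPP branch
        self.cv2 = conv(c1, c_, 1, 1)        # the 1x1 shortcut side branch
        self.cv3 = conv(c_, c_, 3, 1)
        self.cv4 = conv(c_, c_, 1, 1)
        # three max-pool branches (5x5, 9x9, 13x13, stride 1, "same" padding)
        self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
        self.cv5 = conv(4 * c_, c_, 1, 1)    # fuse the identity + 3 pooled maps
        self.cv6 = conv(c_, c_, 3, 1)
        self.cv7 = conv(2 * c_, c2, 1, 1)    # fuse the SPP branch with the shortcut branch
    def forward(self, x):
        x1 = self.cv4(self.cv3(self.cv1(x)))
        y1 = self.cv6(self.cv5(torch.cat([x1] + [m(x1) for m in self.m], dim=1)))
        y2 = self.cv2(x)
        return self.cv7(torch.cat([y1, y2], dim=1))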

I also attach the ONNX graph of SPPCSPC here.

In the ONNX graph you can see a Sigmoid output being multiplied with the Conv output. That is simply the SiLU activation: SiLU is not supported when converting to ONNX, so it has to be rewritten, and you can just read that Sigmoid-Mul pair as SiLU.


[Figure 32.png: SPPCSPC ONNX graph]
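
For reference, the identity being exported is simply SiLU(x) = x * sigmoid(x), which is easy to verify (a tiny sanity-check snippet, nothing yolov7-specific):

import torch
import torch.nn.functional as F

x = torch.randn(4)
# SiLU(x) = x * sigmoid(x); the ONNX graph just spells out the right-hand side
print(torch.allclose(F.silu(x), x * torch.sigmoid(x)))  # True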

The ONNX export code is as follows:

import torch
from models.yolo import Model  # models/yolo.py in the yolov7-main repo

yaml_file = '../cfg/training/yolov7.yaml'
model = Model(yaml_file)                       # build the training-phase model from the yaml
x = torch.ones(1, 3, 640, 640)                 # dummy input: batch=1, 3x640x640
torch.onnx.export(model, x, 'yolov7.onnx', verbose=False, opset_version=11)