[Step-by-Step Tutorial] [Replacing the YOLOv8 Backbone] [1] Using EfficientViT to Replace the YOLOv8 Backbone (3) https://developer.aliyun.com/article/1536653
Modifying the _predict_once function
Replace the _predict_once function of the BaseModel class in ultralytics/nn/tasks.py with the following code:
def _predict_once(self, x, profile=False, visualize=False):
    """
    Perform a forward pass through the network.

    Args:
        x (torch.Tensor): The input tensor to the model.
        profile (bool): Print the computation time of each layer if True, defaults to False.
        visualize (bool): Save the feature maps of the model if True, defaults to False.

    Returns:
        (torch.Tensor): The last output of the model.
    """
    y, dt = [], []  # outputs
    for m in self.model:
        if m.f != -1:  # if not from previous layer
            x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f]  # from earlier layers
        if profile:
            self._profile_one_layer(m, x, dt)
        if hasattr(m, 'backbone'):
            x = m(x)
            for _ in range(5 - len(x)):
                x.insert(0, None)
            for i_idx, i in enumerate(x):
                if i_idx in self.save:
                    y.append(i)
                else:
                    y.append(None)
            # for i in x:
            #     if i is not None:
            #         print(i.size())
            x = x[-1]
        else:
            x = m(x)  # run
            y.append(x if m.i in self.save else None)  # save output
        if visualize:
            feature_visualization(x, m.type, m.i, save_dir=visualize)
    return x
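The key change is the hasattr(m, 'backbone') branch: a backbone module that replaces several original layers is expected to return a list of multi-scale feature maps, which is padded with None up to length 5 so that the head can still index P3/P4/P5 by position. The minimal sketch below only illustrates that contract; the class name, channel widths, and layout are made up for illustration and are not the actual EfficientViT wrapper added earlier in this series.

import torch
import torch.nn as nn


class ToyMultiScaleBackbone(nn.Module):
    """Toy stand-in for the EfficientViT wrapper: any module exposing a
    `backbone` attribute is routed through the new branch of _predict_once
    and must return a *list* of feature maps (here P1/2 ... P5/32)."""

    def __init__(self, width=16):
        super().__init__()
        # Five stride-2 stages, so each output halves the spatial resolution.
        self.backbone = nn.ModuleList(
            nn.Conv2d(3 if i == 0 else width, width, 3, stride=2, padding=1) for i in range(5)
        )

    def forward(self, x):
        outs = []
        for stage in self.backbone:
            x = stage(x)
            outs.append(x)
        return outs  # a list of tensors, not a single tensor


feats = ToyMultiScaleBackbone()(torch.randn(1, 3, 640, 640))
print([f.shape[-1] for f in feats])  # [320, 160, 80, 40, 20]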
Step 3: Create the configuration file yolov8-efficientViT.yaml
In the ultralytics/cfg/models/v8 directory, create a new configuration file named yolov8-efficientViT.yaml with the following content:
Note: Any of EfficientViT_M0, EfficientViT_M1, EfficientViT_M2, EfficientViT_M3, EfficientViT_M4, EfficientViT_M5 can be used; they differ only in parameter count.
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024] # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024] # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768] # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512] # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512] # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# 0-P1/2
# 1-P2/4
# 2-P3/8
# 3-P4/16
# 4-P5/32

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, EfficientViT_M0, []] # 4
  - [-1, 1, SPPF, [1024, 5]] # 5

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 6
  - [[-1, 3], 1, Concat, [1]] # 7 cat backbone P4
  - [-1, 3, C2f, [512]] # 8

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 9
  - [[-1, 2], 1, Concat, [1]] # 10 cat backbone P3
  - [-1, 3, C2f, [256]] # 11 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]] # 12
  - [[-1, 8], 1, Concat, [1]] # 13 cat head P4
  - [-1, 3, C2f, [512]] # 14 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]] # 15
  - [[-1, 5], 1, Concat, [1]] # 16 cat head P5
  - [-1, 3, C2f, [1024]] # 17 (P5/32-large)

  - [[11, 14, 17], 1, Detect, [nc]] # Detect(P3, P4, P5)
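Before training, it is worth checking that the new YAML parses and the model assembles. A quick sanity check, assuming the EfficientViT module has already been registered in tasks.py as described earlier in this series:

from ultralytics import YOLO

# Build the model from the new config only (no weights) and print a summary.
model = YOLO('ultralytics/cfg/models/v8/yolov8-efficientViT.yaml')
model.info()  # layer count, parameter count and GFLOPs of the assembled network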
Comparison of yolov8.yaml and yolov8-efficientViT.yaml
Backbone section: yolov8.yaml vs. yolov8-efficientViT.yaml (see the excerpt below):
Head section: yolov8.yaml vs. yolov8-efficientViT.yaml (see the excerpt below). Note that the layer indices change, so the index numbers referenced in the head (the Concat sources and the Detect inputs) must be updated accordingly.
Step 4: Load the configuration file and train the model
Run the training script train.py, whose content is as follows:
#coding:utf-8
# Train with the replaced backbone
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO('ultralytics/cfg/models/v8/yolov8-efficientViT.yaml')
    model.load('yolov8n.pt')  # loading pretrain weights
    model.train(data='datasets/TomatoData/data.yaml', epochs=250, batch=4)
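Note that model.load('yolov8n.pt') can only transfer the pretrained weights whose names and shapes still match (essentially the head), since the backbone has been replaced. After training finishes, a quick way to evaluate the best checkpoint is sketched below; the run directory name is an example, as Ultralytics numbers run folders automatically.

from ultralytics import YOLO

# Evaluate the best checkpoint on the validation split of the same dataset.
model = YOLO('runs/detect/train/weights/best.pt')
metrics = model.val(data='datasets/TomatoData/data.yaml')
print(metrics.box.map50, metrics.box.map)  # mAP@0.5 and mAP@0.5:0.95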
Step 5: Model inference
After training is complete, we use the trained model to run detection on an image:
#coding:utf-8
from ultralytics import YOLO
import cv2

# Path to the model to load
# path = 'models/best2.pt'
path = 'runs/detect/train9/weights/best.pt'
# Path of the image to detect
img_path = "TestFiles/Riped tomato_31.jpeg"

# Load the trained model
# conf  0.25  object confidence threshold for detection
# iou   0.7   intersection over union (IoU) threshold for NMS
model = YOLO(path, task='detect')

# Detect the image
results = model(img_path)
res = results[0].plot()
# res = cv2.resize(res, dsize=None, fx=2, fy=2, interpolation=cv2.INTER_LINEAR)
cv2.imshow("YOLOv8 Detection", res)
cv2.waitKey(0)
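To process a whole folder instead of a single image, a small variation is sketched below (the input and output paths are placeholders); results[0].plot() returns a BGR ndarray, so cv2.imwrite can save it directly.

#coding:utf-8
from pathlib import Path
import cv2
from ultralytics import YOLO

model = YOLO('runs/detect/train9/weights/best.pt', task='detect')
out_dir = Path('runs/detect/predict_custom')
out_dir.mkdir(parents=True, exist_ok=True)

for img_file in sorted(Path('TestFiles').glob('*.jpeg')):
    res = model(str(img_file))[0].plot()  # annotated image as a BGR ndarray
    cv2.imwrite(str(out_dir / img_file.name), res)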