# I. Overview of Object Detection Algorithms

## 2. Overview of the YOLO Algorithm

The idea behind YOLO is simple and clear. The input image is divided into a 7×7 grid (7 is a tunable parameter). If an object's center point falls inside a grid cell, that cell is responsible for predicting the object. Each cell produces 2 candidate boxes (also a tunable parameter) for the object it predicts, along with a confidence score for each box. Finally, the boxes with the highest confidence are kept as the prediction results.
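The cell-assignment rule can be sketched in a few lines of Python (the 7×7 grid size and the sample center coordinates below are illustrative, not from the original post):

```python
def responsible_cell(cx, cy, S=7):
    """Map a normalized object center (cx, cy) in [0, 1) to the (row, col)
    of the grid cell responsible for predicting that object."""
    col = int(cx * S)  # cell column index
    row = int(cy * S)  # cell row index
    return row, col

# An object centered at (0.5, 0.5) falls in the middle cell of a 7x7 grid.
print(responsible_cell(0.5, 0.5))  # -> (3, 3)
```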

# II. Calling darknet object detection models with OpenCV (yolov3/yolov4)

## 1. Obtaining the darknet model

• cfg file: the model description (network architecture) file
• weights file: the model weights file

## 2. Object detection by calling the darknet model from Python

(1) Loading the darknet model with the dnn module

```python
net = cv2.dnn.readNetFromDarknet("yolov3/yolov3.cfg", "yolov3/yolov3.weights")
```


(2) Getting the names of the three output layers

```python
def getOutputsNames(net):
    # Get the names of all the layers in the network
    layersNames = net.getLayerNames()
    # Get the names of the output layers, i.e. the layers with unconnected outputs
    return [layersNames[i - 1] for i in net.getUnconnectedOutLayers()]
```


(3) Image preprocessing

Preprocessing parameters: Size = (416, 416) or (608, 608), Scale = 1/255, Mean = [0, 0, 0]

```python
blob = cv2.dnn.blobFromImage(frame, 1/255, (416, 416), [0, 0, 0], 1, crop=False)
```
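For intuition, `blobFromImage` computes (input − mean) × scale, optionally swaps BGR→RGB, and reorders the image from HWC to NCHW layout. A numpy-only sketch of the equivalent transform (resizing omitted; the function name `to_blob` and the dummy image are mine, not part of the OpenCV API):

```python
import numpy as np

def to_blob(image, scale=1 / 255, mean=(0, 0, 0), swap_rb=True):
    """Rough numpy equivalent of cv2.dnn.blobFromImage (without the resize):
    subtract the mean, scale, optionally swap BGR->RGB, reorder HWC -> NCHW."""
    img = image.astype(np.float32)
    if swap_rb:
        img = img[:, :, ::-1]  # OpenCV images are BGR; darknet expects RGB
    img = (img - np.asarray(mean, dtype=np.float32)) * scale
    return img.transpose(2, 0, 1)[np.newaxis, ...]  # shape (1, C, H, W)

frame = np.full((416, 416, 3), 255, dtype=np.uint8)  # dummy all-white image
blob = to_blob(frame)
print(blob.shape)  # -> (1, 3, 416, 416)
print(blob.max())  # -> 1.0 (255 scaled by 1/255)
```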


(4) Inference

```python
net.setInput(blob)
outs = net.forward(getOutputsNames(net))
```


(5) Post-processing (postprocess): filtering the detections

```python
def postprocess(frame, outs):
    frameHeight = frame.shape[0]
    frameWidth = frame.shape[1]
    classIds = []
    confidences = []
    boxes = []
    for out in outs:
        for detection in out:
            scores = detection[5:]
            classId = np.argmax(scores)
            confidence = scores[classId]
            if confidence > confThreshold:
                center_x = int(detection[0] * frameWidth)
                center_y = int(detection[1] * frameHeight)
                width = int(detection[2] * frameWidth)
                height = int(detection[3] * frameHeight)
                left = int(center_x - width / 2)
                top = int(center_y - height / 2)
                classIds.append(classId)
                confidences.append(float(confidence))
                boxes.append([left, top, width, height])
    print(boxes)
    print(confidences)
```


(6) Post-processing (postprocess): non-maximum suppression

```python
    indices = cv.dnn.NMSBoxes(boxes, confidences, confThreshold, nmsThreshold)
    for i in indices:
        box = boxes[i]
        left = box[0]
        top = box[1]
        width = box[2]
        height = box[3]
        drawPred(classIds[i], confidences[i], left, top, left + width, top + height)
```
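`cv.dnn.NMSBoxes` implements greedy non-maximum suppression: sort boxes by confidence, keep the most confident one, and drop any remaining box whose overlap with a kept box exceeds `nmsThreshold`. A minimal pure-Python sketch of the idea (the helper names `iou`/`nms` and the sample boxes are illustrative):

```python
def iou(a, b):
    """Intersection-over-union of two [left, top, width, height] boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, confidences, conf_thr, nms_thr):
    """Greedy NMS: visit boxes by descending confidence, keep a box only if
    it does not overlap an already-kept box beyond nms_thr."""
    order = sorted(range(len(boxes)), key=lambda i: confidences[i], reverse=True)
    keep = []
    for i in order:
        if confidences[i] < conf_thr:
            continue
        if all(iou(boxes[i], boxes[j]) <= nms_thr for j in keep):
            keep.append(i)
    return keep

# Two heavily overlapping boxes: only the more confident one survives.
boxes = [[10, 10, 100, 100], [12, 12, 100, 100], [300, 300, 50, 50]]
confs = [0.9, 0.8, 0.7]
print(nms(boxes, confs, 0.5, 0.4))  # -> [0, 2]
```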


(7) Drawing the detected objects

```python
def drawPred(classId, conf, left, top, right, bottom):
    # Draw a bounding box.
    cv.rectangle(frame, (left, top), (right, bottom), (0, 0, 255))

    label = '%.2f' % conf

    # Get the label for the class name and its confidence
    if classes:
        assert classId < len(classes)
        label = '%s:%s' % (classes[classId], label)

    # Display the label at the top of the bounding box
    labelSize, baseLine = cv.getTextSize(label, cv.FONT_HERSHEY_SIMPLEX, 0.5, 1)
    top = max(top, labelSize[1])
    cv.putText(frame, label, (left, top), cv.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255))
```


(8) Complete source code and detection results (cv_call_yolo.py)

```python
import cv2
cv = cv2
import numpy as np
import time

net = cv2.dnn.readNetFromDarknet("yolov3/yolov3.cfg", "yolov3/yolov3.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

confThreshold = 0.5  # Confidence threshold
nmsThreshold = 0.4   # Non-maximum suppression threshold
classesFile = "coco.names"
classes = None
with open(classesFile, 'rt') as f:
    classes = f.read().rstrip('\n').split('\n')

def getOutputsNames(net):
    # Get the names of all the layers in the network
    layersNames = net.getLayerNames()
    # Get the names of the output layers, i.e. the layers with unconnected outputs
    return [layersNames[i - 1] for i in net.getUnconnectedOutLayers()]

print(getOutputsNames(net))

# Remove the bounding boxes with low confidence using non-maxima suppression
def postprocess(frame, outs):
    frameHeight = frame.shape[0]
    frameWidth = frame.shape[1]
    # Scan through all the bounding boxes output from the network and keep only
    # the ones with high confidence scores. Assign the box's class label as the
    # class with the highest score.
    classIds = []
    confidences = []
    boxes = []
    for out in outs:
        for detection in out:
            scores = detection[5:]
            classId = np.argmax(scores)
            confidence = scores[classId]
            if confidence > confThreshold:
                center_x = int(detection[0] * frameWidth)
                center_y = int(detection[1] * frameHeight)
                width = int(detection[2] * frameWidth)
                height = int(detection[3] * frameHeight)
                left = int(center_x - width / 2)
                top = int(center_y - height / 2)
                classIds.append(classId)
                confidences.append(float(confidence))
                boxes.append([left, top, width, height])

    # Perform non maximum suppression to eliminate redundant overlapping boxes
    # with lower confidences.
    print(boxes)
    print(confidences)
    indices = cv.dnn.NMSBoxes(boxes, confidences, confThreshold, nmsThreshold)
    for i in indices:
        box = boxes[i]
        left = box[0]
        top = box[1]
        width = box[2]
        height = box[3]
        drawPred(classIds[i], confidences[i], left, top, left + width, top + height)

# Draw the predicted bounding box
def drawPred(classId, conf, left, top, right, bottom):
    # Draw a bounding box.
    cv.rectangle(frame, (left, top), (right, bottom), (0, 0, 255))
    label = '%.2f' % conf
    # Get the label for the class name and its confidence
    if classes:
        assert classId < len(classes)
        label = '%s:%s' % (classes[classId], label)
    # Display the label at the top of the bounding box
    labelSize, baseLine = cv.getTextSize(label, cv.FONT_HERSHEY_SIMPLEX, 0.5, 1)
    top = max(top, labelSize[1])
    cv.putText(frame, label, (left, top), cv.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255))

frame = cv2.imread("test.jpg")  # example path: replace with your own test image
blob = cv2.dnn.blobFromImage(frame, 1/255, (416, 416), [0, 0, 0], 1, crop=False)
t1 = time.time()
net.setInput(blob)
outs = net.forward(getOutputsNames(net))
print(time.time() - t1)
postprocess(frame, outs)
t, _ = net.getPerfProfile()
label = 'Inference time: %.2f ms' % (t * 1000.0 / cv.getTickFrequency())
cv.putText(frame, label, (0, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255))
cv2.imshow("result", frame)
cv2.waitKey(0)
```


## 3. Object detection by calling the darknet model from LabVIEW (yolo_example.vi)

(1) LabVIEW calls yolov3 in much the same way and with the same steps as Python; the source is shown below:

(2) The detection results are as follows:

## 4. Real-time camera object detection in LabVIEW (yolo_example_camera.vi)

(1) Using GPU acceleration

Nvidia GPU mode: net.setPreferableBackend(5), net.setPreferableTarget(6)
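For reference, those two numeric values correspond to named constants in OpenCV's Python API (values from OpenCV's `dnn.hpp` enums, exposed in builds ≥ 4.2; the fallback branch below just lets the snippet run where OpenCV is not installed):

```python
try:
    import cv2
    DNN_BACKEND_CUDA = int(cv2.dnn.DNN_BACKEND_CUDA)
    DNN_TARGET_CUDA = int(cv2.dnn.DNN_TARGET_CUDA)
except ImportError:
    # Values from OpenCV's dnn.hpp Backend / Target enums
    DNN_BACKEND_CUDA, DNN_TARGET_CUDA = 5, 6

print(DNN_BACKEND_CUDA, DNN_TARGET_CUDA)  # -> 5 6
```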

(2) The program source is as follows:

(3) The object detection results are as follows:

(4) Results with GPU acceleration:

# III. Calling TensorFlow object detection models

## 1. Downloading the pretrained model and generating the pbtxt file

(1) Download ssd_mobilenet_v2_coco from the following address:

(2) Contents of the extracted archive

(3) Generating the pbtxt file from the pb model

```shell
python tf_text_graph_ssd.py --input ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb --config ssd_mobilenet_v1_coco_2017_11_17/ssd_mobilenet_v1_coco.config --output ssd_mobilenet_v1_coco_2017_11_17.pbtxt
```

(1) The program source is as follows:

(2) The run results are as follows:

# Summary and Extensions
