Preface
Today I'd like to share how to use LabVIEW to call a pb model for object detection. The AI vision toolkit used in this post can be downloaded via the pinned blog post on my homepage, "LabVIEW AI Vision Toolkit (non-NI Vision) Download and Installation Tutorial".
I. Overview of object detection algorithm principles
1. What is object detection
Object recognition is also known as object detection. Object detection answers two questions: where a target is located and what state it is in. This problem is not easy to solve: object shapes vary widely, the region in which an object appears is uncertain, and, on top of that, objects can belong to any of many categories.
The detectors most commonly used are the R-CNN family (R-CNN, SPP-Net, Fast R-CNN, Faster R-CNN) and the YOLO series, such as YOLOv3 and YOLOv4; beyond those there are also SSD and detectors built on backbones such as ResNet.
2. Overview of the YOLO algorithm
YOLO's detection principle is simple and clear. The input image is divided into 7×7 grid cells (7 is a tunable parameter). When the center of an object falls inside a cell, that cell is responsible for predicting the object. Each cell produces 2 candidate boxes (also a tunable parameter) for the object it predicts, along with a confidence score for each box. Finally, the boxes with the highest confidence are selected as the prediction result.
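To make the cell-assignment rule concrete, here is a minimal Python sketch (illustrative only; the grid size S and the example center coordinates are assumed values, not taken from any model):

S = 7                    # the image is divided into S x S grid cells
cx, cy = 0.62, 0.31      # normalized (0..1) center of a hypothetical object
col = int(cx * S)        # grid column containing the center
row = int(cy * S)        # grid row containing the center
print("cell (row=%d, col=%d) is responsible for predicting this object" % (row, col))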
II. Calling darknet object detection models from OpenCV (yolov3/yolov4)
The related source code and models are in the darknet folder.
Train a YOLO model with darknet to produce a weights file, then call the resulting model from OpenCV.
1. Obtaining the darknet model
File meanings:
- **cfg file**: the model description file
- **weights file**: the model weights file
YOLOv3 download links:
https://github.com/pjreddie/darknet/blob/master/cfg/yolov3.cfg
https://pjreddie.com/media/files/yolov3.weights
YOLOv4 download links:
https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.cfg
https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights
2. Object detection in Python with the darknet model
(1) Loading the darknet model with the dnn module
net = cv2.dnn.readNetFromDarknet("yolov3/yolov3.cfg", "yolov3/yolov3.weights")
(2) Getting the LayerNames of the three output layers
Use getUnconnectedOutLayers to get the names of the three layers whose outputs are not connected to any further layer, i.e. the network outputs. For YOLOv3 the three output layers are named: ['yolo_82', 'yolo_94', 'yolo_106']
def getOutputsNames(net):
    # Get the names of all the layers in the network
    layersNames = net.getLayerNames()
    # Get the names of the output layers, i.e. the layers with unconnected outputs
    return [layersNames[i - 1] for i in net.getUnconnectedOutLayers()]
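Note that the indexing above assumes OpenCV 4.5.4 or newer, where getUnconnectedOutLayers() returns a flat array of 1-based layer indices; older 4.x releases return an N×1 array instead. A version-tolerant variant might look like this:

import numpy as np

def getOutputsNames(net):
    layersNames = net.getLayerNames()
    # flatten() copes with both the 1-D array returned by OpenCV >= 4.5.4
    # and the N x 1 array returned by earlier 4.x releases
    return [layersNames[i - 1] for i in np.array(net.getUnconnectedOutLayers()).flatten()]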
**(3) Image preprocessing**
Use blobFromImage to convert the image into a blob, the 4-D input tensor of the network. Typical parameters:
- Size = (416, 416) or (608, 608)
- Scale = 1/255
- Means = [0, 0, 0]
blob = cv2.dnn.blobFromImage(frame, 1/255, (416, 416), [0,0,0], 1, crop=False)
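The resulting blob is a 4-D tensor in NCHW layout, which you can verify before running inference:

print(blob.shape)  # (1, 3, 416, 416): batch size, channels, height, width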
(4) Inference
Use net.forward(multiNames) to obtain the results of multiple layers at once, where getOutputsNames(net) = ['yolo_82', 'yolo_94', 'yolo_106']
net.setInput(blob)
outs = net.forward(getOutputsNames(net))
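As a quick sanity check, for a 416×416 input the three YOLOv3 outputs contain 13×13×3, 26×26×3, and 52×52×3 candidate boxes respectively:

for out in outs:
    print(out.shape)  # (507, 85), (2028, 85), (8112, 85) for a 416x416 input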
**(5) Postprocessing (postprocess)**
The result (outs) contains three matrices (out), each of shape n×85, where n is the number of candidate boxes produced by that output layer. The 85 columns are laid out as follows:
- Column 0: x coordinate of the object center, relative to the image width (0~1)
- Column 1: y coordinate of the object center, relative to the image height (0~1)
- Column 2: width of the object (relative, 0~1)
- Column 3: height of the object (relative, 0~1)
- Column 4: objectness confidence with range [0, 1], compared against a threshold to decide whether to keep the detection
- Columns 5~84: scores for the 80 COCO classes; the class with the largest score is the predicted class. Only detections with confidence > confThreshold are kept.
def postprocess(frame, outs):
    frameHeight = frame.shape[0]
    frameWidth = frame.shape[1]
    classIds = []
    confidences = []
    boxes = []
    for out in outs:
        for detection in out:
            scores = detection[5:]
            classId = np.argmax(scores)
            confidence = scores[classId]
            if confidence > confThreshold:
                center_x = int(detection[0] * frameWidth)
                center_y = int(detection[1] * frameHeight)
                width = int(detection[2] * frameWidth)
                height = int(detection[3] * frameHeight)
                left = int(center_x - width / 2)
                top = int(center_y - height / 2)
                classIds.append(classId)
                confidences.append(float(confidence))
                boxes.append([left, top, width, height])
    print(boxes)
    print(confidences)
**(6) Postprocessing: non-maximum suppression**
Use the NMSBoxes function to filter out boxes that mark the same region more than once.
indices = cv.dnn.NMSBoxes(boxes, confidences, confThreshold, nmsThreshold)
for i in indices:
    box = boxes[i]
    left = box[0]
    top = box[1]
    width = box[2]
    height = box[3]
    drawPred(classIds[i], confidences[i], left, top, left + width, top + height)
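NMSBoxes keeps the highest-confidence box in each cluster of overlapping boxes and discards any remaining box whose intersection-over-union (IoU) with a kept box exceeds nmsThreshold. For reference, a minimal sketch of the IoU computation for two [left, top, width, height] boxes:

def iou(boxA, boxB):
    # boxes are given as [left, top, width, height]
    ax1, ay1, ax2, ay2 = boxA[0], boxA[1], boxA[0] + boxA[2], boxA[1] + boxA[3]
    bx1, by1, bx2, by2 = boxB[0], boxB[1], boxB[0] + boxB[2], boxB[1] + boxB[3]
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = boxA[2] * boxA[3] + boxB[2] * boxB[3] - inter
    return inter / union if union > 0 else 0.0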
(7) Drawing the detected objects
def drawPred(classId, conf, left, top, right, bottom):
    # Draw a bounding box.
    cv.rectangle(frame, (left, top), (right, bottom), (0, 0, 255))
    label = '%.2f' % conf
    # Get the label for the class name and its confidence
    if classes:
        assert(classId < len(classes))
        label = '%s:%s' % (classes[classId], label)
    # Display the label at the top of the bounding box
    labelSize, baseLine = cv.getTextSize(label, cv.FONT_HERSHEY_SIMPLEX, 0.5, 1)
    top = max(top, labelSize[1])
    cv.putText(frame, label, (left, top), cv.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255))
(8) Complete source code and detection result (cv_call_yolo.py)
import cv2
cv = cv2
import numpy as np
import time

net = cv2.dnn.readNetFromDarknet("yolov3/yolov3.cfg", "yolov3/yolov3.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

confThreshold = 0.5  # Confidence threshold
nmsThreshold = 0.4   # Non-maximum suppression threshold
frame = cv2.imread("dog.jpg")
classesFile = "coco.names"
classes = None
with open(classesFile, 'rt') as f:
    classes = f.read().rstrip('\n').split('\n')

def getOutputsNames(net):
    # Get the names of all the layers in the network
    layersNames = net.getLayerNames()
    # Get the names of the output layers, i.e. the layers with unconnected outputs
    return [layersNames[i - 1] for i in net.getUnconnectedOutLayers()]

print(getOutputsNames(net))

# Remove the bounding boxes with low confidence using non-maxima suppression
def postprocess(frame, outs):
    frameHeight = frame.shape[0]
    frameWidth = frame.shape[1]
    # Scan through all the bounding boxes output from the network and keep only the
    # ones with high confidence scores. Assign the box's class label as the class with the highest score.
    classIds = []
    confidences = []
    boxes = []
    for out in outs:
        for detection in out:
            scores = detection[5:]
            classId = np.argmax(scores)
            confidence = scores[classId]
            if confidence > confThreshold:
                center_x = int(detection[0] * frameWidth)
                center_y = int(detection[1] * frameHeight)
                width = int(detection[2] * frameWidth)
                height = int(detection[3] * frameHeight)
                left = int(center_x - width / 2)
                top = int(center_y - height / 2)
                classIds.append(classId)
                confidences.append(float(confidence))
                boxes.append([left, top, width, height])
    # Perform non maximum suppression to eliminate redundant overlapping boxes with
    # lower confidences.
    print(boxes)
    print(confidences)
    indices = cv.dnn.NMSBoxes(boxes, confidences, confThreshold, nmsThreshold)
    for i in indices:
        # i = i[0]  # uncomment for OpenCV versions older than 4.5.4
        box = boxes[i]
        left = box[0]
        top = box[1]
        width = box[2]
        height = box[3]
        drawPred(classIds[i], confidences[i], left, top, left + width, top + height)

# Draw the predicted bounding box
def drawPred(classId, conf, left, top, right, bottom):
    # Draw a bounding box.
    cv.rectangle(frame, (left, top), (right, bottom), (0, 0, 255))
    label = '%.2f' % conf
    # Get the label for the class name and its confidence
    if classes:
        assert(classId < len(classes))
        label = '%s:%s' % (classes[classId], label)
    # Display the label at the top of the bounding box
    labelSize, baseLine = cv.getTextSize(label, cv.FONT_HERSHEY_SIMPLEX, 0.5, 1)
    top = max(top, labelSize[1])
    cv.putText(frame, label, (left, top), cv.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255))

blob = cv2.dnn.blobFromImage(frame, 1/255, (416, 416), [0, 0, 0], 1, crop=False)
t1 = time.time()
net.setInput(blob)
outs = net.forward(getOutputsNames(net))
print(time.time() - t1)
postprocess(frame, outs)
t, _ = net.getPerfProfile()
label = 'Inference time: %.2f ms' % (t * 1000.0 / cv.getTickFrequency())
cv.putText(frame, label, (0, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255))
cv2.imshow("result", frame)
cv2.waitKey(0)  # keep the window open until a key is pressed
?
3. Calling the darknet model from LabVIEW for object detection (yolo_example.vi)
(1) LabVIEW calls yolov3 in the same way and with the same steps as Python; the source is shown below:
Place the image to be recognized in the same directory as yolo_example.vi, and the object detection can then be run.
(2) The detection result is as follows:
4. Real-time camera object detection in LabVIEW (yolo_example_camera.vi)
(1) Using GPU acceleration
Use a sequence structure to measure the neural network's inference time, and compare the inference speed with and without the GPU:
- Normal (CPU) mode: net.setPreferableBackend(0), net.setPreferableTarget(0)
- NVIDIA GPU mode: net.setPreferableBackend(5), net.setPreferableTarget(6)
**Note: with an ordinary C++, Python, or LabVIEW build of OpenCV, selecting GPU mode has no effect and the program still runs on the CPU. You must install CUDA and cuDNN and then rebuild OpenCV from source for the GPU path to work.**
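For reference, the equivalent Python calls are setPreferableBackend/setPreferableTarget, and a rough timing comparison can be done like this (a sketch; it reuses the net, blob, and getOutputsNames defined in the full source above, and assumes a CUDA-enabled OpenCV build):

import time
import cv2

def time_inference(net, blob, runs=10):
    # warm-up pass so one-off GPU initialization is not counted
    net.setInput(blob)
    net.forward(getOutputsNames(net))
    t0 = time.time()
    for _ in range(runs):
        net.setInput(blob)
        net.forward(getOutputsNames(net))
    return (time.time() - t0) / runs

# normal (CPU) mode
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_DEFAULT)  # 0
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)        # 0
print("CPU: %.4f s/frame" % time_inference(net, blob))

# NVIDIA GPU mode (only effective on a CUDA-enabled OpenCV build)
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)     # 5
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)       # 6
print("GPU: %.4f s/frame" % time_inference(net, blob))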
(2) The program source is as follows:
(3) The object detection result is as follows:
Note that in the program above you can click the STOP button to end the current detection run, and tick the "use GPU" checkbox to enable acceleration.
(4) The result with GPU acceleration:
III. Calling TensorFlow object detection models
The related source code and models are in the tf1 folder.
1. Downloading the pretrained model and generating the pbtxt file
(1) Download ssd_mobilenet_v2_coco from:
http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz
(2) The contents of the extracted archive
(3) Generating the pbtxt file from the pb model
Run tf_text_graph_ssd.py to generate the pbtxt file.
Run the following in cmd (the example uses the ssd_mobilenet_v1 directory; adjust the paths to match the model you actually downloaded):
python tf_text_graph_ssd.py --input ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb --config ssd_mobilenet_v1_coco_2017_11_17/ssd_mobilenet_v1_coco.config --output ssd_mobilenet_v1_coco_2017_11_17.pbtxt
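Before wiring the model into LabVIEW, it can help to verify the pb/pbtxt pair in Python with OpenCV's dnn module. A minimal sketch (paths follow the command above; dog.jpg is a placeholder test image):

import cv2

# paths follow the tf_text_graph_ssd.py command above; adjust to your model
net = cv2.dnn.readNetFromTensorflow(
    "ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb",
    "ssd_mobilenet_v1_coco_2017_11_17.pbtxt")

frame = cv2.imread("dog.jpg")  # placeholder test image
rows, cols = frame.shape[:2]
# SSD MobileNet expects a 300x300 input; swapRB=True because the model was
# trained on RGB images while OpenCV loads images as BGR
blob = cv2.dnn.blobFromImage(frame, size=(300, 300), swapRB=True, crop=False)
net.setInput(blob)
out = net.forward()

# out has shape (1, 1, N, 7): [batchId, classId, confidence, x1, y1, x2, y2]
for detection in out[0, 0]:
    score = float(detection[2])
    if score > 0.5:
        left, top = int(detection[3] * cols), int(detection[4] * rows)
        right, bottom = int(detection[5] * cols), int(detection[6] * rows)
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
cv2.imshow("result", frame)
cv2.waitKey(0)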
2. Calling the TensorFlow model from LabVIEW for inference and object detection (callpb.vi)
(1) The program source is as follows:
(2) The result is as follows:
IV. Project source code and model download
Link: https://pan.baidu.com/s/1zwbLQe0VehGhsqNIHyaFRw?pwd=8888
Extraction code: 8888
Summary and extensions
You can also train YOLOv3 on your own dataset; see the blog post on training for the detailed procedure.
Possible applications include mask-wearing detection, pneumonia classification, CT image analysis, and similar tasks.