This article explains how to implement multi-object tracking with OpenCV in Python (and C++), walking through a practical example step by step.
計(jì)算機(jī)視覺和機(jī)器學(xué)習(xí)的大多數(shù)初學(xué)者都學(xué)習(xí)對(duì)象檢測(cè)。如果您是初學(xué)者,您可能會(huì)想到為什么我們需要對(duì)象跟蹤。我們不能只檢測(cè)每一幀中的物體嗎?
Let's look at a few reasons why tracking is useful:
First, when multiple objects (say, people) are detected in a video frame, tracking helps establish the identity of each object across frames.
Second, in some cases object detection may fail while tracking still succeeds, because the tracker takes into account the object's location and appearance in the previous frame.
Third, some tracking algorithms are very fast because they perform a local search instead of a global search. We can therefore get very high performance by running object detection only every n-th frame and tracking the objects in the intermediate frames; a sketch of this pattern is shown below.
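To make the idea concrete, here is a minimal Python sketch of the detect-every-Nth-frame pattern. The helper detect_objects and the interval of 30 frames are assumptions for illustration, not part of the original code; the MultiTracker and tracker creation calls follow the opencv-contrib API used in the rest of this article (in newer OpenCV releases they live under cv2.legacy).

import cv2

DETECT_EVERY_N = 30  # assumed interval; tune it to your detector/tracker speed

def detect_objects(frame):
    """Hypothetical detector: returns a list of (x, y, w, h) boxes."""
    return []

cap = cv2.VideoCapture("video/run.mp4")
multiTracker = None
frame_idx = 0

while cap.isOpened():
    grabbed, frame = cap.read()
    if not grabbed:
        break

    if frame_idx % DETECT_EVERY_N == 0:
        # expensive global search: run the detector and rebuild the trackers
        boxes = detect_objects(frame)
        multiTracker = cv2.MultiTracker_create()
        for box in boxes:
            multiTracker.add(cv2.TrackerCSRT_create(), frame, box)
    elif multiTracker is not None:
        # cheap local search: only update the trackers on intermediate frames
        tracked_ok, boxes = multiTracker.update(frame)

    frame_idx += 1

cap.release()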
So why not simply keep tracking an object indefinitely after the first detection? A tracking algorithm can sometimes lose the object it is following, for example when the object's motion is too large for the tracker to keep up. For this reason, detection is usually run again after the objects have been tracked for a while.
In this tutorial we focus only on the tracking part. The objects we want to track are specified by drawing bounding boxes around them.
OpenCV's MultiTracker class provides an implementation of multi-object tracking. However, it is only a naive implementation, because it simply runs the individual trackers and does not perform any optimization across the tracked objects.
多對(duì)象跟蹤器只是單個(gè)對(duì)象跟蹤器的集合。我們首先定義一個(gè)將跟蹤器類型作為輸入并創(chuàng)建跟蹤器對(duì)象的函數(shù)。
OpenCV offers eight different tracker types: BOOSTING, MIL, KCF, TLD, MEDIANFLOW, GOTURN, MOSSE, and CSRT. The GOTURN tracker is not used in this article. The general flow is: given the name of the tracker type, return a single-object tracker, then build the MultiTracker from these.
C++ code:
vector<string> trackerTypes = {"BOOSTING", "MIL", "KCF", "TLD", "MEDIANFLOW", "GOTURN", "MOSSE", "CSRT"};

/**
 * @brief Create a Tracker By Name object -- initialize a tracker of the given type
 *
 * @param trackerType
 * @return Ptr<Tracker>
 */
Ptr<Tracker> createTrackerByName(string trackerType)
{
    Ptr<Tracker> tracker;
    if (trackerType == trackerTypes[0])
        tracker = TrackerBoosting::create();
    else if (trackerType == trackerTypes[1])
        tracker = TrackerMIL::create();
    else if (trackerType == trackerTypes[2])
        tracker = TrackerKCF::create();
    else if (trackerType == trackerTypes[3])
        tracker = TrackerTLD::create();
    else if (trackerType == trackerTypes[4])
        tracker = TrackerMedianFlow::create();
    else if (trackerType == trackerTypes[5])
        tracker = TrackerGOTURN::create();
    else if (trackerType == trackerTypes[6])
        tracker = TrackerMOSSE::create();
    else if (trackerType == trackerTypes[7])
        tracker = TrackerCSRT::create();
    else
    {
        cout << "Incorrect tracker name" << endl;
        cout << "Available trackers are: " << endl;
        for (vector<string>::iterator it = trackerTypes.begin(); it != trackerTypes.end(); ++it)
        {
            std::cout << " " << *it << endl;
        }
    }
    return tracker;
}
Python code:
from __future__ import print_function
import sys
import cv2
from random import randint

trackerTypes = ['BOOSTING', 'MIL', 'KCF', 'TLD', 'MEDIANFLOW', 'GOTURN', 'MOSSE', 'CSRT']

def createTrackerByName(trackerType):
    # Create a tracker based on tracker name
    if trackerType == trackerTypes[0]:
        tracker = cv2.TrackerBoosting_create()
    elif trackerType == trackerTypes[1]:
        tracker = cv2.TrackerMIL_create()
    elif trackerType == trackerTypes[2]:
        tracker = cv2.TrackerKCF_create()
    elif trackerType == trackerTypes[3]:
        tracker = cv2.TrackerTLD_create()
    elif trackerType == trackerTypes[4]:
        tracker = cv2.TrackerMedianFlow_create()
    elif trackerType == trackerTypes[5]:
        tracker = cv2.TrackerGOTURN_create()
    elif trackerType == trackerTypes[6]:
        tracker = cv2.TrackerMOSSE_create()
    elif trackerType == trackerTypes[7]:
        tracker = cv2.TrackerCSRT_create()
    else:
        tracker = None
        print('Incorrect tracker name')
        print('Available trackers are:')
        for t in trackerTypes:
            print(t)
    return tracker
多對(duì)象跟蹤器需要兩個(gè)輸入即一個(gè)視頻幀和我們想要跟蹤的所有對(duì)象的位置(邊界框)。
給定此信息,跟蹤器在所有后續(xù)幀中跟蹤這些指定對(duì)象的位置。在下面的代碼中,我們首先使用VideoCapture
類加載視頻并讀取第一幀。稍后將使用它來初始化MultiTracker。
C++ code:
// Set tracker type. Change this to try different trackers.
string trackerType = trackerTypes[6];

// set default values for tracking algorithm and video
string videoPath = "video/run.mp4";

// Initialize MultiTracker with tracking algo -- bounding boxes
vector<Rect> bboxes;

// create a video capture object to read videos
cv::VideoCapture cap(videoPath);
Mat frame;

// quit if unable to read video file
if (!cap.isOpened())
{
    cout << "Error opening video file " << videoPath << endl;
    return -1;
}

// read first frame
cap >> frame;
Python code:
# Set video to load
videoPath = "video/run.mp4"

# Create a video capture object to read videos
cap = cv2.VideoCapture(videoPath)

# Read first frame
success, frame = cap.read()

# quit if unable to read the video file
if not success:
    print('Failed to read video')
    sys.exit(1)
Next, we need to locate the objects we want to track in the first frame. OpenCV provides a function called selectROIs that pops up a GUI for selecting bounding boxes (also called regions of interest, ROIs). The C++ version of selectROIs lets you grab multiple bounding boxes at once, but the Python bindings only expose selectROI, which returns a single box, so in the Python version we use a loop to collect multiple bounding boxes. For each object we also pick a random color to draw its box. The selection workflow is: draw a box on the image, press ENTER to confirm it and move on to the next box, and press ESC to finish selecting and start the program.
C++ code:
// Get bounding boxes for first frame
// selectROI's default behaviour is to draw box starting from the center
// when fromCenter is set to false, you can draw box starting from top left corner
bool showCrosshair = true;
bool fromCenter = false;
cout << "\n==========================================================\n";
cout << "OpenCV says press c to cancel objects selection process" << endl;
cout << "It doesn't work. Press Escape to exit selection process" << endl;
cout << "\n==========================================================\n";
cv::selectROIs("MultiTracker", frame, bboxes, showCrosshair, fromCenter);

// quit if there are no objects to track
if (bboxes.size() < 1)
    return 0;

vector<Scalar> colors;
getRandomColors(colors, bboxes.size());
// Fill the vector with random colors
void getRandomColors(vector<Scalar>& colors, int numColors)
{
    RNG rng(0);
    for (int i = 0; i < numColors; i++)
        colors.push_back(Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)));
}
Python code:
## Select boxes
bboxes = []
colors = []

# OpenCV's selectROI function doesn't work for selecting multiple objects in Python
# So we will call this function in a loop till we are done selecting all objects
while True:
    # draw bounding boxes over objects
    # selectROI's default behaviour is to draw box starting from the center
    # when fromCenter is set to false, you can draw box starting from top left corner
    bbox = cv2.selectROI('MultiTracker', frame)
    bboxes.append(bbox)
    colors.append((randint(0, 255), randint(0, 255), randint(0, 255)))
    print("Press q to quit selecting boxes and start tracking")
    print("Press any other key to select next object")
    k = cv2.waitKey(0) & 0xFF
    if k == 113:  # q is pressed
        break

print('Selected bounding boxes {}'.format(bboxes))
So far we have read the first frame and obtained the bounding boxes around the objects. That is all the information we need to initialize the multi-object tracker. We first create a MultiTracker object and add one single-object tracker for each object we want to track. In this example we use the CSRT single-object tracker, but you can try other tracker types by changing the trackerType variable below to one of the eight trackers mentioned at the beginning of this article. The CSRT tracker is not the fastest, but in our experience it produces the best results in many of the cases we tried.
You can also mix different tracker types within the same MultiTracker, but of course that rarely makes sense. In practice only a few of the trackers are worth using: CSRT gives the highest accuracy, KCF offers the best overall trade-off between speed and accuracy, and MOSSE is the fastest.
The MultiTracker class is simply a wrapper around these single-object trackers. As we saw in the previous post, a single-object tracker is initialized with the first frame and a bounding box indicating the location of the object we want to track. The MultiTracker passes this information on to the single-object trackers it wraps internally.
C++ code:
// Create multitracker
Ptr<MultiTracker> multiTracker = cv::MultiTracker::create();

// initialize multitracker
for (int i = 0; i < bboxes.size(); i++)
{
    multiTracker->add(createTrackerByName(trackerType), frame, Rect2d(bboxes[i]));
}
Python code:
# Specify the tracker type
trackerType = "CSRT"

# Create MultiTracker object
multiTracker = cv2.MultiTracker_create()

# Initialize MultiTracker
for bbox in bboxes:
    multiTracker.add(createTrackerByName(trackerType), frame, bbox)
Finally, our MultiTracker is ready and we can track multiple objects in new frames. We use the update method of the MultiTracker class to locate the objects in each new frame, and the bounding box of each tracked object is drawn in its own color.
The update function returns either true or false: it returns false when tracking fails. The C++ code below checks this return value, while the Python code does not. Note, however, that even after update has returned false, subsequent calls will keep updating and returning bounding boxes, so once it returns false it is advisable to stop tracking (or run detection again); a sketch of adding the same check to the Python loop is included after the Python code below.
C++ code:
while (cap.isOpened())
{
    // get frame from the video, frame by frame
    cap >> frame;

    // stop the program if reached end of video
    if (frame.empty())
    {
        break;
    }

    // update the tracking result with new frame
    bool ok = multiTracker->update(frame);
    if (ok == true)
    {
        cout << "Tracking success" << endl;
    }
    else
    {
        cout << "Tracking failure" << endl;
    }

    // draw tracked objects
    for (unsigned i = 0; i < multiTracker->getObjects().size(); i++)
    {
        rectangle(frame, multiTracker->getObjects()[i], colors[i], 2, 1);
    }

    // show frame
    imshow("MultiTracker", frame);

    // quit on ESC button
    if (waitKey(1) == 27)
    {
        break;
    }
}
Python code:
# Process video and track objects
while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break

    # get updated location of objects in subsequent frames
    success, boxes = multiTracker.update(frame)

    # draw tracked objects
    for i, newbox in enumerate(boxes):
        p1 = (int(newbox[0]), int(newbox[1]))
        p2 = (int(newbox[0] + newbox[2]), int(newbox[1] + newbox[3]))
        cv2.rectangle(frame, p1, p2, colors[i], 2, 1)

    # show frame
    cv2.imshow('MultiTracker', frame)

    # quit on ESC button
    if cv2.waitKey(1) & 0xFF == 27:  # Esc pressed
        break
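Unlike the C++ loop, the Python loop above ignores the first value returned by multiTracker.update. A minimal sketch of adding the same check looks like this; breaking out of the loop on failure is just one option, re-running detection and re-initializing the trackers is another.

# inside the while-loop, replacing the plain update call above
ok, boxes = multiTracker.update(frame)
if not ok:
    # at least one tracker has lost its object; the boxes may no longer be reliable
    print("Tracking failure")
    break  # or: re-detect the objects and rebuild the MultiTracker here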
就結(jié)果而言,多目標(biāo)跟蹤就是生成多個(gè)單目標(biāo)跟蹤器,每個(gè)單目標(biāo)跟蹤器跟蹤一個(gè)對(duì)象。如果你想和目標(biāo)檢測(cè)結(jié)合,其中的對(duì)象框如果要自己設(shè)定,push
一個(gè)Rect對(duì)象就行了。
// set the detection boxes for the objects yourself
//x,y,width,height
//bboxes.push_back(Rect(388, 155, 30, 40));
//bboxes.push_back(Rect(492, 205, 50, 80));
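The Python version can be fed boxes the same way: skip the selectROI loop and fill bboxes with (x, y, width, height) tuples, for example boxes produced by a detector. The sketch below reuses createTrackerByName and randint from the code above, and the coordinates are the same illustrative values as in the commented-out C++ lines.

# set the detection boxes for the objects yourself (illustrative values)
bboxes = [(388, 155, 30, 40), (492, 205, 50, 80)]   # (x, y, width, height)
colors = [(randint(0, 255), randint(0, 255), randint(0, 255)) for _ in bboxes]

multiTracker = cv2.MultiTracker_create()
for bbox in bboxes:
    multiTracker.add(createTrackerByName("CSRT"), frame, bbox)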
Overall the accuracy is about the same as for a single-object tracker, while the runtime is roughly 5 to 7 times higher, varying with the algorithm.
The complete code is as follows:
C++:
// Opencv_MultiTracker.cpp : This file contains the "main" function. Program execution begins and ends there.
//
#include "pch.h"
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/tracking.hpp>

using namespace cv;
using namespace std;

vector<string> trackerTypes = {"BOOSTING", "MIL", "KCF", "TLD", "MEDIANFLOW", "GOTURN", "MOSSE", "CSRT"};

/**
 * @brief Create a Tracker By Name object -- initialize a tracker of the given type
 *
 * @param trackerType
 * @return Ptr<Tracker>
 */
Ptr<Tracker> createTrackerByName(string trackerType)
{
    Ptr<Tracker> tracker;
    if (trackerType == trackerTypes[0])
        tracker = TrackerBoosting::create();
    else if (trackerType == trackerTypes[1])
        tracker = TrackerMIL::create();
    else if (trackerType == trackerTypes[2])
        tracker = TrackerKCF::create();
    else if (trackerType == trackerTypes[3])
        tracker = TrackerTLD::create();
    else if (trackerType == trackerTypes[4])
        tracker = TrackerMedianFlow::create();
    else if (trackerType == trackerTypes[5])
        tracker = TrackerGOTURN::create();
    else if (trackerType == trackerTypes[6])
        tracker = TrackerMOSSE::create();
    else if (trackerType == trackerTypes[7])
        tracker = TrackerCSRT::create();
    else
    {
        cout << "Incorrect tracker name" << endl;
        cout << "Available trackers are: " << endl;
        for (vector<string>::iterator it = trackerTypes.begin(); it != trackerTypes.end(); ++it)
        {
            std::cout << " " << *it << endl;
        }
    }
    return tracker;
}

/**
 * @brief Get the Random Colors object -- pick a random color for each box
 *
 * @param colors
 * @param numColors
 */
void getRandomColors(vector<Scalar> &colors, int numColors)
{
    RNG rng(0);
    for (int i = 0; i < numColors; i++)
    {
        colors.push_back(Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)));
    }
}

int main(int argc, char *argv[])
{
    // Set tracker type. Change this to try different trackers.
    string trackerType = trackerTypes[7];

    // set default values for tracking algorithm and video
    string videoPath = "video/run.mp4";

    // Initialize MultiTracker with tracking algo -- bounding boxes
    vector<Rect> bboxes;

    // create a video capture object to read videos
    cv::VideoCapture cap(videoPath);
    Mat frame;

    // quit if unable to read video file
    if (!cap.isOpened())
    {
        cout << "Error opening video file " << videoPath << endl;
        return -1;
    }

    // read first frame
    cap >> frame;

    // draw bounding boxes over objects in the first frame
    /*
    Draw a box on the image, then press ENTER to confirm it and draw the next one.
    Press ESC to finish selecting and start the program.
    */
    cout << "\n==========================================================\n";
    cout << "OpenCV says press c to cancel objects selection process" << endl;
    cout << "It doesn't work. Press Esc to exit selection process" << endl;
    cout << "\n==========================================================\n";
    cv::selectROIs("MultiTracker", frame, bboxes, false);

    // set the detection boxes for the objects yourself
    // x, y, width, height
    // bboxes.push_back(Rect(388, 155, 30, 40));
    // bboxes.push_back(Rect(492, 205, 50, 80));

    // quit if there are no objects to track
    if (bboxes.size() < 1)
    {
        return 0;
    }

    vector<Scalar> colors;
    // pick a color for each box
    getRandomColors(colors, bboxes.size());

    // Create multitracker
    Ptr<MultiTracker> multiTracker = cv::MultiTracker::create();

    // initialize multitracker
    for (int i = 0; i < bboxes.size(); i++)
    {
        multiTracker->add(createTrackerByName(trackerType), frame, Rect2d(bboxes[i]));
    }

    // process video and track objects
    cout << "\n==========================================================\n";
    cout << "Started tracking, press ESC to quit." << endl;

    while (cap.isOpened())
    {
        // get frame from the video, frame by frame
        cap >> frame;

        // stop the program if reached end of video
        if (frame.empty())
        {
            break;
        }

        // update the tracking result with new frame
        bool ok = multiTracker->update(frame);
        if (ok == true)
        {
            cout << "Tracking success" << endl;
        }
        else
        {
            cout << "Tracking failure" << endl;
        }

        // draw tracked objects
        for (unsigned i = 0; i < multiTracker->getObjects().size(); i++)
        {
            rectangle(frame, multiTracker->getObjects()[i], colors[i], 2, 1);
        }

        // show frame
        imshow("MultiTracker", frame);

        // quit on ESC button
        if (waitKey(1) == 27)
        {
            break;
        }
    }

    waitKey(0);
    return 0;
}
Python:
from __future__ import print_function
import sys
import cv2
from random import randint

trackerTypes = ['BOOSTING', 'MIL', 'KCF', 'TLD', 'MEDIANFLOW', 'GOTURN', 'MOSSE', 'CSRT']


def createTrackerByName(trackerType):
    # Create a tracker based on tracker name
    if trackerType == trackerTypes[0]:
        tracker = cv2.TrackerBoosting_create()
    elif trackerType == trackerTypes[1]:
        tracker = cv2.TrackerMIL_create()
    elif trackerType == trackerTypes[2]:
        tracker = cv2.TrackerKCF_create()
    elif trackerType == trackerTypes[3]:
        tracker = cv2.TrackerTLD_create()
    elif trackerType == trackerTypes[4]:
        tracker = cv2.TrackerMedianFlow_create()
    elif trackerType == trackerTypes[5]:
        tracker = cv2.TrackerGOTURN_create()
    elif trackerType == trackerTypes[6]:
        tracker = cv2.TrackerMOSSE_create()
    elif trackerType == trackerTypes[7]:
        tracker = cv2.TrackerCSRT_create()
    else:
        tracker = None
        print('Incorrect tracker name')
        print('Available trackers are:')
        for t in trackerTypes:
            print(t)
    return tracker


if __name__ == '__main__':

    print("Default tracking algorithm is CSRT \n"
          "Available tracking algorithms are:\n")
    for t in trackerTypes:
        print(t)

    trackerType = "CSRT"

    # Set video to load
    videoPath = "video/run.mp4"

    # Create a video capture object to read videos
    cap = cv2.VideoCapture(videoPath)

    # Read first frame
    success, frame = cap.read()
    # quit if unable to read the video file
    if not success:
        print('Failed to read video')
        sys.exit(1)

    ## Select boxes
    bboxes = []
    colors = []

    # OpenCV's selectROI function doesn't work for selecting multiple objects in Python
    # So we will call this function in a loop till we are done selecting all objects
    while True:
        # draw bounding boxes over objects
        # selectROI's default behaviour is to draw box starting from the center
        # when fromCenter is set to false, you can draw box starting from top left corner
        bbox = cv2.selectROI('MultiTracker', frame)
        bboxes.append(bbox)
        colors.append((randint(64, 255), randint(64, 255), randint(64, 255)))
        print("Press q to quit selecting boxes and start tracking")
        print("Press any other key to select next object")
        k = cv2.waitKey(0) & 0xFF
        if k == 113:  # q is pressed
            break

    print('Selected bounding boxes {}'.format(bboxes))

    ## Initialize MultiTracker
    # There are two ways you can initialize multitracker
    # 1. tracker = cv2.MultiTracker("CSRT")
    #    All the trackers added to this multitracker
    #    will use CSRT algorithm as default
    # 2. tracker = cv2.MultiTracker()
    #    No default algorithm specified

    # Initialize MultiTracker with tracking algo
    # Specify tracker type

    # Create MultiTracker object
    multiTracker = cv2.MultiTracker_create()

    # Initialize MultiTracker
    for bbox in bboxes:
        multiTracker.add(createTrackerByName(trackerType), frame, bbox)

    # Process video and track objects
    while cap.isOpened():
        success, frame = cap.read()
        if not success:
            break

        # get updated location of objects in subsequent frames
        success, boxes = multiTracker.update(frame)

        # draw tracked objects
        for i, newbox in enumerate(boxes):
            p1 = (int(newbox[0]), int(newbox[1]))
            p2 = (int(newbox[0] + newbox[2]), int(newbox[1] + newbox[3]))
            cv2.rectangle(frame, p1, p2, colors[i], 2, 1)

        # show frame
        cv2.imshow('MultiTracker', frame)

        # quit on ESC button
        if cv2.waitKey(1) & 0xFF == 27:  # Esc pressed
            break