This article walks through how a Raspberry Pi smart car can use a camera and OpenCV to track a moving object. If you have never done this before, the notes below on the approach and its pitfalls should help you get a working setup.
After a few days of collecting material, I found that the tracking can be built with OpenCV and Python. So today I will show how to install OpenCV 3.0 and how to use it to make my car follow an object.
I had tried to install OpenCV several times before and always stumbled at the cmake build step, but a problem exists to be solved. After digging through several posts I finally found a reliable guide and got it installed in one afternoon. The installation walkthrough is too long to include here (and would likely be flagged as plagiarism), so I will post it in the comments. With OpenCV installed, the next question was how to implement object tracking. I searched GitHub with the keywords "track car raspberry" and found a project built on a Raspberry Pi plus an Arduino. Fortunately the Arduino was only used to drive the stepper motors, so I ported the motor control over to the Pi's own GPIO pins. After a day of debugging, my adapted Raspberry Pi object-tracking car was born. To be honest it is only a prototype: the steering is not responsive enough and the tracking needs further tuning. My own skills are limited, so I hope others will join in and improve it.
Now for detect.py, the car's tracking source code. How does the tracking work? First the script captures a frame and defines a tracking window that determines which object to follow. Once the target is locked, the car keeps tracking it. The source defines forward, backward, left, right and stop actions; when the locked object moves, the car responds to its position in the frame and drives toward it.
# Import the required packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import cv2
import serial
import syslog
import time
import numpy as np
import RPi.GPIO as GPIO

# Dimensions of the captured frame and of the tracking window
width = 320
height = 240
tracking_width = 40
tracking_height = 40
auto_mode = 0
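It helps to see what these sizes imply before reading on. The script later centres a 40x40 tracking window in the 320x240 frame; a quick sketch of that arithmetic, using Python 3 integer division (`//`) because the result is later used to slice a NumPy array:

```python
# Derive the centred tracking-window coordinates from the frame sizes above.
width, height = 320, 240
tracking_width = tracking_height = 40

c = (width - tracking_width) // 2     # left edge of the centred window
r = (height - tracking_height) // 2   # top edge of the centred window
track_window = (c, r, tracking_width, tracking_height)
print(track_window)  # (140, 100, 40, 40)
```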
# Motion primitives for the car: pins 11/12 drive one side of the H-bridge,
# pins 15/16 the other (BOARD numbering)
def t_stop():
    GPIO.output(11, False)
    GPIO.output(12, False)
    GPIO.output(15, False)
    GPIO.output(16, False)

def t_up():
    # pulse forward briefly, then coast, so the car moves in small steps
    GPIO.output(11, True)
    GPIO.output(12, False)
    GPIO.output(15, True)
    GPIO.output(16, False)
    time.sleep(0.05)
    GPIO.output(11, False)
    GPIO.output(12, False)
    GPIO.output(15, False)
    GPIO.output(16, False)
    time.sleep(0.3)

def t_down():
    GPIO.output(11, False)
    GPIO.output(12, True)
    GPIO.output(15, False)
    GPIO.output(16, True)

def t_left():
    GPIO.output(11, False)
    GPIO.output(12, True)
    GPIO.output(15, True)
    GPIO.output(16, False)
    time.sleep(0.05)
    GPIO.output(11, False)
    GPIO.output(12, False)
    GPIO.output(15, False)
    GPIO.output(16, False)
    time.sleep(0.3)

def t_right():
    GPIO.output(11, True)
    GPIO.output(12, False)
    GPIO.output(15, False)
    GPIO.output(16, True)
    time.sleep(0.05)
    GPIO.output(11, False)
    GPIO.output(12, False)
    GPIO.output(15, False)
    GPIO.output(16, False)
    time.sleep(0.3)
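The motion functions all follow one pattern: each writes a high/low pair to the two H-bridge input pairs. Their truth table can be distilled into a small helper that is testable off the Pi; note that `MOTION_PINS`, `apply_motion` and `FakeGPIO` are illustrative names I introduce here, not part of the original script:

```python
# Pin states implied by the motion functions above (BOARD pins 11/12, 15/16).
MOTION_PINS = {
    'forward':  {11: True,  12: False, 15: True,  16: False},
    'backward': {11: False, 12: True,  15: False, 16: True},
    'left':     {11: False, 12: True,  15: True,  16: False},
    'right':    {11: True,  12: False, 15: False, 16: True},
    'stop':     {11: False, 12: False, 15: False, 16: False},
}

def apply_motion(gpio, motion):
    """Write the pin levels for one motion via any object with output()."""
    for pin, level in MOTION_PINS[motion].items():
        gpio.output(pin, level)

# A stand-in for RPi.GPIO, so the mapping can be exercised without hardware.
class FakeGPIO:
    def __init__(self):
        self.state = {}
    def output(self, pin, level):
        self.state[pin] = level

g = FakeGPIO()
apply_motion(g, 'left')
```

Turning is done tank-style: one side runs forward while the other runs backward, which is why `left` and `right` set opposite pairs.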
def t_open():
    GPIO.setup(22, GPIO.OUT)
    GPIO.output(22, GPIO.LOW)

def t_close():
    GPIO.setup(22, GPIO.IN)
def check_for_direction(position_x):
    GPIO.setmode(GPIO.BOARD)
    GPIO.setwarnings(False)
    GPIO.setup(11, GPIO.OUT)
    GPIO.setup(12, GPIO.OUT)
    GPIO.setup(15, GPIO.OUT)
    GPIO.setup(16, GPIO.OUT)
    GPIO.setup(38, GPIO.OUT)
    if position_x == 0 or position_x == width:
        print('out of bound')
        t_stop()
        return  # without this, the stop is immediately overridden below
    if position_x <= ((width - tracking_width) // 2 - tracking_width):
        print('move right!')
        t_right()
    elif position_x >= ((width - tracking_width) // 2 + tracking_width):
        print('move left!')
        t_left()
    else:
        # print('move front')
        t_up()
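check_for_direction() mixes GPIO setup with the steering decision, but the decision itself is pure arithmetic: with width = 320 and tracking_width = 40 the centred window sits at x = 140, so x <= 100 means turn right, x >= 180 means turn left, and anything between drives forward. A hardware-free sketch of just that logic (`steering_decision` is my name for it, not from the script):

```python
# Pure version of the steering zones used by check_for_direction() above.
def steering_decision(position_x, width=320, tracking_width=40):
    """Map the tracking window's x position to a driving action."""
    if position_x == 0 or position_x == width:
        return 'stop'  # window pinned to the frame edge: target lost
    centre = (width - tracking_width) // 2         # 140 for these sizes
    if position_x <= centre - tracking_width:      # x <= 100
        return 'right'
    if position_x >= centre + tracking_width:      # x >= 180
        return 'left'
    return 'forward'
```

The +/- tracking_width band around the centre acts as a dead zone, so the car does not oscillate left/right while the target is roughly centred.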
# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (width, height)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(width, height))
rawCapture2 = PiRGBArray(camera, size=(width, height))
# allow the camera to warmup
time.sleep(0.1)
# set the ROI (Region of Interest)
c, r, w, h = (width // 2 - tracking_width // 2), (height // 2 - tracking_height // 2), tracking_width, tracking_height
track_window = (c,r,w,h)
# capture single frame of tracking image
camera.capture(rawCapture2, format='bgr')
# create mask and normalized histogram
roi = rawCapture2.array[r:r+h, c:c+w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array([0,30,32]), np.array([180,255,255]))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0,180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 80, 1)
# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format='bgr', use_video_port=True):
    # grab the raw NumPy array representing the image
    image = frame.array
    # back-project the ROI histogram and run meanShift to follow the target
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    ret, track_window = cv2.meanShift(dst, track_window, term_crit)
    x, y, w, h = track_window
    cv2.rectangle(image, (x, y), (x + w, y + h), 255, 2)
    cv2.putText(image, 'Tracked', (x - 25, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
    # show the frame
    cv2.imshow("Raspberry Pi RC Car", image)
    key = cv2.waitKey(1) & 0xFF
    # steer the car toward the tracked window
    check_for_direction(x)
    time.sleep(0.01)
    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)