This article explains in detail how to implement live stream pushing in Python. The approach is quite practical, so it is shared here for reference; hopefully you will get something out of it.
First, the end result: the goal is to detect whether an industrial board appears in the video. The detection method is deliberately simple, using OpenCV template matching.
Overall approach

1. Read the video with OpenCV
2. Split the video into frames
3. Process each frame (OpenCV template matching)
4. Write each processed frame into a pipe
5. Push the stream live with FFmpeg
Problems encountered along the way

When processing a local video there was no lag or stutter. With a real-time video stream, however, the output lagged and stuttered. After some searching, a multi-process approach solved it.
Reading video with OpenCV
```python
import cv2

def run_opencv_camera():
    video_stream_path = 0  # 0 opens the computer's default camera; a local video file path also works
    cap = cv2.VideoCapture(video_stream_path)
    while cap.isOpened():
        is_opened, frame = cap.read()
        if not is_opened:
            break
        cv2.imshow('frame', frame)
        cv2.waitKey(1)
    cap.release()
```
OpenCV template matching
Template matching is one way to find a specific target inside an image. The principle is very simple: walk over every possible position in the image, compare each location against the template, and when the similarity is high enough, consider the target found.
```python
def template_match(img_rgb):
    # Convert to grayscale
    img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
    # Template matching
    res = cv2.matchTemplate(img_gray, template, cv2.TM_CCOEFF_NORMED)
    # Set the threshold
    threshold = 0.8
    loc = np.where(res >= threshold)
    if len(loc[0]):
        # The overlay region is fixed here
        cv2.rectangle(img_rgb, (155, 515), (1810, 820), (0, 0, 255), 3)
        cv2.putText(img_rgb, category, (240, 600), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.putText(img_rgb, Confidence, (240, 640), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.putText(img_rgb, Precision, (240, 680), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.putText(img_rgb, product_yield, (240, 720), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.putText(img_rgb, result, (240, 780), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 5)
    return img_rgb
```

(`template` and the text variables such as `category` are globals defined in the full listing below.)
Pushing the stream with FFmpeg
Installing an Nginx-RTMP streaming server on Ubuntu 14: https://www.jb51.net/article/175121.htm
```python
import subprocess as sp
import cv2 as cv

rtmpUrl = ""
camera_path = ""
cap = cv.VideoCapture(camera_path)

# Get video information
fps = int(cap.get(cv.CAP_PROP_FPS))
width = int(cap.get(cv.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv.CAP_PROP_FRAME_HEIGHT))

# ffmpeg command
command = ['ffmpeg',
           '-y',
           '-f', 'rawvideo',
           '-vcodec', 'rawvideo',
           '-pix_fmt', 'bgr24',
           '-s', "{}x{}".format(width, height),
           '-r', str(fps),
           '-i', '-',
           '-c:v', 'libx264',
           '-pix_fmt', 'yuv420p',
           '-preset', 'ultrafast',
           '-f', 'flv',
           rtmpUrl]

# Pipe configuration
p = sp.Popen(command, stdin=sp.PIPE)

# Read from the webcamera
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        print("Opening camera is failed")
        break
    # process frame
    # your code
    # process frame
    # write to pipe
    p.stdin.write(frame.tobytes())
```
Note: `rtmpUrl` is the address of the server that receives the video; set up the server following the link above and point the URL at it.
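For reference, a minimal nginx-rtmp server block that would accept the `rtmp://localhost/hls/test` URL used in the full listing below might look like this; the port, application name, and `hls_path` are assumptions to adapt from the tutorial linked above:

```nginx
rtmp {
    server {
        listen 1935;              # default RTMP port
        chunk_size 4096;
        application hls {         # matches the /hls/ part of the push URL
            live on;
            hls on;               # optionally also produce HLS segments
            hls_path /tmp/hls;
        }
    }
}
```

Once nginx is running with such a config and frames are being pushed, the stream can be checked with `ffplay rtmp://localhost/hls/test` or any RTMP-capable player.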
Multi-process handling
Python multiprocessing programming: https://www.jb51.net/article/134726.htm
```python
import time
import multiprocessing as mp
import cv2

def image_put(q):
    # Validate with a local video file
    cap = cv2.VideoCapture("./new.mp4")
    # Or read from a live stream:
    # cap = cv2.VideoCapture(0)
    # cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
    # cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
    if cap.isOpened():
        print('success')
    else:
        print('failed')
    while True:
        q.put(cap.read()[1])
        # Drop the stale frame when the consumer falls behind
        q.get() if q.qsize() > 1 else time.sleep(0.01)

def image_get(q):
    while True:
        # start = time.time()
        # flag += 1
        frame = q.get()
        frame = template_match(frame)
        # end = time.time()
        # print("the time is", end - start)
        cv2.imshow("frame", frame)
        cv2.waitKey(1)
        # pipe.stdin.write(frame.tobytes())
        # cv2.imwrite(save_path + "%d.jpg" % flag, frame)

# Run a single camera with two processes
def run_single_camera():
    # Initialization
    mp.set_start_method(method='spawn')  # init
    # Queue shared by the two processes
    queue = mp.Queue(maxsize=2)
    processes = [mp.Process(target=image_put, args=(queue,)),
                 mp.Process(target=image_get, args=(queue,))]
    [process.start() for process in processes]
    [process.join() for process in processes]

def run():
    run_single_camera()  # quick, with 2 processes
```
Note: this uses Python 3's built-in multiprocessing module to create a queue shared by two processes. Process A reads each frame from the video stream over RTSP and puts it into the queue; process B takes frames out of the queue, processes them, and displays them. If process A finds two frames in the queue, meaning process B cannot keep up with A's read speed, process A proactively removes the old frame from the queue so the new one replaces it.
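The drop-the-stale-frame idea can be sketched without OpenCV or real processes; the `put_latest` helper and the "frame" strings below are made up for illustration:

```python
import queue

def put_latest(q, frame):
    """Producer side: enqueue a frame; if the consumer lags, discard the oldest."""
    q.put(frame)
    if q.qsize() > 1:
        q.get()  # same logic as `q.get() if q.qsize() > 1 ...` in the article

q = queue.Queue(maxsize=2)
# The producer runs three times before the consumer reads once
put_latest(q, "frame-1")
put_latest(q, "frame-2")  # frame-1 is dropped
put_latest(q, "frame-3")  # frame-2 is dropped
latest = q.get()
print(latest)  # → frame-3
```

The consumer therefore always sees the newest frame instead of working through a growing backlog, which is exactly what removes the live-stream lag.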
Full code listing
```python
import time
import multiprocessing as mp
import numpy as np
import random
import subprocess as sp
import cv2
import os

# Template used by OpenCV
template_path = "./high_img_template.jpg"

# Variables displayed inside the rectangle
category = "Category: board"
var_confidence = (np.random.randint(86, 98)) / 100
Confidence = "Confidence: " + str(var_confidence)
var_precision = round(random.uniform(98, 99), 2)
Precision = "Precision: " + str(var_precision) + "%"
product_yield = "Product Yield: 100%"
result = "Result: perfect"

# Read the template and get its height and width
template = cv2.imread(template_path, 0)
h, w = template.shape[:2]

# Template matching function
def template_match(img_rgb):
    # Convert to grayscale
    img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
    # Template matching
    res = cv2.matchTemplate(img_gray, template, cv2.TM_CCOEFF_NORMED)
    # Set the threshold
    threshold = 0.8
    loc = np.where(res >= threshold)
    if len(loc[0]):
        # The overlay region is fixed here
        cv2.rectangle(img_rgb, (155, 515), (1810, 820), (0, 0, 255), 3)
        cv2.putText(img_rgb, category, (240, 600), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.putText(img_rgb, Confidence, (240, 640), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.putText(img_rgb, Precision, (240, 680), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.putText(img_rgb, product_yield, (240, 720), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.putText(img_rgb, result, (240, 780), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 5)
    return img_rgb

# Video properties
size = (1920, 1080)
sizeStr = str(size[0]) + 'x' + str(size[1])
# fps = cap.get(cv2.CAP_PROP_FPS)  # 30p/self
# fps = int(fps)
fps = 11
hz = int(1000.0 / fps)
print('size:' + sizeStr + ' fps:' + str(fps) + ' hz:' + str(hz))
rtmpUrl = 'rtmp://localhost/hls/test'

# Push to rtmp with ffmpeg; the key point is sharing data through a pipe
command = ['ffmpeg',
           '-y',
           '-f', 'rawvideo',
           '-vcodec', 'rawvideo',
           '-pix_fmt', 'bgr24',
           '-s', sizeStr,
           '-r', str(fps),
           '-i', '-',
           '-c:v', 'libx264',
           '-pix_fmt', 'yuv420p',
           '-preset', 'ultrafast',
           '-f', 'flv',
           rtmpUrl]

# Pipe configuration
# pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)
pipe = sp.Popen(command, stdin=sp.PIPE)  # shell=False
# pipe.stdin.write(frame.tobytes())

def image_put(q):
    # Validate with a local video file
    cap = cv2.VideoCapture("./new.mp4")
    # Or read from a live stream:
    # cap = cv2.VideoCapture(0)
    # cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
    # cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
    if cap.isOpened():
        print('success')
    else:
        print('failed')
    while True:
        q.put(cap.read()[1])
        # Drop the stale frame when the consumer falls behind
        q.get() if q.qsize() > 1 else time.sleep(0.01)

# Directory for frames saved from a local video
save_path = "./res_imgs"
if not os.path.exists(save_path):
    os.makedirs(save_path)

def image_get(q):
    while True:
        # start = time.time()
        # flag += 1
        frame = q.get()
        frame = template_match(frame)
        # end = time.time()
        # print("the time is", end - start)
        cv2.imshow("frame", frame)
        cv2.waitKey(1)
        # pipe.stdin.write(frame.tobytes())
        # cv2.imwrite(save_path + "%d.jpg" % flag, frame)

# Run a single camera with two processes
def run_single_camera():
    # Initialization
    mp.set_start_method(method='spawn')  # init
    # Queue shared by the two processes
    queue = mp.Queue(maxsize=2)
    processes = [mp.Process(target=image_put, args=(queue,)),
                 mp.Process(target=image_get, args=(queue,))]
    [process.start() for process in processes]
    [process.join() for process in processes]

def run():
    run_single_camera()  # quick, with 2 processes

if __name__ == '__main__':
    run()
```
That is it for "how to implement live stream pushing in Python". Hopefully the content above is of some help and you learned something from it; if you found the article useful, please share it so more people can see it.