Today we'll look at how to implement automatic face masking (打碼) for videos with Python. The approach, the environment, and the code are covered step by step below.
環(huán)境咱們還是使用 Python3.8 和 pycharm2021 即可
The overall approach:
1. Split the video into an audio track and a picture track;
2. Detect faces in each frame, compare them with the target face, and mask the matching one;
3. Add the audio back to the processed video.
Install the cv2 module manually with pip install opencv-python.
We also need ffmpeg, an audio/video transcoding tool; it must be reachable on the system PATH, because the script invokes it through subprocess.
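If you want to sanity-check the setup before running anything, a quick sketch like this works (purely an optional check; it is not part of the original walkthrough):

import shutil

import cv2
import face_recognition  # will fail here if cmake/dlib are not set up

# ffmpeg must be reachable on the PATH because the script calls it via subprocess
if shutil.which('ffmpeg') is None:
    raise SystemExit('ffmpeg not found on PATH - install it first')
print('OpenCV', cv2.__version__, 'ready; ffmpeg at', shutil.which('ffmpeg'))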
導(dǎo)入需要使用的模塊
import cv2
import face_recognition  # face recognition library (~99.7% accuracy); needs cmake and dlib installed first
import subprocess
將視頻轉(zhuǎn)為音頻
def video2mp3(file_name):
    """
    :param file_name: path of the input video
    :return: None; writes an .mp3 next to the input
    """
    outfile_name = file_name.split('.')[0] + '.mp3'
    # pass the arguments as a list so no shell is needed on any platform
    cmd = ['ffmpeg', '-i', file_name, '-f', 'mp3', outfile_name]
    print(cmd)
    subprocess.call(cmd)
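For example, calling it on the sample file used later (assuming cut.mp4 sits in the working directory) produces cut.mp3 alongside it:

video2mp3('cut.mp4')  # runs: ffmpeg -i cut.mp4 -f mp3 cut.mp3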
Mask the matching face:
def mask_video(input_video, output_video, mask_path='mask.jpg'):
    """
    :param input_video: video to be masked
    :param output_video: path of the masked output video
    :param mask_path: image pasted over the matched face
    :return: None
    """
    # read the mask image
    mask = cv2.imread(mask_path)
    # open the video
    cap = cv2.VideoCapture(input_video)
    # video fps / width / height
    v_fps = cap.get(cv2.CAP_PROP_FPS)
    v_width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
    v_height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
    # writer settings: MP4 format, same frame size as the input
    size = (int(v_width), int(v_height))
    fourcc = cv2.VideoWriter_fourcc('m', 'p', '4', 'v')
    # output video
    out = cv2.VideoWriter(output_video, fourcc, v_fps, size)
    # known (target) face
    known_image = face_recognition.load_image_file('tmr.jpg')
    biden_encoding = face_recognition.face_encodings(known_image)[0]
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        # detect faces; each location is (top, right, bottom, left)
        face_locations = face_recognition.face_locations(frame)
        for (top, right, bottom, left) in face_locations:
            print((top, right, bottom, left))
            # crop the face with some margin, clamped to the frame
            unknown_image = frame[max(top - 50, 0):bottom + 50, max(left - 50, 0):right + 50]
            unknown_encodings = face_recognition.face_encodings(unknown_image)
            if unknown_encodings:
                # compare against the known face, e.g. [True]
                results = face_recognition.compare_faces([biden_encoding], unknown_encodings[0])
                # paste the mask image over the matched face
                if results == [True]:
                    resized_mask = cv2.resize(mask, (right - left, bottom - top))
                    frame[top:bottom, left:right] = resized_mask
        out.write(frame)
    # finalize the output file
    cap.release()
    out.release()
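The function above covers the matched face with an image (mask.jpg). If you would rather pixelate the region instead of pasting a picture, one possible variation is sketched below; pixelate_region is a helper added here for illustration, not part of the original code:

def pixelate_region(frame, top, right, bottom, left, blocks=12):
    # mosaic a rectangular region in place (relies on the cv2 import above):
    # shrink the region, then scale it back up with nearest-neighbour
    # interpolation to get the classic mosaic look
    face = frame[top:bottom, left:right]
    small = cv2.resize(face, (blocks, blocks), interpolation=cv2.INTER_LINEAR)
    frame[top:bottom, left:right] = cv2.resize(
        small, (right - left, bottom - top), interpolation=cv2.INTER_NEAREST)
    return frame

Inside mask_video you would then replace the two lines that build and paste resized_mask with a call to pixelate_region(frame, top, right, bottom, left).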
Add the audio back to the picture:
def video_add_mp3(file_name, mp3_file):
    """
    :param file_name: video (picture-only) file
    :param mp3_file: audio file
    :return: None; writes <name>-f.mp4
    """
    outfile_name = file_name.split('.')[0] + '-f.mp4'
    # mux the two inputs into a new MP4
    cmd = ['ffmpeg', '-i', file_name, '-i', mp3_file, '-strict', '-2', '-f', 'mp4', outfile_name]
    subprocess.call(cmd)
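One optional tweak: the command above re-encodes the picture stream. If you want ffmpeg to copy it unchanged instead (usually faster), '-c:v copy' can be inserted before the output settings; this is an addition of mine, not something the original article uses:

cmd = ['ffmpeg', '-i', file_name, '-i', mp3_file, '-c:v', 'copy', '-strict', '-2', '-f', 'mp4', outfile_name]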
Finally, chain the three steps together:

if __name__ == '__main__':
    # 1. extract the audio track from the original video
    video2mp3('cut.mp4')
    # 2. mask the target face, writing a silent video
    mask_video(input_video='cut.mp4', output_video='output.mp4')
    # 3. mux the audio back into the masked video
    video_add_mp3(file_name='output.mp4', mp3_file='cut.mp3')
That wraps up how to implement automatic video masking in Python.
免責(zé)聲明:本站發(fā)布的內(nèi)容(圖片、視頻和文字)以原創(chuàng)、轉(zhuǎn)載和分享為主,文章觀點不代表本網(wǎng)站立場,如果涉及侵權(quán)請聯(lián)系站長郵箱:is@yisu.com進行舉報,并提供相關(guān)證據(jù),一經(jīng)查實,將立刻刪除涉嫌侵權(quán)內(nèi)容。