
[Python] Dynamic display with matplotlib, explained

Published: 2020-10-10 01:36:39 Source: 腳本之家 Views: 183 Author: CallMeJacky Category: Development

1. Dynamic plotting with matplotlib

To update a plot while a Python script is still running, matplotlib's interactive mode must be enabled. The core code is as follows:

plt.ion()  # enable interactive mode -- the key to live updating
fig = plt.figure(1)

for i in range(100):
    # load the weights saved after epoch i+1
    filepath = "E:/Model/weights-improvement-" + str(i + 1) + ".hdf5"
    model.load_weights(filepath)
    # evaluate the current model on the test grid
    x_new = np.linspace(low, up, 1000)
    y_new = getfit(model, x_new)
    # redraw: ground truth, samples, current fit
    plt.clf()
    plt.plot(x, y)
    plt.scatter(x_sample, y_sample)
    plt.plot(x_new, y_new)

    # save this frame for the GIF below
    ffpath = "E:/imgs/" + str(i) + ".jpg"
    plt.savefig(ffpath)

    plt.pause(0.01)  # pause 0.01 s so the window can refresh

# stitch the saved frames into an animated GIF
ani = animation.FuncAnimation(plt.figure(2), update, range(100), init_func=init, interval=500)
ani.save("E:/test.gif", writer='pillow')

plt.ioff()  # disable interactive mode
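The loop above depends on saved model weights. As a minimal, self-contained sketch of just the interactive-mode pattern, the phase-shifted sine below stands in for the evolving model fit, and the Agg backend is selected only so the sketch also runs headless (interactively you would keep your GUI backend):

```python
import math

import matplotlib
matplotlib.use("Agg")  # headless; drop this line to see a live window
import numpy as np
from matplotlib import pyplot as plt

plt.ion()  # enable interactive mode: plotting calls no longer block
x = np.linspace(0, 2 * math.pi, 200)
for i in range(3):
    plt.clf()                         # wipe the previous frame
    plt.plot(x, np.sin(x - i * 0.5))  # stand-in for the changing fit
    plt.pause(0.01)                   # give the event loop time to redraw
plt.ioff()  # back to blocking mode
print(plt.isinteractive())  # False
```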

2. A worked example

Given the following data sampled from a sine function:

 #     x        y
 1   0.093   -0.81
 2   0.58    -0.45
 3   1.04    -0.007
 4   1.55     0.48
 5   2.15     0.89
 6   2.62     0.997
 7   2.71     0.995
 8   2.73     0.993
 9   3.03     0.916
10   3.14     0.86
11   3.58     0.57
12   3.66     0.504
13   3.81     0.369
14   3.83     0.35
15   4.39    -0.199
16   4.44    -0.248
17   4.6     -0.399
18   5.39    -0.932
19   5.54    -0.975
20   5.76    -0.999
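These values can be sanity-checked against the underlying function, which (as the training code below shows) is sin(x − π/3); the tolerance allows for the table's rounding:

```python
import math

# the function the table was sampled from: sin(x - pi/3)
def target(x):
    return math.sin(x - math.pi / 3)

# check a few rows of the table (both x and y are rounded there)
for xi, yi in [(0.093, -0.81), (3.14, 0.86), (5.76, -0.999)]:
    assert abs(target(xi) - yi) < 0.01
```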

We train a simple three-layer neural network as a fitter for the sine function, and visualize the fitted curve as training progresses.


2.1 Implementing the network training

The main task is to define a three-layer neural network with 1 input node, 10 hidden nodes, and 1 output node.

import math
import random
import os

import numpy as np
from matplotlib import pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint


# draw num points uniformly from [low, up] and sort them
def sample(low, up, num):
    data = []
    for i in range(num):
        data.append(random.uniform(low, up))
    data.sort()
    return data


# the target function: sin(x - pi/3)
def func(x):
    y = []
    for i in range(len(x)):
        y.append(math.sin(x[i] - math.pi / 3))
    return y


# evaluate the model on every point in x
def getfit(model, x):
    y = []
    for i in range(len(x)):
        tmp = model.predict(np.array([x[i]]), verbose=0)
        y.append(tmp[0][0])
    return y


# recursively delete every file under path
def del_file(path):
    ls = os.listdir(path)
    for i in ls:
        c_path = os.path.join(path, i)
        if os.path.isdir(c_path):
            del_file(c_path)
        else:
            os.remove(c_path)


if __name__ == '__main__':
    path = "E:/Model/"
    del_file(path)

    low = 0
    up = 2 * math.pi
    x = np.linspace(low, up, 1000)
    y = func(x)

    # data sampling; the fixed list below was produced by sample(low, up, 20)
    # x_sample = sample(low, up, 20)
    x_sample = [0.09326442022999694, 0.5812590520508311, 1.040490143783586, 1.5504427746047338, 2.1589557183817036, 2.6235357787018407, 2.712578091093361, 2.7379109336528167, 3.0339662651841186, 3.147676812083248, 3.58596337171837, 3.6621496731124314, 3.81130899864203, 3.833092859928872, 4.396611340802901, 4.4481080339256875, 4.609657879057151, 5.399731063412583, 5.54299720786794, 5.764084730699906]
    y_sample = func(x_sample)

    # callback: save the weights after every epoch
    filepath = "E:/Model/weights-improvement-{epoch:d}.hdf5"
    checkpoint = ModelCheckpoint(filepath, verbose=1, save_best_only=False, mode='max')
    callbacks_list = [checkpoint]

    # build the sequential model: 1 input -> 10 hidden (relu) -> 1 output (tanh)
    model = Sequential()
    model.add(Dense(10, input_dim=1, kernel_initializer='uniform', activation='relu'))
    model.add(Dense(1, kernel_initializer='uniform', activation='tanh'))
    adam = Adam(learning_rate=0.05)
    model.compile(loss='mean_squared_error', optimizer=adam)
    model.fit(x_sample, y_sample, epochs=1000, batch_size=20, callbacks=callbacks_list)

    # test data
    x_new = np.linspace(low, up, 1000)
    y_new = getfit(model, x_new)

    # visualize: ground truth, samples, and the fitted curve
    plt.plot(x, y)
    plt.scatter(x_sample, y_sample)
    plt.plot(x_new, y_new)

    plt.show()

2.2 Saving the model

A crucial step during training is saving the model's weights to disk after each epoch; it is the basis for visualizing the fitting process later. The model files saved during training are shown in the figure below.

(screenshot: the saved checkpoint files weights-improvement-*.hdf5 under E:/Model/)

The key to saving the model is the callbacks argument of the fit function: at the end of every epoch, each callback in the list is invoked. Here the callback is a ModelCheckpoint, whose parameters are listed in the table below:

Parameter          Meaning
filepath           string, path to save the model; may contain format placeholders such as {epoch}
verbose            verbosity mode, 0 or 1 (1 prints e.g. "Epoch 00001: saving model to ...")
monitor            quantity to monitor
mode               one of 'auto', 'min', 'max'
save_best_only     if True, the model is only saved when the monitored quantity improves; mode then decides what counts as an improvement -- for val_acc use 'max', for val_loss use 'min'; in 'auto' mode the direction is inferred from the monitored quantity's name
save_weights_only  if True, only the weights are saved; otherwise the full model (architecture, configuration, etc.) is saved
period             number of epochs between checkpoints
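The {epoch} placeholder in filepath is expanded with Python's str.format each time a checkpoint is written (in recent Keras versions the epoch number passed in is 1-based). A quick illustration of how the file names loaded later, weights-improvement-1.hdf5 and so on, come about; note that the article's `{epoch:00d}` spec (zero-fill, width 0) formats identically to a plain `{epoch:d}`:

```python
# ModelCheckpoint expands the filepath template with str.format
template = "weights-improvement-{epoch:00d}.hdf5"

names = [template.format(epoch=e) for e in (1, 2, 3)]
print(names)
# ['weights-improvement-1.hdf5', 'weights-improvement-2.hdf5', 'weights-improvement-3.hdf5']
```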

  # callback: save the weights after every epoch
  filepath = "E:/Model/weights-improvement-{epoch:d}.hdf5"
  checkpoint = ModelCheckpoint(filepath, verbose=1, save_best_only=False, mode='max')
  callbacks_list = [checkpoint]

  # build the sequential model: 1 input -> 10 hidden (relu) -> 1 output (tanh)
  model = Sequential()
  model.add(Dense(10, input_dim=1, kernel_initializer='uniform', activation='relu'))
  model.add(Dense(1, kernel_initializer='uniform', activation='tanh'))
  adam = Adam(learning_rate=0.05)
  model.compile(loss='mean_squared_error', optimizer=adam)
  model.fit(x_sample, y_sample, epochs=1000, batch_size=20, callbacks=callbacks_list)

2.3 Visualizing the fitting process

With the models saved above, matplotlib can replay the fitting process in real time.

import math
import random

import numpy as np
from matplotlib import pyplot as plt
import matplotlib.animation as animation
from keras.models import Sequential
from keras.layers import Dense
from PIL import Image


# draw num points uniformly from [low, up] and sort them
def sample(low, up, num):
    data = []
    for i in range(num):
        data.append(random.uniform(low, up))
    data.sort()
    return data


# the target function: sin(x - pi/3)
def func(x):
    y = []
    for i in range(len(x)):
        y.append(math.sin(x[i] - math.pi / 3))
    return y


# evaluate the model on every point in x
def getfit(model, x):
    y = []
    for i in range(len(x)):
        tmp = model.predict(np.array([x[i]]), verbose=0)
        y.append(tmp[0][0])
    return y


# first frame of the animation
def init():
    fpath = "E:/imgs/0.jpg"
    img = Image.open(fpath)
    plt.axis('off')  # hide the axes
    return plt.imshow(img)


# i-th frame of the animation
def update(i):
    fpath = "E:/imgs/" + str(i) + ".jpg"
    img = Image.open(fpath)
    plt.axis('off')  # hide the axes
    return plt.imshow(img)


if __name__ == '__main__':
    low = 0
    up = 2 * math.pi
    x = np.linspace(low, up, 1000)
    y = func(x)

    # data sampling; the fixed list below was produced by sample(low, up, 20)
    # x_sample = sample(low, up, 20)
    x_sample = [0.09326442022999694, 0.5812590520508311, 1.040490143783586, 1.5504427746047338, 2.1589557183817036, 2.6235357787018407, 2.712578091093361, 2.7379109336528167, 3.0339662651841186, 3.147676812083248, 3.58596337171837, 3.6621496731124314, 3.81130899864203, 3.833092859928872, 4.396611340802901, 4.4481080339256875, 4.609657879057151, 5.399731063412583, 5.54299720786794, 5.764084730699906]
    y_sample = func(x_sample)

    # rebuild the same architecture the saved weights were trained with
    model = Sequential()
    model.add(Dense(10, input_dim=1, kernel_initializer='uniform', activation='relu'))
    model.add(Dense(1, kernel_initializer='uniform', activation='tanh'))

    plt.ion()  # enable interactive mode -- the key to live updating
    fig = plt.figure(1)

    for i in range(100):
        # load the weights saved after epoch i+1
        filepath = "E:/Model/weights-improvement-" + str(i + 1) + ".hdf5"
        model.load_weights(filepath)
        # evaluate the current model on the test grid
        x_new = np.linspace(low, up, 1000)
        y_new = getfit(model, x_new)
        # redraw: ground truth, samples, current fit
        plt.clf()
        plt.plot(x, y)
        plt.scatter(x_sample, y_sample)
        plt.plot(x_new, y_new)

        # save this frame for the GIF below
        ffpath = "E:/imgs/" + str(i) + ".jpg"
        plt.savefig(ffpath)

        plt.pause(0.01)  # pause 0.01 s so the window can refresh

    # stitch the saved frames into an animated GIF
    ani = animation.FuncAnimation(plt.figure(2), update, range(100), init_func=init, interval=500)
    ani.save("E:/test.gif", writer='pillow')

    plt.ioff()  # disable interactive mode
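The script above depends on trained weights and images under E:/. As a minimal, self-contained sketch of the same redraw-save-animate idea (the file names, temp directory, and the phase-shifted sine are stand-ins for illustration, not the article's data), one can render frames and stitch them into a GIF without any model at all:

```python
import math
import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # headless backend; on a GUI backend add plt.ion()/plt.pause()
import numpy as np
from matplotlib import pyplot as plt
import matplotlib.animation as animation

outdir = tempfile.mkdtemp()  # stand-in for E:/imgs/
x = np.linspace(0, 2 * math.pi, 200)

# redraw-and-save loop: one frame per "training step"
for i in range(5):
    plt.clf()
    plt.plot(x, np.sin(x - i * 0.2))  # stand-in for the evolving fitted curve
    plt.savefig(os.path.join(outdir, f"{i}.jpg"))

# stitch the saved frames into a GIF with FuncAnimation + the pillow writer
fig = plt.figure()

def update(i):
    plt.clf()
    plt.axis("off")
    return plt.imshow(plt.imread(os.path.join(outdir, f"{i}.jpg")))

ani = animation.FuncAnimation(fig, update, frames=range(5), interval=500)
ani.save(os.path.join(outdir, "demo.gif"), writer="pillow")
print(os.path.exists(os.path.join(outdir, "demo.gif")))  # True
```

The two-phase design mirrors the article's: the live loop writes each frame to disk, and the animation pass only reloads and displays those images, so the GIF can be rebuilt without rerunning the expensive part.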


That concludes this walkthrough of dynamic display with matplotlib. I hope it helps; if you have any questions, leave me a comment and I will reply as soon as I can. Thanks as well for your support of the 億速云 site!
