In this article we will look at visualizing PyTorch network structures with tensorwatch: drawing the model graph and collecting per-layer parameter statistics.
PyTorch is a deep-learning tensor library optimized for both GPUs and CPUs.
Installation
The required packages can be installed with the following commands:
conda install pytorch-nightly -c pytorch
conda install graphviz
conda install torchvision
conda install tensorwatch
This article is based on the following versions:
torchvision.__version__
'0.2.1'
torch.__version__
'1.2.0.dev20190610'
sys.version
'3.6.8 |Anaconda custom (64-bit)| (default, Dec 30 2018, 01:22:34) [GCC 7.3.0]'
Loading the libraries
import sys
import torch
import tensorwatch as tw
import torchvision.models
Visualizing the network structure
alexnet_model = torchvision.models.alexnet()
tw.draw_model(alexnet_model, [1, 3, 224, 224])
This loads AlexNet. The draw_model function takes three arguments: the first is the model, the second is the input_shape, and the third is the orientation, which can be 'LR' or 'TB' for a left-to-right or top-to-bottom layout respectively (the call above omits it and uses the default).
In a notebook, running the code above displays a diagram like the one below, visualizing the network structure together with each layer's name and shape.
Counting network parameters
The model_stats method reports per-layer parameter statistics.
tw.model_stats(alexnet_model, [1, 3, 224, 224])

[MAdd]: Dropout is not supported!
[Flops]: Dropout is not supported!
[Memory]: Dropout is not supported!
(the three warnings above are repeated for each Dropout layer encountered)

alexnet_model.features
Sequential(
  (0): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2))
  (1): ReLU(inplace=True)
  (2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
  (3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
  (4): ReLU(inplace=True)
  (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
  (6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (7): ReLU(inplace=True)
  (8): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (9): ReLU(inplace=True)
  (10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (11): ReLU(inplace=True)
  (12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
)

alexnet_model.classifier
Sequential(
  (0): Dropout(p=0.5)
  (1): Linear(in_features=9216, out_features=4096, bias=True)
  (2): ReLU(inplace=True)
  (3): Dropout(p=0.5)
  (4): Linear(in_features=4096, out_features=4096, bias=True)
  (5): ReLU(inplace=True)
  (6): Linear(in_features=4096, out_features=1000, bias=True)
)
That covers PyTorch network structure visualization; if you have run into similar questions, the walkthrough above should help you work through them.