This article explains how to get the weights or the features of an intermediate layer of a PyTorch model. It should be a useful reference for anyone facing the same problem; hopefully you will come away from it having learned something.
Problem: you have a trained network model and want to look at the weights of a particular intermediate layer, or at the features that layer produces. How do you do that?
1. Get the weights of a given layer and save them to Excel
Using resnet18 as an example:
import torch
import pandas as pd
import numpy as np
import torchvision.models as models

resnet18 = models.resnet18(pretrained=True)
parm = {}
for name, parameters in resnet18.named_parameters():
    print(name, ':', parameters.size())
    parm[name] = parameters.detach().numpy()
The code above stores the parameters of every module in the parm dictionary; parameters.detach().numpy() converts each tensor into a numpy array, which makes it easy to write to a spreadsheet later. The output is:
conv1.weight : torch.Size([64, 3, 7, 7])
bn1.weight : torch.Size([64])
bn1.bias : torch.Size([64])
layer1.0.conv1.weight : torch.Size([64, 64, 3, 3])
layer1.0.bn1.weight : torch.Size([64])
layer1.0.bn1.bias : torch.Size([64])
layer1.0.conv2.weight : torch.Size([64, 64, 3, 3])
layer1.0.bn2.weight : torch.Size([64])
layer1.0.bn2.bias : torch.Size([64])
layer1.1.conv1.weight : torch.Size([64, 64, 3, 3])
layer1.1.bn1.weight : torch.Size([64])
layer1.1.bn1.bias : torch.Size([64])
layer1.1.conv2.weight : torch.Size([64, 64, 3, 3])
layer1.1.bn2.weight : torch.Size([64])
layer1.1.bn2.bias : torch.Size([64])
layer2.0.conv1.weight : torch.Size([128, 64, 3, 3])
layer2.0.bn1.weight : torch.Size([128])
layer2.0.bn1.bias : torch.Size([128])
layer2.0.conv2.weight : torch.Size([128, 128, 3, 3])
layer2.0.bn2.weight : torch.Size([128])
layer2.0.bn2.bias : torch.Size([128])
layer2.0.downsample.0.weight : torch.Size([128, 64, 1, 1])
layer2.0.downsample.1.weight : torch.Size([128])
layer2.0.downsample.1.bias : torch.Size([128])
layer2.1.conv1.weight : torch.Size([128, 128, 3, 3])
layer2.1.bn1.weight : torch.Size([128])
layer2.1.bn1.bias : torch.Size([128])
layer2.1.conv2.weight : torch.Size([128, 128, 3, 3])
layer2.1.bn2.weight : torch.Size([128])
layer2.1.bn2.bias : torch.Size([128])
layer3.0.conv1.weight : torch.Size([256, 128, 3, 3])
layer3.0.bn1.weight : torch.Size([256])
layer3.0.bn1.bias : torch.Size([256])
layer3.0.conv2.weight : torch.Size([256, 256, 3, 3])
layer3.0.bn2.weight : torch.Size([256])
layer3.0.bn2.bias : torch.Size([256])
layer3.0.downsample.0.weight : torch.Size([256, 128, 1, 1])
layer3.0.downsample.1.weight : torch.Size([256])
layer3.0.downsample.1.bias : torch.Size([256])
layer3.1.conv1.weight : torch.Size([256, 256, 3, 3])
layer3.1.bn1.weight : torch.Size([256])
layer3.1.bn1.bias : torch.Size([256])
layer3.1.conv2.weight : torch.Size([256, 256, 3, 3])
layer3.1.bn2.weight : torch.Size([256])
layer3.1.bn2.bias : torch.Size([256])
layer4.0.conv1.weight : torch.Size([512, 256, 3, 3])
layer4.0.bn1.weight : torch.Size([512])
layer4.0.bn1.bias : torch.Size([512])
layer4.0.conv2.weight : torch.Size([512, 512, 3, 3])
layer4.0.bn2.weight : torch.Size([512])
layer4.0.bn2.bias : torch.Size([512])
layer4.0.downsample.0.weight : torch.Size([512, 256, 1, 1])
layer4.0.downsample.1.weight : torch.Size([512])
layer4.0.downsample.1.bias : torch.Size([512])
layer4.1.conv1.weight : torch.Size([512, 512, 3, 3])
layer4.1.bn1.weight : torch.Size([512])
layer4.1.bn1.bias : torch.Size([512])
layer4.1.conv2.weight : torch.Size([512, 512, 3, 3])
layer4.1.bn2.weight : torch.Size([512])
layer4.1.bn2.bias : torch.Size([512])
fc.weight : torch.Size([1000, 512])
fc.bias : torch.Size([1000])
parm['layer1.0.conv1.weight'][0,0,:,:]
The output is:
array([[ 0.05759342, -0.09511436, -0.02027232],
       [-0.07455588, -0.799308  , -0.21283598],
       [ 0.06557069, -0.09653367, -0.01211061]], dtype=float32)
The following function saves all the parameters of one layer to a spreadsheet while preserving the kernel layout, so a 3x3 convolution kernel is still written out as a 3x3 block:
def parm_to_excel(excel_name, key_name, parm):
    with pd.ExcelWriter(excel_name) as writer:
        # parm[key_name] is a numpy array of shape (out_channels, in_channels, k, k),
        # as stored by the loop above
        [output_num, input_num, filter_size, _] = parm[key_name].shape
        for i in range(output_num):
            for j in range(input_num):
                data = pd.DataFrame(parm[key_name][i, j, :, :])
                data.to_excel(writer, index=False, header=True,
                              startrow=i * (filter_size + 1), startcol=j * filter_size)
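For example, to dump the first convolution of the first block of layer1 into its own spreadsheet, the call would look like this (the file name here is arbitrary):

parm_to_excel('layer1_0_conv1.xlsx', 'layer1.0.conv1.weight', parm)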
Because many of the values in the weight matrices are very small, the snippet below keeps only values above a fixed threshold and writes the weights of all layers into one Excel file, one column per parameter tensor:
counter = 1
with pd.ExcelWriter('test1.xlsx') as writer:
    for key in parm.keys():  # parm is the weight dictionary built above
        data = parm[key].reshape(-1, 1)
        data = data[data > 0.001]  # keep only entries greater than 0.001 (this also drops negative weights)
        data = pd.DataFrame(data, columns=[key])
        data.to_excel(writer, index=False, startcol=counter)
        counter += 1
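If you also want to keep large negative weights, one small variation (my own, not part of the original snippet) is to filter on the absolute value instead; inside the loop above, replace the two filtering lines with:

data = parm[key].reshape(-1)
data = data[np.abs(data) > 0.001]  # keep entries whose magnitude exceeds the threshold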
2. Get the features of an intermediate layer
Write your own forward function that returns the output of the layer you need.
import torch.nn.functional as F

def resnet_cifar(net, input_data):
    x = net.conv1(input_data)
    x = net.bn1(x)
    x = F.relu(x)
    x = net.layer1(x)
    x = net.layer2(x)
    x = net.layer3(x)
    x = net.layer4[0].conv1(x)  # this is the output of the first conv layer of the first block of layer4
    x = x.view(x.shape[0], -1)
    return x

model = models.resnet18()
x = resnet_cifar(model, input_data)
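Another common way to grab an intermediate activation, without rewriting the forward pass, is a forward hook. The sketch below is my own illustration (the feat_store dictionary, the dummy input, and the choice of layer are assumptions, not part of the original article):

import torch
import torchvision.models as models

feat_store = {}  # hypothetical container for captured activations

def save_feature(name):
    def hook(module, inputs, output):
        feat_store[name] = output.detach()
    return hook

model = models.resnet18(pretrained=True)
handle = model.layer4[0].conv1.register_forward_hook(save_feature('layer4.0.conv1'))

with torch.no_grad():
    _ = model(torch.randn(1, 3, 224, 224))  # dummy input, for illustration only

print(feat_store['layer4.0.conv1'].shape)  # torch.Size([1, 512, 7, 7]) for a 224x224 input
handle.remove()  # remove the hook when you are done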
Thank you for reading to the end. I hope this article on how to get the weights or features of an intermediate layer in PyTorch has been helpful. For more articles like this, follow the Yisu Cloud (億速雲) industry news channel.
免責(zé)聲明:本站發(fā)布的內(nèi)容(圖片、視頻和文字)以原創(chuàng)、轉(zhuǎn)載和分享為主,文章觀點(diǎn)不代表本網(wǎng)站立場,如果涉及侵權(quán)請(qǐng)聯(lián)系站長郵箱:is@yisu.com進(jìn)行舉報(bào),并提供相關(guān)證據(jù),一經(jīng)查實(shí),將立刻刪除涉嫌侵權(quán)內(nèi)容。