This article shows how to build a ResNet18 network by hand and load the pretrained weights that ship with torchvision. It should make a practical reference; let's walk through it together.
If you build the network yourself, its structure, tensor shapes, and parameter names must match the weights that ship with torchvision (the .pth file) exactly; otherwise the weight file cannot be loaded.
A workaround for when a PyTorch pretrained model does not match your own model:
import torch
import torchvision
import cv2 as cv
from utils.utils import letter_box
from model.backbone import ResNet18

model1 = ResNet18(1)
model2 = torchvision.models.resnet18(progress=False)
fc = model2.fc
model2.fc = torch.nn.Linear(512, 1)
# print(model)

model_dict1 = model1.state_dict()
model_dict2 = torch.load('resnet18.pth')
model_list1 = list(model_dict1.keys())
model_list2 = list(model_dict2.keys())

len1 = len(model_list1)
len2 = len(model_list2)
minlen = min(len1, len2)
for n in range(minlen):
    # copy a tensor only when the shapes agree; skip mismatched entries
    if model_dict1[model_list1[n]].shape != model_dict2[model_list2[n]].shape:
        continue
    model_dict1[model_list1[n]] = model_dict2[model_list2[n]]

model1.load_state_dict(model_dict1)
missing, unexpected = model2.load_state_dict(model_dict2)

image = cv.imread('zhn1.jpg')
image = letter_box(image, 224)
image = image[:, :, ::-1].transpose(2, 0, 1)  # BGR -> RGB, HWC -> CHW
print('Network loading complete.')

model1.eval()
model2.eval()
with torch.no_grad():
    image = torch.tensor(image / 256, dtype=torch.float32).unsqueeze(0)
    predict1 = model1(image)
    predict2 = model2(image)
print('finished')
# torch.save(model.state_dict(), 'resnet18.pth')
That is the complete program; at the end you can test whether the original model and the custom model loaded with the bundled weights produce equal outputs.
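As a quick sanity check (a minimal sketch that assumes the predict1 and predict2 tensors from the script above), torch.allclose can confirm the two models agree up to floating-point tolerance:

# If every weight was copied correctly, the two outputs should match
# up to floating-point tolerance.
print(torch.allclose(predict1, predict2, atol=1e-6))  # expected: True
print((predict1 - predict2).abs().max())              # maximum deviation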
Supplement: building a ResNet classification network with PyTorch and training it via transfer learning
A 3×3 convolution with stride 1 and padding 1 does not change the height or width of the feature matrix, since out = (in + 2 × padding − kernel) / stride + 1.
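A quick check of that size formula, as a minimal sketch with standard torch.nn calls:

import torch
import torch.nn as nn

# out = (56 + 2*1 - 3) / 1 + 1 = 56  -> height and width are preserved
conv = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False)
x = torch.randn(1, 64, 56, 56)
print(conv(x).shape)  # torch.Size([1, 64, 56, 56])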
The bias parameter of the convolutions is set to False (with a BN layer following, the output is identical with or without a bias), and the BN layer is placed between the conv layer and the ReLU layer; a small demonstration follows below.
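Here is a minimal sketch showing why the bias is redundant under BN: in training mode BN subtracts the per-channel batch mean, so any constant bias cancels out.

import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(8, 3, 32, 32)

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=True)
bn = nn.BatchNorm2d(16)

y1 = bn(conv(x))
with torch.no_grad():
    conv.bias.zero_()  # remove the bias entirely
y2 = bn(conv(x))

# BN subtracts the per-channel mean, so a constant bias cancels out
print(torch.allclose(y1, y2, atol=1e-5))  # True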
A Batch Norm layer normalizes each layer's data and then applies a linear transformation to improve the data distribution; the linear transformation is learnable.
Advantages of Batch Norm: it reduces overfitting; it improves gradient propagation (weights do not drift too high or too low); it tolerates higher learning rates, which speeds up training; it weakens the strong dependence on weight initialization; it keeps the data in the non-saturating region of the activation function, alleviating the vanishing-gradient problem to some extent; and, acting as a form of regularization, it reduces the need for dropout.
Placement of the Batch Norm layer: there is no settled consensus on whether it should go before or after the activation layer (e.g., ReLU).
BN together with Dropout: the introduction of Batch Norm reduced the use of dropout, but Batch Norm cannot fully replace it; keeping a small dropout rate, such as 0.2, may work better.
Why normalize first and then use the linear transform γ, β to restore something close to the original distribution? Isn't that redundant?
Under certain conditions it can correct the distribution of the original data (the variance and mean become the new γ and β); when the original distribution is already good enough, the transform becomes an identity mapping and leaves the distribution unchanged. Without BN, the variance and mean depend on the parameters of all the preceding layers in a complicated, highly nonlinear way. With the new parameterization γH′ + β, the distribution is determined by γ and β alone, independent of the earlier layers' parameters, so these new parameters are easy to learn by gradient descent and a good distribution can be learned.
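To make the two steps explicit, here is a minimal sketch of batch normalization for a fully connected layer's output H (per-feature statistics; γ and β are the learnable parameters; the small constant eps for numerical stability is an implementation detail assumed here):

import torch

def batch_norm(H, gamma, beta, eps=1e-5):
    # Step 1: normalize to zero mean / unit variance per feature
    mu = H.mean(dim=0)
    var = H.var(dim=0, unbiased=False)
    H_hat = (H - mu) / torch.sqrt(var + eps)
    # Step 2: learnable linear transform gamma * H_hat + beta; the new
    # mean/std are set by beta/gamma alone, not by the earlier layers
    return gamma * H_hat + beta

H = torch.randn(16, 8) * 3 + 5  # batch of 16 samples, 8 features
y = batch_norm(H, gamma=torch.ones(8), beta=torch.zeros(8))
print(y.mean(dim=0).abs().max())  # ~0
print(y.std(dim=0))               # ~1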
import torch
import torch.nn as nn
from model import resnet34
import torchvision.models.resnet  # Ctrl + left-click the module to download the weights

net = resnet34()  # do not set the number of output classes yet: load the pretrained
                  # parameters first, then replace the fully connected layer
# the official way to load a pretrained model
model_weight_path = "./resnet34-pre.pth"  # path to the weights
missing_keys, unexpected_keys = net.load_state_dict(torch.load(model_weight_path),
                                                    strict=False)  # load the weights
inchannel = net.fc.in_features
net.fc = nn.Linear(inchannel, 5)  # redefine the fully connected layer
import torch.nn as nn
import torch


class BasicBlock(nn.Module):
    # residual block for the 18- and 34-layer networks (covers both the
    # solid-line and the dashed-line shortcut variants)
    expansion = 1  # ratio of the last conv's channel count to the first's
                   # within a block: 1 here, 4 for Bottleneck

    def __init__(self, in_channel, out_channel, stride=1, downsample=None):
        # downsample implements the dashed-line (projection) shortcut
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=out_channel,
                               kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channel)
        self.relu = nn.ReLU()
        self.conv2 = nn.Conv2d(in_channels=out_channel, out_channels=out_channel,
                               kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channel)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)  # output of the shortcut branch

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        out += identity
        out = self.relu(out)

        return out  # final output of the residual block


class Bottleneck(nn.Module):
    # residual block for the 50-, 101- and 152-layer networks
    expansion = 4  # the third conv has four times as many kernels as the first two

    def __init__(self, in_channel, out_channel, stride=1, downsample=None):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=out_channel,
                               kernel_size=1, stride=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channel)
        self.conv2 = nn.Conv2d(in_channels=out_channel, out_channels=out_channel,
                               kernel_size=3, stride=stride, bias=False, padding=1)
        self.bn2 = nn.BatchNorm2d(out_channel)
        self.conv3 = nn.Conv2d(in_channels=out_channel, out_channels=out_channel*self.expansion,
                               kernel_size=1, stride=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channel*self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        out += identity
        out = self.relu(out)

        return out


class ResNet(nn.Module):
    # the overall skeleton of the network
    # block is the residual-block class to use; blocks_num is a list with
    # the number of residual blocks in each stage
    def __init__(self, block, blocks_num, num_classes=1000, include_top=True):
        super(ResNet, self).__init__()
        self.include_top = include_top
        self.in_channel = 64  # depth of the feature matrix after the first max-pool

        self.conv1 = nn.Conv2d(3, self.in_channel, kernel_size=7, stride=2,
                               padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(self.in_channel)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, blocks_num[0])
        self.layer2 = self._make_layer(block, 128, blocks_num[1], stride=2)
        self.layer3 = self._make_layer(block, 256, blocks_num[2], stride=2)
        self.layer4 = self._make_layer(block, 512, blocks_num[3], stride=2)
        if self.include_top:
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))  # output size = (1, 1)
            self.fc = nn.Linear(512 * block.expansion, num_classes)

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')

    def _make_layer(self, block, channel, block_num, stride=1):
        # channel: number of kernels used by the first conv layer of the blocks
        downsample = None
        if stride != 1 or self.in_channel != channel * block.expansion:
            # the 18- and 34-layer networks skip this branch for layer1
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channel, channel * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(channel * block.expansion))

        layers = []
        layers.append(block(self.in_channel, channel, downsample=downsample, stride=stride))
        self.in_channel = channel * block.expansion

        for _ in range(1, block_num):
            layers.append(block(self.in_channel, channel))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        if self.include_top:  # True by default
            x = self.avgpool(x)
            x = torch.flatten(x, 1)
            x = self.fc(x)

        return x


def resnet34(num_classes=1000, include_top=True):
    return ResNet(BasicBlock, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)


def resnet101(num_classes=1000, include_top=True):
    return ResNet(Bottleneck, [3, 4, 23, 3], num_classes=num_classes, include_top=include_top)
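A quick smoke test of the skeleton above (a minimal sketch; the 224×224 input size matches the training transforms used below):

import torch

net = resnet34(num_classes=5)
x = torch.randn(1, 3, 224, 224)  # dummy RGB image batch
print(net(x).shape)              # torch.Size([1, 5])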
import torch
import torch.nn as nn
from torchvision import transforms, datasets
import json
import matplotlib.pyplot as plt
import os
import torch.optim as optim
from model import resnet34, resnet101
import torchvision.models.resnet  # Ctrl + left-click the module to download the weights

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)

data_transform = {
    # same normalization as the official pretrained weights
    "train": transforms.Compose([transforms.RandomResizedCrop(224),
                                 transforms.RandomHorizontalFlip(),
                                 transforms.ToTensor(),
                                 transforms.Normalize([0.485, 0.456, 0.406],
                                                      [0.229, 0.224, 0.225])]),
    "val": transforms.Compose([transforms.Resize(256),
                               transforms.CenterCrop(224),
                               transforms.ToTensor(),
                               transforms.Normalize([0.485, 0.456, 0.406],
                                                    [0.229, 0.224, 0.225])])}

data_root = os.path.abspath(os.path.join(os.getcwd(), "../.."))  # get data root path
image_path = data_root + "/data_set/flower_data/"  # flower data set path

train_dataset = datasets.ImageFolder(root=image_path + "train",
                                     transform=data_transform["train"])
train_num = len(train_dataset)

# {'daisy':0, 'dandelion':1, 'roses':2, 'sunflower':3, 'tulips':4}
flower_list = train_dataset.class_to_idx
cla_dict = dict((val, key) for key, val in flower_list.items())
# write dict into json file
json_str = json.dumps(cla_dict, indent=4)
with open('class_indices.json', 'w') as json_file:
    json_file.write(json_str)

batch_size = 16
train_loader = torch.utils.data.DataLoader(train_dataset,
                                           batch_size=batch_size, shuffle=True,
                                           num_workers=0)

validate_dataset = datasets.ImageFolder(root=image_path + "val",
                                        transform=data_transform["val"])
val_num = len(validate_dataset)
validate_loader = torch.utils.data.DataLoader(validate_dataset,
                                              batch_size=batch_size, shuffle=False,
                                              num_workers=0)

net = resnet34()  # do not set the number of output classes yet: load the pretrained
                  # parameters first, then replace the fully connected layer
# the official way to load a pretrained model
model_weight_path = "./resnet34-pre.pth"  # path to the weights
missing_keys, unexpected_keys = net.load_state_dict(torch.load(model_weight_path),
                                                    strict=False)  # load the weights
inchannel = net.fc.in_features
net.fc = nn.Linear(inchannel, 5)  # redefine the fully connected layer
net.to(device)

loss_function = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.0001)

best_acc = 0.0
save_path = './resNet34.pth'
for epoch in range(3):
    # train
    net.train()  # puts the BN layers into training mode
    running_loss = 0.0
    for step, data in enumerate(train_loader, start=0):
        images, labels = data
        optimizer.zero_grad()
        logits = net(images.to(device))
        loss = loss_function(logits, labels.to(device))
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        # print train progress
        rate = (step + 1) / len(train_loader)
        a = "*" * int(rate * 50)
        b = "." * int((1 - rate) * 50)
        print("\rtrain loss: {:^3.0f}%[{}->{}]{:.4f}".format(int(rate * 100), a, b, loss), end="")
    print()

    # validate
    net.eval()  # puts the BN layers into evaluation mode
    acc = 0.0  # accumulate the number of correct predictions per epoch
    with torch.no_grad():
        for val_data in validate_loader:
            val_images, val_labels = val_data
            outputs = net(val_images.to(device))  # in eval mode only the last output layer matters
            # loss = loss_function(outputs, test_labels)
            predict_y = torch.max(outputs, dim=1)[1]
            acc += (predict_y == val_labels.to(device)).sum().item()
        val_accurate = acc / val_num
        if val_accurate > best_acc:
            best_acc = val_accurate
            torch.save(net.state_dict(), save_path)
        print('[epoch %d] train_loss: %.3f  test_accuracy: %.3f' %
              (epoch + 1, running_loss / step, val_accurate))

print('Finished Training')
import torch
from model import resnet34
from PIL import Image
from torchvision import transforms
import matplotlib.pyplot as plt
import json

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# the same normalization as used for training
data_transform = transforms.Compose(
    [transforms.Resize(256),
     transforms.CenterCrop(224),
     transforms.ToTensor(),
     transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

# load image
img = Image.open("../aa.jpg")
plt.imshow(img)
# [N, C, H, W]
img = data_transform(img)
# expand batch dimension
img = torch.unsqueeze(img, dim=0)

# read class_indict
try:
    json_file = open('./class_indices.json', 'r')
    class_indict = json.load(json_file)
except Exception as e:
    print(e)
    exit(-1)

# create model
model = resnet34(num_classes=5)
# load model weights
model_weight_path = "./resNet34.pth"
model.load_state_dict(torch.load(model_weight_path, map_location=device))  # load the trained parameters
model.eval()  # switch to eval() mode
with torch.no_grad():  # do not track gradients
    # predict class
    output = torch.squeeze(model(img))  # squeeze out the batch dimension
    predict = torch.softmax(output, dim=0)  # softmax to get a probability distribution
    predict_cla = torch.argmax(predict).numpy()  # index of the largest probability
print(class_indict[str(predict_cla)], predict[predict_cla].numpy())  # print the class and its probability
plt.show()
Thank you for reading! That is all for "how to build a ResNet18 network and load the weights that ship with torchvision". I hope the above is helpful; if you found the article useful, feel free to share it so more people can see it!