This article explains how to implement a multilayer perceptron (MLP) in PyTorch. The walkthrough is short and self-contained: it loads Fashion-MNIST, defines a one-hidden-layer network, and trains it with cross-entropy loss and SGD. Follow the code below step by step.
import torch
from torch import nn
from torch.nn import init
import torchvision
from torchvision import transforms

num_inputs = 784    # 28 x 28 pixels per image
num_outputs = 10    # 10 clothing classes
num_hiddens = 256   # hidden-layer width

# Load Fashion-MNIST (downloaded on first run)
mnist_train = torchvision.datasets.FashionMNIST(root='~/Datasets/FashionMNIST', train=True, download=True, transform=transforms.ToTensor())
mnist_test = torchvision.datasets.FashionMNIST(root='~/Datasets/FashionMNIST', train=False, download=True, transform=transforms.ToTensor())

batch_size = 256
train_iter = torch.utils.data.DataLoader(mnist_train, batch_size=batch_size, shuffle=True)
test_iter = torch.utils.data.DataLoader(mnist_test, batch_size=batch_size, shuffle=False)

def evaluate_accuracy(data_iter, net):
    """Classification accuracy of net over all batches in data_iter."""
    acc_sum, n = 0.0, 0
    for X, y in data_iter:
        acc_sum += (net(X).argmax(dim=1) == y).float().sum().item()
        n += y.shape[0]
    return acc_sum / n

def train(net, train_iter, test_iter, loss, num_epochs, batch_size, params=None, lr=None, optimizer=None):
    for epoch in range(num_epochs):
        train_l_sum, train_acc_sum, n = 0.0, 0.0, 0
        for X, y in train_iter:
            y_hat = net(X)
            l = loss(y_hat, y).sum()
            # Reset gradients before the backward pass
            if optimizer is not None:
                optimizer.zero_grad()
            elif params is not None and params[0].grad is not None:
                for param in params:
                    param.grad.data.zero_()
            l.backward()
            if optimizer is not None:  # guard: without it, calling train() with manual params would crash here
                optimizer.step()
            train_l_sum += l.item()
            train_acc_sum += (y_hat.argmax(dim=1) == y).sum().item()
            n += y.shape[0]
        test_acc = evaluate_accuracy(test_iter, net)
        print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f'
              % (epoch + 1, train_l_sum / n, train_acc_sum / n, test_acc))

class Flatten(nn.Module):
    """Flatten each image in the batch into a vector of length 784."""
    def __init__(self):
        super(Flatten, self).__init__()
    def forward(self, x):
        return x.view(x.shape[0], -1)

net = nn.Sequential(
    Flatten(),
    nn.Linear(num_inputs, num_hiddens),
    nn.ReLU(),
    nn.Linear(num_hiddens, num_outputs)
)

# Initialize all weights and biases from N(0, 0.01^2)
for params in net.parameters():
    init.normal_(params, mean=0, std=0.01)

loss = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.5)
num_epochs = 5
train(net, train_iter, test_iter, loss, num_epochs, batch_size, None, None, optimizer)
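After training, the network can be used for inference by taking the argmax over the ten output logits. The standalone sketch below assumes the same Flatten → Linear(784, 256) → ReLU → Linear(256, 10) architecture as above (using the built-in `nn.Flatten` available in recent PyTorch versions), and substitutes a random tensor for a real Fashion-MNIST batch:

```python
import torch
from torch import nn

# Same architecture as in the article; nn.Flatten replaces the custom Flatten class.
net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# A stand-in batch of 4 single-channel 28x28 "images" (random values, not real data).
X = torch.rand(4, 1, 28, 28)

with torch.no_grad():            # no gradients needed at inference time
    logits = net(X)              # shape: (4, 10), one score per class
    preds = logits.argmax(dim=1) # shape: (4,), predicted class index per image

print(logits.shape)  # torch.Size([4, 10])
print(preds.shape)   # torch.Size([4])
```

With a network actually trained as above, each index in `preds` maps to one of the ten Fashion-MNIST classes (t-shirt, trouser, pullover, and so on).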
Thank you for reading. That covers implementing a multilayer perceptron in PyTorch; the best way to consolidate the material is to run and experiment with the code yourself.