
PyTorch: What Is Softmax Multi-Class Classification?

Published: 2020-07-08 10:47:51 · Source: 億速云 · Author: 清晨 · Column: Development

This article gives a detailed walkthrough of softmax multi-class classification in PyTorch. The editor finds it quite practical and shares it here as a reference; I hope you get something out of reading it.

A common approach to multi-class classification is to apply softmax normalization to the last layer; the index of the dimension with the largest value is then taken as the sample's predicted class. This article uses the PyTorch framework and the classic MNIST image dataset to work through multi-class classification.
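As a quick illustration (a minimal sketch, not from the original article), PyTorch can turn raw scores into such a probability distribution and a predicted class in a couple of lines:

import torch
import torch.nn.functional as F

logits = torch.tensor([[1.0, 2.0, 0.5]])  # raw scores for 3 hypothetical classes
probs = F.softmax(logits, dim=1)          # each row now sums to 1
pred = probs.argmax(dim=1)                # index of the largest probability = predicted class
print(probs, pred)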

The MNIST Dataset

The MNIST dataset (a database of handwritten digits) comes from the US National Institute of Standards and Technology (NIST). The training set consists of digits handwritten by 250 different people, 50% of whom were high school students and 50% staff of the Census Bureau. The test set contains handwritten digits in the same proportions. Download: http://yann.lecun.com/exdb/mnist/. The database comprises 60,000 training samples and 10,000 test samples.


The files are:

train-images-idx3-ubyte.gz (training set images)

train-labels-idx1-ubyte.gz (training set labels)

t10k-images-idx3-ubyte.gz (test set images)

t10k-labels-idx1-ubyte.gz (test set labels)


MNIST is a classic image dataset with 10 classes (the digits 0 through 9). Each 28×28 image is flattened into a vector, and the resulting 784-dimensional vector is fed to the first layer as the input features, as illustrated below.

[Figure: a 28×28 handwritten digit flattened into a 784-dimensional input vector]
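To make this concrete, here is a small sketch (assuming torchvision is installed; the root path is just an example) that loads one MNIST image and flattens it:

from torchvision import datasets, transforms

mnist = datasets.MNIST(root='./mnist_data/', train=True,
                       transform=transforms.ToTensor(), download=True)
img, label = mnist[0]        # img has shape (1, 28, 28)
flat = img.view(-1, 784)     # flattened to (1, 784) for the first linear layer
print(img.shape, flat.shape, label)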

Softmax Classification

In essence, the softmax function compresses (maps) an arbitrary K-dimensional real vector into another K-dimensional real vector in which every element lies in (0, 1) and the K values sum to 1, i.e. it produces a probability distribution. When softmax is used for multi-class classification, the class can be chosen by the magnitude of these values, e.g. taking the dimension with the largest weight. Introductions to softmax are easy to find online, so it is not covered in depth here. Below we use PyTorch to define a multi-layer network (4 hidden layers, with softmax probability normalization on the last layer; the code actually uses log_softmax paired with a negative log-likelihood loss, which amounts to the same thing). The output layer has 10 units, one per class.
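For reference, the standard softmax formula for a K-dimensional vector z is:

$$\operatorname{softmax}(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}, \qquad i = 1, \dots, K$$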

[Figure: the fully connected network, 784 → 520 → 320 → 240 → 120 → 10]

PyTorch in Practice

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms

# Training settings
batch_size = 64

# MNIST Dataset
train_dataset = datasets.MNIST(root='./mnist_data/',
                               train=True,
                               transform=transforms.ToTensor(),
                               download=True)

test_dataset = datasets.MNIST(root='./mnist_data/',
                              train=False,
                              transform=transforms.ToTensor())

# Data Loader (Input Pipeline)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)

test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Fully connected layers: 784 -> 520 -> 320 -> 240 -> 120 -> 10
        self.l1 = nn.Linear(784, 520)
        self.l2 = nn.Linear(520, 320)
        self.l3 = nn.Linear(320, 240)
        self.l4 = nn.Linear(240, 120)
        self.l5 = nn.Linear(120, 10)

    def forward(self, x):
        # Flatten the data (n, 1, 28, 28) --> (n, 784)
        x = x.view(-1, 784)
        x = F.relu(self.l1(x))
        x = F.relu(self.l2(x))
        x = F.relu(self.l3(x))
        x = F.relu(self.l4(x))
        return F.log_softmax(self.l5(x), dim=1)

model = Net()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
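# Note: log_softmax in forward() combined with F.nll_loss below is
# mathematically equivalent to applying nn.CrossEntropyLoss to raw logits.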
def train(epoch):
    # Iterate over the training set in mini-batches of batch_size samples
    for batch_idx, (data, target) in enumerate(train_loader):
        optimizer.zero_grad()
        output = model(data)
        # Negative log-likelihood loss on the log-probabilities
        loss = F.nll_loss(output, target)
        loss.backward()
        # Update the parameters
        optimizer.step()
        if batch_idx % 200 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))
def test():
    test_loss = 0
    correct = 0
    # Evaluate on the test set without tracking gradients
    with torch.no_grad():
        for data, target in test_loader:
            output = model(data)
            # Accumulate the mean loss of each batch
            test_loss += F.nll_loss(output, target).item()
            # The index of the max log-probability is the prediction
            pred = output.max(1, keepdim=True)[1]
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))

for epoch in range(1, 6):
    train(epoch)
    test()

Output:
Train Epoch: 1 [0/60000 (0%)]	Loss: 2.292192
Train Epoch: 1 [12800/60000 (21%)]	Loss: 2.289466
Train Epoch: 1 [25600/60000 (43%)]	Loss: 2.294221
Train Epoch: 1 [38400/60000 (64%)]	Loss: 2.169656
Train Epoch: 1 [51200/60000 (85%)]	Loss: 1.561276

Test set: Average loss: 0.0163, Accuracy: 6698/10000 (67%)

Train Epoch: 2 [0/60000 (0%)]	Loss: 0.993218
Train Epoch: 2 [12800/60000 (21%)]	Loss: 0.859608
Train Epoch: 2 [25600/60000 (43%)]	Loss: 0.499748
Train Epoch: 2 [38400/60000 (64%)]	Loss: 0.422055
Train Epoch: 2 [51200/60000 (85%)]	Loss: 0.413933

Test set: Average loss: 0.0065, Accuracy: 8797/10000 (88%)

Train Epoch: 3 [0/60000 (0%)]	Loss: 0.465154
Train Epoch: 3 [12800/60000 (21%)]	Loss: 0.321842
Train Epoch: 3 [25600/60000 (43%)]	Loss: 0.187147
Train Epoch: 3 [38400/60000 (64%)]	Loss: 0.469552
Train Epoch: 3 [51200/60000 (85%)]	Loss: 0.270332

Test set: Average loss: 0.0045, Accuracy: 9137/10000 (91%)

Train Epoch: 4 [0/60000 (0%)]	Loss: 0.197497
Train Epoch: 4 [12800/60000 (21%)]	Loss: 0.234830
Train Epoch: 4 [25600/60000 (43%)]	Loss: 0.260302
Train Epoch: 4 [38400/60000 (64%)]	Loss: 0.219375
Train Epoch: 4 [51200/60000 (85%)]	Loss: 0.292754

Test set: Average loss: 0.0037, Accuracy: 9277/10000 (93%)

Train Epoch: 5 [0/60000 (0%)]	Loss: 0.183354
Train Epoch: 5 [12800/60000 (21%)]	Loss: 0.207930
Train Epoch: 5 [25600/60000 (43%)]	Loss: 0.138435
Train Epoch: 5 [38400/60000 (64%)]	Loss: 0.120214
Train Epoch: 5 [51200/60000 (85%)]	Loss: 0.266199

Test set: Average loss: 0.0026, Accuracy: 9506/10000 (95%)
Process finished with exit code 0

As the number of training epochs increases, accuracy on the test set improves substantially; after 5 epochs, this simple fully connected network reaches 95% accuracy.
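To see the trained model in action, here is a minimal inference sketch (not part of the original article; it reuses the model and test_dataset defined above):

img, label = test_dataset[0]               # img: (1, 28, 28)
with torch.no_grad():
    log_probs = model(img.unsqueeze(0))    # add a batch dimension
    pred = log_probs.argmax(dim=1).item()  # most likely class
print('predicted:', pred, 'actual:', label)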

That is all for this look at softmax multi-class classification in PyTorch. I hope the content above is helpful and that you learned something from it. If you found the article useful, feel free to share it so more people can see it.
