1. Overview (under torch.nn)
Three classes of convolution layers are in common use, one for each input dimensionality.
Parameters:
in_channels (int) – number of channels in the input signal
out_channels (int) – number of channels produced by the convolution
kernel_size (int or tuple) – size of the convolution kernel
stride (int or tuple, optional) – stride of the convolution
padding (int or tuple, optional) – number of zeros padded onto each side of the input
dilation (int or tuple, optional) – spacing between kernel elements
groups (int, optional) – number of blocked connections from input channels to output channels (see the sketch after this list)
bias (bool, optional) – if bias=True, adds a learnable bias to the output
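To make the two least obvious parameters concrete, here is a minimal sketch (the layer sizes are arbitrary, chosen only for illustration) of how groups splits the channels into independent convolutions and how dilation widens the receptive field:

import torch
import torch.nn as nn

# groups=4 splits the 8 input channels into 4 groups of 2; each group
# is convolved independently, so the weight tensor shrinks from
# (8, 8, 3, 3) to (8, 2, 3, 3).
grouped = nn.Conv2d(8, 8, kernel_size=3, groups=4, padding=1)
print(grouped.weight.shape)  # torch.Size([8, 2, 3, 3])

# dilation=2 inserts one gap between kernel taps, widening a 3x3
# kernel's receptive field to 5x5 without adding weights.
dilated = nn.Conv2d(8, 8, kernel_size=3, dilation=2, padding=2)
x = torch.randn(1, 8, 28, 28)
print(dilated(x).shape)  # torch.Size([1, 8, 28, 28])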
class torch.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
A one-dimensional convolution layer, used for 1D data such as ECG signals.
input: (N, C_in, L_in), where N is the batch size, C_in equals in_channels (the number of input channels per sample), and L_in is the length of each 1D signal
output: (N, C_out, L_out), where N is the batch size, C_out equals out_channels (the number of output channels per sample), and L_out is the length of each output signal
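A minimal sketch of this shape convention (the batch size, channel counts, and signal length below are arbitrary):

import torch
import torch.nn as nn

x = torch.randn(8, 1, 100)  # (N, C_in, L_in): 8 single-channel signals, 100 samples each
conv = nn.Conv1d(in_channels=1, out_channels=16, kernel_size=5, padding=2)
print(conv(x).shape)  # torch.Size([8, 16, 100]), i.e. (N, C_out, L_out)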
class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
A two-dimensional convolution layer, used for 2D data such as CT or MR slices, 2D ultrasound images, and natural images.
self.conv1 = nn.Conv2d(  # 1*28*28 -> 32*28*28
    in_channels=1,
    out_channels=32,
    kernel_size=5,
    stride=1,
    padding=2,  # padding must be computed: for "same" output with stride=1, padding=(kernel_size-1)/2
)
input: (N, C_in, H_in, W_in), where N is the batch size, C_in equals in_channels, H_in is the number of rows, and W_in is the number of columns
output: (N, C_out, H_out, W_out), where N is the batch size, C_out equals out_channels, H_out is the number of rows, and W_out is the number of columns
con2 = nn.Conv2d(1, 16, 5, 1, 2)
# con2 only accepts a Tensor/Variable, not a numpy array:
# con2(np.empty([1, 1, 28, 28]))            # fails
con2(torch.Tensor(1, 1, 28, 28))            # works
con2(Variable(torch.Tensor(1, 1, 28, 28)))  # works
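H_out and W_out follow the output-size formula from the PyTorch docs, H_out = floor((H_in + 2*padding - dilation*(kernel_size - 1) - 1)/stride + 1). The sketch below (with an arbitrary stride-2 layer) checks the formula against an actual layer:

import math
import torch
import torch.nn as nn

def conv_out(size, k, s=1, p=0, d=1):
    # standard Conv2d output-size formula, applied per spatial dimension
    return math.floor((size + 2 * p - d * (k - 1) - 1) / s + 1)

conv = nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2)
x = torch.randn(1, 1, 28, 28)
print(conv(x).shape)                # torch.Size([1, 16, 14, 14])
print(conv_out(28, k=5, s=2, p=2))  # 14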
class torch.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
A three-dimensional convolution layer, used for 3D data such as CT or MR volumes and video.
input: (N,C_in,D_in,H_in,W_in)
output: (N,C_out,D_out,H_out,W_out)
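A minimal shape sketch (the sizes are arbitrary; think of 16 slices of a 64x64 volume, or 16 video frames):

import torch
import torch.nn as nn

x = torch.randn(1, 3, 16, 64, 64)  # (N, C_in, D_in, H_in, W_in)
conv = nn.Conv3d(3, 8, kernel_size=3, padding=1)
print(conv(x).shape)  # torch.Size([1, 8, 16, 64, 64]), i.e. (N, C_out, D_out, H_out, W_out)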
2. Overview (under torch.nn.functional)
torch.nn.functional also provides convolutions, but unlike the layers under torch.nn these are plain functions rather than modules: they perform the convolution operation without holding any state of their own, so they register no parameters and do not appear as layers in the network's module graph.
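The practical consequence: nn.Conv2d creates and registers its own weight and bias as module parameters, while F.conv2d is stateless and expects you to pass the weight yourself. A minimal sketch of the contrast (sizes arbitrary):

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 4, 5, 5)

# Module version: the layer owns its parameters, so they show up
# in model.parameters() and get trained by the optimizer.
layer = nn.Conv2d(4, 8, kernel_size=3, padding=1)
y1 = layer(x)

# Functional version: no state; you supply the weight tensor.
weight = torch.randn(8, 4, 3, 3)
y2 = F.conv2d(x, weight, padding=1)

print(y1.shape, y2.shape)  # both torch.Size([1, 8, 5, 5])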
torch.nn.functional.conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)
Parameters:
- input – input tensor of shape (minibatch x in_channels x iW)
- weight – filters of shape (out_channels, in_channels, kW)
- bias – optional bias of shape (out_channels)
- stride – stride of the convolution kernel, default 1
>>> filters = autograd.Variable(torch.randn(33, 16, 3))
>>> inputs = autograd.Variable(torch.randn(20, 16, 50))
>>> F.conv1d(inputs, filters)
torch.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)
>>> # With square kernels and equal stride
>>> filters = autograd.Variable(torch.randn(8, 4, 3, 3))
>>> inputs = autograd.Variable(torch.randn(1, 4, 5, 5))
>>> F.conv2d(inputs, filters, padding=1)
torch.nn.functional.conv3d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)
>>> filters = autograd.Variable(torch.randn(33, 16, 3, 3, 3))
>>> inputs = autograd.Variable(torch.randn(20, 16, 50, 10, 20))
>>> F.conv3d(inputs, filters)
That covers the use of convolution layers in PyTorch; hopefully it gives you a useful reference.