This article demonstrates how to initialize a network in Keras from pretrained weights matched by layer name. The content is concise and easy to follow, so let's walk through it.
Initializing a network by layer name in Keras
```python
from keras.models import Model
from keras.layers import (Input, Conv2D, MaxPooling2D, GlobalMaxPooling2D,
                          GlobalAveragePooling2D, BatchNormalization,
                          Concatenate, Dense, Dropout)
from keras.optimizers import Adam


def get_model(input_shape1=[75, 75, 3], input_shape2=[1], weights=None):
    bn_model = 0
    trainable = True
    # kernel_regularizer = regularizers.l2(1e-4)
    kernel_regularizer = None
    activation = 'relu'

    img_input = Input(shape=input_shape1)
    angle_input = Input(shape=input_shape2)

    # Block 1
    x = Conv2D(64, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block1_conv1')(img_input)
    x = Conv2D(64, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block1_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)

    # Block 2
    x = Conv2D(128, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block2_conv1')(x)
    x = Conv2D(128, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block2_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)

    # Block 3
    x = Conv2D(256, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block3_conv1')(x)
    x = Conv2D(256, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block3_conv2')(x)
    x = Conv2D(256, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block3_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)

    # Block 4
    x = Conv2D(512, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block4_conv1')(x)
    x = Conv2D(512, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block4_conv2')(x)
    x = Conv2D(512, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block4_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)

    # Block 5
    x = Conv2D(512, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block5_conv1')(x)
    x = Conv2D(512, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block5_conv2')(x)
    x = Conv2D(512, (3, 3), activation=activation, padding='same',
               trainable=trainable, kernel_regularizer=kernel_regularizer,
               name='block5_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)

    branch_1 = GlobalMaxPooling2D()(x)
    # branch_1 = BatchNormalization(momentum=bn_model)(branch_1)
    branch_2 = GlobalAveragePooling2D()(x)
    # branch_2 = BatchNormalization(momentum=bn_model)(branch_2)
    branch_3 = BatchNormalization(momentum=bn_model)(angle_input)

    x = Concatenate()([branch_1, branch_2, branch_3])
    x = Dense(1024, activation=activation,
              kernel_regularizer=kernel_regularizer)(x)
    # x = Dropout(0.5)(x)
    x = Dense(1024, activation=activation,
              kernel_regularizer=kernel_regularizer)(x)
    x = Dropout(0.6)(x)
    output = Dense(1, activation='sigmoid')(x)

    model = Model([img_input, angle_input], output)
    optimizer = Adam(lr=1e-5, beta_1=0.9, beta_2=0.999, epsilon=1e-8, decay=0.0)
    model.compile(loss='binary_crossentropy', optimizer=optimizer,
                  metrics=['accuracy'])

    if weights is not None:
        # Set by_name=True so only layers whose names match entries
        # in the weights file are initialized from it.
        model.load_weights(weights, by_name=True)
        # layer_weights = h5py.File(weights, 'r')
        # for idx in range(len(model.layers)):
        #     model.set_weights()
    print('have prepared the model.')
    return model
```
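Because the convolutional blocks above reuse the VGG16 layer names (block1_conv1, block2_conv1, and so on), passing a VGG16 weights file to load_weights with by_name=True initializes exactly those layers and leaves the new dense head randomly initialized. Below is a minimal sketch of the same mechanism using two toy models; the layer name shared_dense and the file name donor.h5 are illustrative, not from the original code:

```python
from keras.models import Model
from keras.layers import Input, Dense

# Donor model: build (or train) it and save its weights to disk.
inp = Input(shape=(8,))
out = Dense(4, name='shared_dense')(inp)   # this name must match in both models
donor = Model(inp, out)
donor.save_weights('donor.h5')             # hypothetical file name

# Recipient model: a different architecture, but one layer shares the name.
inp2 = Input(shape=(8,))
h = Dense(4, name='shared_dense')(inp2)    # initialized from donor.h5
out2 = Dense(1, name='new_head')(h)        # no match in the file -> keeps its random init
recipient = Model(inp2, out2)
recipient.load_weights('donor.h5', by_name=True)
```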
Supplementary knowledge: the keras.layers.Dense() method
keras.layers.Dense() is the basic way to define a fully connected layer. It performs the operation output = activation(dot(input, kernel) + bias).
Here activation is the element-wise activation function, kernel is the weight matrix created by the layer, and bias is the bias vector. If the rank of the layer's input is greater than 2, the input is flattened before the initial dot product with kernel.
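To see the formula in action, here is a small self-contained check (shapes and values are arbitrary) that compares a Dense layer's output with the same computation done by hand in NumPy:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(3, activation='relu', input_shape=(5,))])
kernel, bias = model.layers[0].get_weights()        # kernel: (5, 3), bias: (3,)

x = np.random.rand(2, 5).astype('float32')          # a batch of 2 samples
expected = np.maximum(np.dot(x, kernel) + bias, 0)  # relu(dot(input, kernel) + bias)
actual = model.predict(x)

assert np.allclose(actual, expected, atol=1e-5)
```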
The Keras source code is as follows:
````python
class Dense(Layer):
    """Just your regular densely-connected NN layer.

    `Dense` implements the operation:
    `output = activation(dot(input, kernel) + bias)`
    where `activation` is the element-wise activation function
    passed as the `activation` argument, `kernel` is a weights matrix
    created by the layer, and `bias` is a bias vector created by the layer
    (only applicable if `use_bias` is `True`).

    Note: if the input to the layer has a rank greater than 2, then
    it is flattened prior to the initial dot product with `kernel`.

    # Example

    ```python
        # as first layer in a sequential model:
        model = Sequential()
        model.add(Dense(32, input_shape=(16,)))
        # now the model will take as input arrays of shape (*, 16)
        # and output arrays of shape (*, 32)

        # after the first layer, you don't need to specify
        # the size of the input anymore:
        model.add(Dense(32))
    ```

    # Arguments
        units: Positive integer, dimensionality of the output space.
        activation: Activation function to use
            (see [activations](../activations.md)).
            If you don't specify anything, no activation is applied
            (ie. "linear" activation: `a(x) = x`).
        use_bias: Boolean, whether the layer uses a bias vector.
        kernel_initializer: Initializer for the `kernel` weights matrix
            (see [initializers](../initializers.md)).
        bias_initializer: Initializer for the bias vector
            (see [initializers](../initializers.md)).
        kernel_regularizer: Regularizer function applied to
            the `kernel` weights matrix
            (see [regularizer](../regularizers.md)).
        bias_regularizer: Regularizer function applied to the bias vector
            (see [regularizer](../regularizers.md)).
        activity_regularizer: Regularizer function applied to
            the output of the layer (its "activation").
            (see [regularizer](../regularizers.md)).
        kernel_constraint: Constraint function applied to
            the `kernel` weights matrix
            (see [constraints](../constraints.md)).
        bias_constraint: Constraint function applied to the bias vector
            (see [constraints](../constraints.md)).

    # Input shape
        nD tensor with shape: `(batch_size, ..., input_dim)`.
        The most common situation would be
        a 2D input with shape `(batch_size, input_dim)`.

    # Output shape
        nD tensor with shape: `(batch_size, ..., units)`.
        For instance, for a 2D input with shape `(batch_size, input_dim)`,
        the output would have shape `(batch_size, units)`.
    """

    @interfaces.legacy_dense_support
    def __init__(self, units,
                 activation=None,
                 use_bias=True,
                 kernel_initializer='glorot_uniform',
                 bias_initializer='zeros',
                 kernel_regularizer=None,
                 bias_regularizer=None,
                 activity_regularizer=None,
                 kernel_constraint=None,
                 bias_constraint=None,
                 **kwargs):
        if 'input_shape' not in kwargs and 'input_dim' in kwargs:
            kwargs['input_shape'] = (kwargs.pop('input_dim'),)
        super(Dense, self).__init__(**kwargs)
        self.units = units
        self.activation = activations.get(activation)
        self.use_bias = use_bias
        self.kernel_initializer = initializers.get(kernel_initializer)
        self.bias_initializer = initializers.get(bias_initializer)
        self.kernel_regularizer = regularizers.get(kernel_regularizer)
        self.bias_regularizer = regularizers.get(bias_regularizer)
        self.activity_regularizer = regularizers.get(activity_regularizer)
        self.kernel_constraint = constraints.get(kernel_constraint)
        self.bias_constraint = constraints.get(bias_constraint)
        self.input_spec = InputSpec(min_ndim=2)
        self.supports_masking = True

    def build(self, input_shape):
        assert len(input_shape) >= 2
        input_dim = input_shape[-1]

        self.kernel = self.add_weight(shape=(input_dim, self.units),
                                      initializer=self.kernel_initializer,
                                      name='kernel',
                                      regularizer=self.kernel_regularizer,
                                      constraint=self.kernel_constraint)
        if self.use_bias:
            self.bias = self.add_weight(shape=(self.units,),
                                        initializer=self.bias_initializer,
                                        name='bias',
                                        regularizer=self.bias_regularizer,
                                        constraint=self.bias_constraint)
        else:
            self.bias = None
        self.input_spec = InputSpec(min_ndim=2, axes={-1: input_dim})
        self.built = True

    def call(self, inputs):
        output = K.dot(inputs, self.kernel)
        if self.use_bias:
            output = K.bias_add(output, self.bias)
        if self.activation is not None:
            output = self.activation(output)
        return output

    def compute_output_shape(self, input_shape):
        assert input_shape and len(input_shape) >= 2
        assert input_shape[-1]
        output_shape = list(input_shape)
        output_shape[-1] = self.units
        return tuple(output_shape)

    def get_config(self):
        config = {
            'units': self.units,
            'activation': activations.serialize(self.activation),
            'use_bias': self.use_bias,
            'kernel_initializer': initializers.serialize(self.kernel_initializer),
            'bias_initializer': initializers.serialize(self.bias_initializer),
            'kernel_regularizer': regularizers.serialize(self.kernel_regularizer),
            'bias_regularizer': regularizers.serialize(self.bias_regularizer),
            'activity_regularizer': regularizers.serialize(self.activity_regularizer),
            'kernel_constraint': constraints.serialize(self.kernel_constraint),
            'bias_constraint': constraints.serialize(self.bias_constraint)
        }
        base_config = super(Dense, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
````
The parameters are described below (a short usage sketch follows the list):
units: positive integer, the dimensionality of the output space.
activation: the activation function. If none is specified, no activation is applied (i.e. a "linear" activation: a(x) = x).
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: the initialization method for the weight matrix.
bias_initializer: the initialization method for the bias vector.
kernel_regularizer: the regularization method for the weight matrix.
bias_regularizer: the regularization method for the bias vector.
activity_regularizer: the regularization method for the layer's output.
kernel_constraint: the constraint function for the weight matrix.
bias_constraint: the constraint function for the bias vector.
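As an illustration, here is a single Dense layer that wires several of these arguments together; the specific values are arbitrary examples, not recommendations:

```python
from keras import constraints, initializers, regularizers
from keras.layers import Dense

layer = Dense(
    units=64,                                    # output dimensionality
    activation='relu',                           # element-wise activation
    use_bias=True,
    kernel_initializer=initializers.glorot_uniform(),
    bias_initializer='zeros',
    kernel_regularizer=regularizers.l2(1e-4),    # penalize large weights
    activity_regularizer=regularizers.l1(1e-5),  # penalize large activations
    kernel_constraint=constraints.max_norm(3.0), # clip weight norms after updates
)
```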
That is how to initialize a network by layer name in Keras. If you learned something from this article, feel free to share it so more people can see it.