This article explains how to build a neural network with TensorFlow. It should serve as a useful reference for interested readers; hopefully you will gain a lot by the time you finish it.
I. A Complete TensorFlow Example
On the MNIST dataset, we build a simple two-layer neural network whose hidden layer uses ReLU units for non-linearity. During training we use an exponentially decaying learning rate, L2 regularization to avoid overfitting, and a moving-average model to make the final model more robust.
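Both the decaying learning rate and the moving average come down to one-line update rules. The plain-Python sketch below is purely illustrative and not part of the program that follows:

def decayed_learning_rate(base_lr, decay_rate, global_step, decay_steps):
    # What tf.train.exponential_decay computes (with staircase=False).
    return base_lr * decay_rate ** (global_step / decay_steps)

def ema_update(shadow, variable, decay=0.99):
    # The shadow value kept per variable by tf.train.ExponentialMovingAverage;
    # when a num_updates argument is passed, the effective decay becomes
    # min(decay, (1 + num_updates) / (10 + num_updates)).
    return decay * shadow + (1 - decay) * variable

# E.g. the learning rate after 5500 steps, decaying once per epoch of 550 batches:
print(decayed_learning_rate(0.8, 0.99, 5500, 550))  # ~0.723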
程序?qū)⒂嬎闵窠?jīng)網(wǎng)絡(luò)前向傳播的部分單獨定義一個函數(shù)inference,訓(xùn)練部分定義一個train函數(shù),再定義一個主函數(shù)main。
完整程序:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu May 25 08:56:30 2017
@author: marsjhao
"""

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

INPUT_NODE = 784              # number of input nodes
OUTPUT_NODE = 10              # number of output nodes
LAYER1_NODE = 500             # number of hidden-layer nodes
BATCH_SIZE = 100
LEARNING_RATE_BASE = 0.8      # base learning rate
LEARNING_RATE_DECAY = 0.99    # decay rate of the learning rate
REGULARIZATION_RATE = 0.0001  # weight of the regularization term
TRAINING_STEPS = 10000        # number of training iterations
MOVING_AVERAGE_DECAY = 0.99   # decay rate of the moving averages

# Compute the forward-propagation result from the given weights and biases.
def inference(input_tensor, avg_class, weights1, biases1, weights2, biases2):
    # Check whether an ExponentialMovingAverage object was passed in.
    if avg_class is None:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights1) + biases1)
        return tf.matmul(layer1, weights2) + biases2
    else:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, avg_class.average(weights1))
                            + avg_class.average(biases1))
        return tf.matmul(layer1, avg_class.average(weights2)) \
               + avg_class.average(biases2)

# Training process of the neural-network model.
def train(mnist):
    x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y-input')
    # Parameters of the network.
    weights1 = tf.Variable(tf.truncated_normal([INPUT_NODE, LAYER1_NODE],
                                               stddev=0.1))
    biases1 = tf.Variable(tf.constant(0.1, shape=[LAYER1_NODE]))
    weights2 = tf.Variable(tf.truncated_normal([LAYER1_NODE, OUTPUT_NODE],
                                               stddev=0.1))
    biases2 = tf.Variable(tf.constant(0.1, shape=[OUTPUT_NODE]))
    # Forward propagation without the moving-average model.
    y = inference(x, None, weights1, biases1, weights2, biases2)
    # Variable that stores the current training step.
    global_step = tf.Variable(0, trainable=False)
    # Create the ExponentialMovingAverage object, passing in the current step.
    variable_averages = tf.train.ExponentialMovingAverage(
        MOVING_AVERAGE_DECAY, global_step)
    # Op that updates the moving averages of all trainable variables.
    variables_averages_op = variable_averages.apply(tf.trainable_variables())
    # Forward propagation under the moving-average model.
    average_y = inference(x, variable_averages, weights1, biases1,
                          weights2, biases2)

    # Cross-entropy loss.
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=y, labels=tf.argmax(y_, 1))
    cross_entropy_mean = tf.reduce_mean(cross_entropy)
    # L2 regularizer applied to weights1 and weights2.
    regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
    regularization = regularizer(weights1) + regularizer(weights2)
    loss = cross_entropy_mean + regularization  # total loss

    # Exponentially decaying learning rate.
    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE, global_step,
        mnist.train.num_examples / BATCH_SIZE, LEARNING_RATE_DECAY)
    # Gradient-descent op; the global_step argument increments the step by 1.
    train_step = tf.train.GradientDescentOptimizer(learning_rate)\
                 .minimize(loss, global_step=global_step)
    # Combine the two ops.
    train_op = tf.group(train_step, variables_averages_op)
    # Equivalent to the tf.group() call above:
    # with tf.control_dependencies([train_step, variables_averages_op]):
    #     train_op = tf.no_op(name='train')

    # Accuracy: the final prediction uses the output of the
    # moving-average forward pass.
    correct_prediction = tf.equal(tf.argmax(average_y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # Initialize the session and start the training loop.
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Validation data to feed.
        validate_feed = {x: mnist.validation.images,
                         y_: mnist.validation.labels}
        # Test data to feed.
        test_feed = {x: mnist.test.images, y_: mnist.test.labels}

        for i in range(TRAINING_STEPS):
            if i % 1000 == 0:
                validate_acc = sess.run(accuracy, feed_dict=validate_feed)
                print('After %d training steps, validation accuracy'
                      ' using average model is %f' % (i, validate_acc))
            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            sess.run(train_op, feed_dict={x: xs, y_: ys})

        test_acc = sess.run(accuracy, feed_dict=test_feed)
        print('After %d training steps, test accuracy'
              ' using average model is %f' % (TRAINING_STEPS, test_acc))

# Main function.
def main(argv=None):
    mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
    train(mnist)

# Runs only when this file is the entry point, not when imported as a module.
if __name__ == '__main__':
    tf.app.run()  # calls main() and passes in the required argument list
II. Analysis and Improved Design
1. Analysis of the program and possible improvements
First, the forward-propagation function inference takes every variable as a parameter. As the network structure grows more complex and the number of parameters increases, the program's readability deteriorates badly.
Second, once the program exits, the trained model can no longer be used, and large networks take a long time to train. The model's intermediate state should therefore be saved at regular intervals during training, so that if the program crashes, the most recent parameters still survive and no time or compute is wasted.
Third, training and testing should be split into two independent programs, with the forward-propagation code they both need factored out into a shared library module. This guarantees that training and prediction call exactly the same forward-propagation computation.
2. Design of the improved program
mnist_inference.py
This file defines the network's forward-propagation process; the repeatedly used weight-definition code is factored out into its own function.
Variables are obtained through the tf.get_variable function: they are created during training, and at test time their values are loaded from the saved model. Moreover, a variable's moving average can be renamed onto the variable itself at load time, so the same name refers to the variable itself during training and to its moving average during testing.
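In code, this renaming relies on the variables_to_restore method, exactly as mnist_eval.py does below. A minimal sketch:

import tensorflow as tf

v = tf.Variable(0.0, name='v')  # a stand-in trainable variable
variable_averages = tf.train.ExponentialMovingAverage(0.99)
# variables_to_restore() maps each shadow name, e.g.
# 'v/ExponentialMovingAverage', back to the variable itself, so
# restoring a checkpoint loads the moving averages into the
# ordinary variables under their usual names.
saver = tf.train.Saver(variable_averages.variables_to_restore())
# saver.restore(sess, checkpoint_path)  # v now holds its averaged value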
mnist_train.py
This program gives the complete training process for the network.
mnist_eval.py
Evaluation is performed on the moving-average model.
tf.train.get_checkpoint_state(mnist_train.MODEL_SAVE_PATH) retrieves the file name of the latest model; in fact it reads the full contents of the checkpoint file.
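For example, mirroring the evaluation loop in mnist_eval.py below (the path and saver are the ones defined there):

ckpt = tf.train.get_checkpoint_state(mnist_train.MODEL_SAVE_PATH)
if ckpt and ckpt.model_checkpoint_path:
    # model_checkpoint_path is e.g. 'Model_Folder/model.ckpt-10000';
    # the trailing number is the global step at save time.
    saver.restore(sess, ckpt.model_checkpoint_path)
    global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]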
III. TensorFlow Best-Practice Example
mnist_inference.py
import tensorflow as tf

INPUT_NODE = 784
OUTPUT_NODE = 10
LAYER1_NODE = 500

def get_weight_variable(shape, regularizer):
    weights = tf.get_variable(
        "weights", shape,
        initializer=tf.truncated_normal_initializer(stddev=0.1))
    if regularizer is not None:
        # Add this weight's regularization term to the 'losses' collection.
        tf.add_to_collection('losses', regularizer(weights))
    return weights

def inference(input_tensor, regularizer):
    with tf.variable_scope('layer1'):
        weights = get_weight_variable([INPUT_NODE, LAYER1_NODE], regularizer)
        biases = tf.get_variable("biases", [LAYER1_NODE],
                                 initializer=tf.constant_initializer(0.0))
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights) + biases)
    with tf.variable_scope('layer2'):
        weights = get_weight_variable([LAYER1_NODE, OUTPUT_NODE], regularizer)
        biases = tf.get_variable("biases", [OUTPUT_NODE],
                                 initializer=tf.constant_initializer(0.0))
        layer2 = tf.matmul(layer1, weights) + biases
    return layer2
mnist_train.py
import os
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import mnist_inference

BATCH_SIZE = 100
LEARNING_RATE_BASE = 0.8
LEARNING_RATE_DECAY = 0.99
REGULARIZATION_RATE = 0.0001
TRAINING_STEPS = 10000
MOVING_AVERAGE_DECAY = 0.99
MODEL_SAVE_PATH = "Model_Folder/"
MODEL_NAME = "model.ckpt"

def train(mnist):
    # Input placeholders.
    x = tf.placeholder(tf.float32, [None, mnist_inference.INPUT_NODE],
                       name='x-input')
    y_ = tf.placeholder(tf.float32, [None, mnist_inference.OUTPUT_NODE],
                        name='y-input')
    # Regularizer and the forward-pass output.
    regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
    y = mnist_inference.inference(x, regularizer)
    # Current training step and the moving-average model.
    global_step = tf.Variable(0, trainable=False)
    variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY,
                                                          global_step)
    variables_averages_op = variable_averages.apply(tf.trainable_variables())
    # Loss function.
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=y, labels=tf.argmax(y_, 1))
    cross_entropy_mean = tf.reduce_mean(cross_entropy)
    loss = cross_entropy_mean + tf.add_n(tf.get_collection('losses'))
    # Exponentially decaying learning rate.
    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE, global_step,
        mnist.train.num_examples / BATCH_SIZE, LEARNING_RATE_DECAY)
    # Training op: one gradient step plus the moving-average update.
    train_step = tf.train.GradientDescentOptimizer(learning_rate)\
                 .minimize(loss, global_step=global_step)
    train_op = tf.group(train_step, variables_averages_op)
    # Saver object for persisting the model (TensorFlow's persistence class).
    saver = tf.train.Saver()
    # Launch the session and start training.
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        for i in range(TRAINING_STEPS):
            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            _, loss_value, step = sess.run([train_op, loss, global_step],
                                           feed_dict={x: xs, y_: ys})
            if i % 1000 == 0:
                print("After %d training step(s), loss on training batch is %g."
                      % (step, loss_value))
                # The global_step argument of save() appends the current
                # step count to each saved model's file name.
                saver.save(sess, os.path.join(MODEL_SAVE_PATH, MODEL_NAME),
                           global_step=global_step)

def main(argv=None):
    mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
    train(mnist)

if __name__ == '__main__':
    tf.app.run()
mnist_eval.py
import time
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import mnist_inference
import mnist_train

EVAL_INTERVAL_SECS = 10

def evaluate(mnist):
    with tf.Graph().as_default() as g:
        # Input placeholders.
        x = tf.placeholder(tf.float32, [None, mnist_inference.INPUT_NODE],
                           name='x-input')
        y_ = tf.placeholder(tf.float32, [None, mnist_inference.OUTPUT_NODE],
                            name='y-input')
        # Feed dict for the validation set.
        validate_feed = {x: mnist.validation.images,
                         y_: mnist.validation.labels}
        # No regularization loss at test time.
        y = mnist_inference.inference(x, None)
        # Accuracy.
        correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        # Load the moving-average values of the parameters.
        variable_averages = tf.train.ExponentialMovingAverage(
            mnist_train.MOVING_AVERAGE_DECAY)
        saver = tf.train.Saver(variable_averages.variables_to_restore())
        # Start a new session every EVAL_INTERVAL_SECS seconds.
        while True:
            with tf.Session() as sess:
                ckpt = tf.train.get_checkpoint_state(mnist_train.MODEL_SAVE_PATH)
                if ckpt and ckpt.model_checkpoint_path:
                    saver.restore(sess, ckpt.model_checkpoint_path)
                    # Recover the current step count (global_step) from the
                    # checkpoint file name.
                    global_step = ckpt.model_checkpoint_path\
                                  .split('/')[-1].split('-')[-1]
                    accuracy_score = sess.run(accuracy, feed_dict=validate_feed)
                    print("After %s training step(s), validation accuracy = %g"
                          % (global_step, accuracy_score))
                else:
                    print('No checkpoint file found')
                    return
            time.sleep(EVAL_INTERVAL_SECS)

def main(argv=None):
    mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
    evaluate(mnist)

if __name__ == '__main__':
    tf.app.run()
Thank you for reading this article to the end. I hope this post, "How to Build a Neural Network with TensorFlow," has been helpful. For more articles like this, follow the 億速云 (Yisu Cloud) industry news channel, where there is plenty more to learn!