How do you use TensorFlow 2.0's Eager mode to build a neural network quickly? Many people without much hands-on experience are at a loss here, so this article walks through the approach step by step and, hopefully, leaves you able to do it yourself. For contrast, we start with how a simple addition is computed in the traditional graph (Session) mode:
import tensorflow as tf
a = tf.constant(3.0)
b = tf.placeholder(dtype = tf.float32)
c = tf.add(a,b)
sess = tf.Session()  # create a session object
init = tf.global_variables_initializer()
sess.run(init)  # run the initializer inside the session
feed = {
    b: 2.0
}  # supply a value for the placeholder b
c_res = sess.run(c, feed)  # drive the graph through the session to obtain the result
print(c_res)
In Eager mode the same computation is far more direct: operations execute immediately, with no graph, Session, or feed dict.
import tensorflow as tf
import tensorflow.contrib.eager as tfe
tf.enable_eager_execution()
def add(num1, num2):
    a = tf.convert_to_tensor(num1)  # convert the Python number to a TF tensor, which helps speed up the computation
    b = tf.convert_to_tensor(num2)
    c = a + b
    return c.numpy()  # convert the tensor back to a plain Python number

add_res = add(3.0, 4.0)
print(add_res)
Next we use Eager mode to build a small classifier for the iris flower dataset. First, load and preprocess the data with scikit-learn:
from sklearn import datasets, preprocessing, model_selection
data = datasets.load_iris()  # load the iris dataset into memory
x = preprocessing.MinMaxScaler(feature_range = (-1, 1)).fit_transform(data['data'])  # scale the feature values into (-1, 1) so the network handles them more easily
# Represent each species as a one-hot vector: with three species, they become (1,0,0), (0,1,0) and (0,0,1)
y = preprocessing.OneHotEncoder(sparse = False).fit_transform(data['target'].reshape(-1, 1))
x_train, x_test, y_train, y_test = model_selection.train_test_split(x, y, test_size = 0.25, stratify = y)  # split the data into a training set and a test set
print(len(x_train))
With the data ready, define a simple two-layer classification model:
class IrisClassifyModel(object):
    def __init__(self, hidden_unit, output_unit):
        # only a two-layer network is built here; the first (hidden) layer works directly on the input data
        self.hidden_layer = tf.keras.layers.Dense(units = hidden_unit, activation = tf.nn.tanh, use_bias = True, name="hidden_layer")
        self.output_layer = tf.keras.layers.Dense(units = output_unit, activation = None, use_bias = True, name="output_layer")

    def __call__(self, inputs):
        return self.output_layer(self.hidden_layer(inputs))
# Push one batch of input data through the model to check that the network runs correctly
model = IrisClassifyModel(10, 3)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
for x, y in tfe.Iterator(train_dataset.batch(32)):
    output = model(x)
    print(output.numpy())
    break
Then define the loss (softmax cross-entropy summed over the batch), the optimizer, a training step, and an accuracy check:
def make_loss(model, inputs, labels):
    return tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits_v2(logits = model(inputs), labels = labels))
opt = tf.train.AdamOptimizer(learning_rate = 0.01)
def train(model, x, y):
    opt.minimize(lambda: make_loss(model, x, y))
accuracy = tfe.metrics.Accuracy()
def check_accuracy(model, x_batch, y_batch):  # accumulate how accurately the network classifies the given batch
    accuracy(tf.argmax(model(tf.constant(x_batch)), axis = 1), tf.argmax(tf.constant(y_batch), axis = 1))
    return accuracy
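A note on versions: tf.enable_eager_execution(), tf.contrib.eager and tf.train.AdamOptimizer are TensorFlow 1.x APIs; in TensorFlow 2.x eager execution is on by default and tf.contrib no longer exists. Purely as a rough sketch (assuming the TF 2.x tf.GradientTape and tf.keras.optimizers.Adam APIs; train_step_v2 and opt_v2 are illustrative names, not code from this article), the same training step might look like this:

# Hypothetical TF 2.x-style training step -- a sketch, not the code this article actually runs.
import tensorflow as tf

opt_v2 = tf.keras.optimizers.Adam(learning_rate = 0.01)

def train_step_v2(model, x_batch, y_batch):
    with tf.GradientTape() as tape:
        logits = model(x_batch)  # forward pass through the two Dense layers
        loss = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(
            labels = tf.cast(y_batch, logits.dtype), logits = logits))
    # the Dense layers expose their weights once they have been called at least once
    variables = model.hidden_layer.trainable_variables + model.output_layer.trainable_variables
    grads = tape.gradient(loss, variables)  # gradients of the loss w.r.t. the weights
    opt_v2.apply_gradients(zip(grads, variables))  # one Adam update
    return loss

The rest of this article sticks with the TF 1.x eager APIs shown above.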
Now create a fresh model and train it for 50 epochs, recording the accuracy after each epoch:
import numpy as np
model = IrisClassifyModel(10, 3)
epochs = 50
acc_history = np.zeros(epochs)
for epoch in range(epochs):
    for (x_batch, y_batch) in tfe.Iterator(train_dataset.shuffle(1000).batch(32)):
        train(model, x_batch, y_batch)
        acc = check_accuracy(model, x_batch, y_batch)
    # note: the Accuracy metric accumulates over every batch seen so far, so this records a running accuracy
    acc_history[epoch] = acc.result().numpy()
Finally, plot the accuracy curve with matplotlib:
import matplotlib.pyplot as plt
plt.figure()
plt.plot(acc_history)
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.show()
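The train/test split earlier also produced x_test and y_test, which the training loop never touches. As a small optional check (a sketch that reuses the same tfe.metrics.Accuracy pattern; test_accuracy is just an illustrative name), the trained model can be evaluated once on the held-out data:

# Hypothetical held-out evaluation with a fresh accuracy accumulator (not part of the original walkthrough).
test_accuracy = tfe.metrics.Accuracy()
test_accuracy(tf.argmax(model(tf.constant(x_test)), axis = 1), tf.argmax(tf.constant(y_test), axis = 1))
print("test accuracy:", test_accuracy.result().numpy())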
Having read the above, do you now have a handle on how to use TensorFlow 2.0's Eager mode to build a neural network quickly? If you would like to pick up more skills or learn more about related topics, feel free to follow the 億速云 industry news channel. Thanks for reading!