This article walks through how to use xgboost in Python, from loading data to training, cross-validating, tuning, and saving a model. If you are not yet familiar with the library, I hope it serves as a useful reference.
1. Loading the data
Reading libsvm data with the native xgboost library
import xgboost as xgb

data = xgb.DMatrix('path/to/file.libsvm')  # DMatrix reads libsvm-format files directly
Reading libsvm data with sklearn
from sklearn.datasets import load_svmlight_file

X_train, y_train = load_svmlight_file('path/to/file.libsvm')
Alternatively, read the data with pandas first and then convert it into xgboost's standard form, as sketched below.
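A minimal sketch of that pandas route (the file name 'train.csv' and the label-in-first-column layout are assumptions for illustration):

import pandas as pd
import xgboost as xgb

# hypothetical CSV: label in the first column, features in the rest
df = pd.read_csv('train.csv')
l_train = df.iloc[:, 0].values   # labels
f_train = df.iloc[:, 1:].values  # feature matrix
dtrain = xgb.DMatrix(f_train, label=l_train)  # convert to xgboost's standard form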
2. The model-training process
1. Baseline model without parameter tuning
Training with the native xgboost library
import xgboost as xgb
from sklearn.metrics import accuracy_score
from matplotlib import pyplot

# f_train, l_train, f_test, l_test are assumed to be pre-loaded features and labels
dtrain = xgb.DMatrix(f_train, label=l_train)
dtest = xgb.DMatrix(f_test, label=l_test)
param = {'max_depth': 2, 'eta': 1, 'silent': 0, 'objective': 'binary:logistic'}
num_round = 2
bst = xgb.train(param, dtrain, num_round)

train_preds = bst.predict(dtrain)
train_predictions = [round(value) for value in train_preds]  # round probabilities to 0/1 (effectively a 0.5 decision threshold)
train_accuracy = accuracy_score(l_train, train_predictions)  # compute accuracy with sklearn
print("Train Accuracy: %.2f%%" % (train_accuracy * 100.0))

from xgboost import plot_importance  # feature-importance plot
plot_importance(bst)
pyplot.show()
Training with XGBClassifier
# No early stopping, no DMatrix conversion needed
from xgboost import XGBClassifier
from sklearn.datasets import load_svmlight_file  # reads svmlight/libsvm files directly; otherwise use xgboost.DMatrix(filename)
from sklearn.metrics import accuracy_score
from matplotlib import pyplot

num_round = 100
bst1 = XGBClassifier(max_depth=2, learning_rate=1, n_estimators=num_round,  # with too few weak trees, fewer features show up in the importance plot
                     silent=True, objective='binary:logistic')
bst1.fit(f_train, l_train)

train_preds = bst1.predict(f_train)
train_accuracy = accuracy_score(l_train, train_preds)
print("Train Accuracy: %.2f%%" % (train_accuracy * 100.0))

preds = bst1.predict(f_test)
test_accuracy = accuracy_score(l_test, preds)
print("Test Accuracy: %.2f%%" % (test_accuracy * 100.0))

from xgboost import plot_importance  # feature-importance plot
plot_importance(bst1)
pyplot.show()
2. Two cross-validation approaches
Cross-validation with cross_val_score
# Cross-validated training with model_selection
from xgboost import XGBClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
from matplotlib import pyplot

num_round = 100
bst2 = XGBClassifier(max_depth=2, learning_rate=0.1, n_estimators=num_round,
                     silent=True, objective='binary:logistic')
bst2.fit(f_train, l_train)  # fit once so plot_importance below has a trained model (cross_val_score clones the estimator internally)

kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)  # shuffle=True is required for random_state to take effect in recent sklearn
results = cross_val_score(bst2, f_train, l_train, cv=kfold)  # 10-fold CV: train on 9 folds, test on 1
print(results)
print("CV Accuracy: %.2f%% (%.2f%%)" % (results.mean() * 100, results.std() * 100))

from xgboost import plot_importance  # feature-importance plot
plot_importance(bst2)
pyplot.show()
Grid search with GridSearchCV
# Use sklearn's grid search to find the best parameter value, then adopt it as the default
from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score

bst = XGBClassifier(max_depth=2, learning_rate=0.1, silent=True, objective='binary:logistic')
param_test = {'n_estimators': range(1, 51, 1)}
clf = GridSearchCV(estimator=bst, param_grid=param_test, scoring='accuracy', cv=5)  # 5-fold cross-validation
clf.fit(f_train, l_train)

preds = clf.predict(f_test)  # predicts with the best parameters found
test_accuracy = accuracy_score(l_test, preds)
print("Test Accuracy of gridsearchcv: %.2f%%" % (test_accuracy * 100.0))

clf.cv_results_, clf.best_params_, clf.best_score_  # inspect the search results (notebook-style)
3. Tuning with early stopping – early_stopping_rounds (it watches whether the evaluation loss is still improving)
# A standalone example of early stopping
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score

num_round = 100
bst = XGBClassifier(max_depth=2, learning_rate=0.1, n_estimators=num_round,
                    silent=True, objective='binary:logistic')
eval_set = [(f_test, l_test)]
bst.fit(f_train, l_train, early_stopping_rounds=10, eval_metric="error",
        eval_set=eval_set, verbose=True)
# early_stopping_rounds: stop when the metric has not improved for this many rounds
# eval_set: the data on which the metric is evaluated
# verbose: print the metric at each round

# make predictions
preds = bst.predict(f_test)
test_accuracy = accuracy_score(l_test, preds)
print("Test Accuracy: %.2f%%" % (test_accuracy * 100.0))
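Once early stopping has fired, the fitted classifier remembers the best round; a minimal sketch of using it at prediction time (best_ntree_limit and the ntree_limit argument are from older xgboost releases; newer ones expose best_iteration and iteration_range instead):

# predict using only the trees up to the best round found by early stopping
preds = bst.predict(f_test, ntree_limit=bst.best_ntree_limit)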
4. Watching the training loss on multiple datasets
# Track several metrics on several datasets at once
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score

num_round = 100
bst = XGBClassifier(max_depth=2, learning_rate=0.1, n_estimators=num_round,
                    silent=True, objective='binary:logistic')
eval_set = [(f_train, l_train), (f_test, l_test)]
bst.fit(f_train, l_train, eval_metric=["error", "logloss"], eval_set=eval_set, verbose=True)

# make predictions
preds = bst.predict(f_test)
test_accuracy = accuracy_score(l_test, preds)
print("Test Accuracy: %.2f%%" % (test_accuracy * 100.0))
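To inspect those curves directly instead of reading the verbose log, the sklearn wrapper records the per-round metrics and exposes them via evals_result(); a minimal plotting sketch reusing bst from above:

from matplotlib import pyplot

results = bst.evals_result()  # {'validation_0': {...}, 'validation_1': {...}}
epochs = range(len(results['validation_0']['logloss']))
pyplot.plot(epochs, results['validation_0']['logloss'], label='train')
pyplot.plot(epochs, results['validation_1']['logloss'], label='test')
pyplot.xlabel('boosting round')
pyplot.ylabel('logloss')
pyplot.legend()
pyplot.show()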
5. Saving and loading the model
import xgboost as xgb
from sklearn.metrics import accuracy_score

# save the model
bst.save_model('demo.model')

# load the model and predict
modelfile = 'demo.model'
bst = xgb.Booster({'nthread': 8}, model_file=modelfile)  # load into a native Booster
f_test1 = xgb.DMatrix(f_test)  # the native Booster expects xgboost's own data matrix
ypred1 = bst.predict(f_test1)
test_predictions = [round(value) for value in ypred1]  # round probabilities to 0/1
test_accuracy1 = accuracy_score(l_test, test_predictions)
print("Test Accuracy: %.2f%%" % (test_accuracy1 * 100.0))
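If you would rather stay in the sklearn API, the wrapper can reload the same file; a minimal sketch (assuming a reasonably recent xgboost where XGBClassifier.load_model is available):

from xgboost import XGBClassifier

clf = XGBClassifier()
clf.load_model('demo.model')  # reload the booster saved above
preds = clf.predict(f_test)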