
Python Machine Learning Theory and Practice (2): Decision Trees

發(fā)布時(shí)間:2020-09-29 07:10:58 來源:腳本之家 閱讀:215 作者:marvin521 欄目:開發(fā)技術(shù)

A decision tree is another supervised machine learning method. The film Inglourious Basterds has a scene in a German tavern where a group plays a guessing game: one player draws a target from a deck of cards (it can be a person or a thing), and the guessers ask questions that can only be answered with yes or no. After a few questions (at most twenty), the guessers narrow the range step by step and pinpoint the answer. A decision tree works in much the same way. Figure 1 shows a tree for classifying email: the individual decisions are simple threshold tests, so the real question is how to construct the tree, that is, how to train it.

(Figure 1: a decision tree for classifying email)

構(gòu)建決策樹的偽代碼如下:

Check if every item in the dataset is in the same class:
    If so return the class label
    Else
        find the best feature to split the data
        split the dataset
        create a branch node
        for each split
            call createBranch and add the result to the branch node
        return branch node

There is only one guiding principle: make the labels at each node as homogeneous as possible. Note the line in the pseudocode that says "find the best feature to split the data". How do we find the best feature? The usual criterion is to make the classes in the child nodes as pure as possible, i.e. to separate them as cleanly as possible. Consider the five animals pulled from the sea in Figure 2: to decide whether each one is a fish, which feature should we test first?

(Figure 2: five animals pulled from the sea, each described by two candidate features)
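For the code examples later in this article, Figure 2's table can be encoded as a small Python dataset. The helper below is a sketch: the feature names and 0/1 values are assumptions read off the figure, following the layout the rest of the code expects (each row is a feature vector with the class label in the last column).

def createDataSet():
  #each row: [can survive without surfacing, has flippers, is it a fish?]
  #the 0/1 values are assumed from the Figure 2 table (1 = yes, 0 = no)
  dataSet = [[1, 1, 'yes'],
             [1, 1, 'yes'],
             [1, 0, 'no'],
             [0, 1, 'no'],
             [0, 1, 'no']]
  labels = ['no surfacing', 'flippers']   #human-readable feature names
  return dataSet, labels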

To maximize accuracy, should we first test "can it survive without surfacing" or "does it have flippers"? We need a selection criterion; common ones include information theory and Gini impurity, and here we use the former. The goal is to choose the feature whose split yields the largest information gain on the labels. Information gain is the base entropy of the original dataset's labels minus the label entropy after the split; in other words, a large information gain means the entropy shrinks and the dataset becomes more ordered. Entropy is computed as in Equation 1:

H = -\sum_{i=1}^{n} p(x_i) \log_2 p(x_i)    (Equation 1)

where p(x_i) is the fraction of samples carrying label x_i.

With this guiding principle in place, we can move on to the code. First, the entropy computation:

from math import log

def calcShannonEnt(dataSet):
  numEntries = len(dataSet)
  labelCounts = {}
  for featVec in dataSet:   #count the occurrences of each class label
    currentLabel = featVec[-1]   #the class label sits in the last column
    labelCounts[currentLabel] = labelCounts.get(currentLabel, 0) + 1
  shannonEnt = 0.0
  for key in labelCounts:
    prob = float(labelCounts[key]) / numEntries   #relative frequency of this label
    shannonEnt -= prob * log(prob, 2)   #Shannon entropy, log base 2
  return shannonEnt
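As a quick sanity check (assuming the createDataSet sketch above): the Figure 2 data has two 'yes' labels and three 'no' labels, so its entropy should be -(2/5)·log2(2/5) - (3/5)·log2(3/5) ≈ 0.971:

myDat, labels = createDataSet()
print(calcShannonEnt(myDat))   #prints about 0.9710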

With entropy in hand, the next piece is the code that picks the feature giving the largest information gain:

def splitDataSet(dataSet, axis, value): 
  retDataSet = [] 
  for featVec in dataSet: 
    if featVec[axis] == value: 
      reducedFeatVec = featVec[:axis]   #chop out axis used for splitting 
      reducedFeatVec.extend(featVec[axis+1:]) 
      retDataSet.append(reducedFeatVec) 
  return retDataSet 
   
def chooseBestFeatureToSplit(dataSet): 
  numFeatures = len(dataSet[0]) - 1   #the last column is used for the labels 
  baseEntropy = calcShannonEnt(dataSet) 
  bestInfoGain = 0.0; bestFeature = -1 
  for i in range(numFeatures):    #iterate over all the features 
    featList = [example[i] for example in dataSet]#create a list of all the examples of this feature 
    uniqueVals = set(featList)    #get a set of unique values 
    newEntropy = 0.0 
    for value in uniqueVals: 
      subDataSet = splitDataSet(dataSet, i, value) 
      prob = len(subDataSet)/float(len(dataSet)) 
      newEntropy += prob * calcShannonEnt(subDataSet)    
    infoGain = baseEntropy - newEntropy   #calculate the info gain; ie reduction in entropy 
    if (infoGain > bestInfoGain):    #compare this to the best gain so far; the largest information gain is selected here
      bestInfoGain = infoGain     #if better than current best, set to best 
      bestFeature = i 
  return bestFeature           #returns an integer 
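On the Figure 2 data (again assuming the createDataSet sketch), splitting on feature 0 gives an information gain of about 0.420, versus about 0.171 for feature 1, so feature 0 ('no surfacing') is chosen:

myDat, labels = createDataSet()
print(chooseBestFeatureToSplit(myDat))   #prints 0: 'no surfacing' has the larger gain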

The final if statement shows that the feature with the largest information gain is chosen as the split feature. With the splitting criterion settled, we can move on to building the tree itself. This simply follows the pseudocode at the top: recursively split the data until a stopping condition triggers, and when the recursion finishes the decision tree is built. The code is as follows:

import operator

def majorityCnt(classList):
  classCount = {}
  for vote in classList:   #count votes for each class label
    classCount[vote] = classCount.get(vote, 0) + 1
  sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)   #items(): the original iteritems() is Python 2 only
  return sortedClassCount[0][0]   #the majority class label
 
def createTree(dataSet,labels): 
  classList = [example[-1] for example in dataSet] 
  if classList.count(classList[0]) == len(classList):  
    return classList[0]#stop splitting when all of the classes are equal 
  if len(dataSet[0]) == 1: #stop splitting when there are no more features in dataSet 
    return majorityCnt(classList) 
  bestFeat = chooseBestFeatureToSplit(dataSet) 
  bestFeatLabel = labels[bestFeat] 
  myTree = {bestFeatLabel:{}} 
  del(labels[bestFeat]) 
  featValues = [example[bestFeat] for example in dataSet] 
  uniqueVals = set(featValues) 
  for value in uniqueVals: 
    subLabels = labels[:]    #copy all of labels, so trees don't mess up existing labels 
    myTree[bestFeatLabel][value] = createTree(splitDataSet(dataSet, bestFeat, value),subLabels) 
  return myTree   
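Putting it together on the Figure 2 data (assuming the createDataSet sketch): note that createTree deletes entries from labels as it consumes features, so it is safest to pass a copy.

myDat, labels = createDataSet()
myTree = createTree(myDat, labels[:])   #pass a copy: createTree deletes used labels
print(myTree)
#{'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}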

The decision tree built from the Figure 2 samples is shown in Figure 3:

(Figure 3: the decision tree built from the Figure 2 samples)

With the tree built, we can use it for classification. The classification code is as follows:

def classify(inputTree, featLabels, testVec):
  firstStr = list(inputTree.keys())[0]   #root feature name; bare keys()[0] only works in Python 2
  secondDict = inputTree[firstStr]
  featIndex = featLabels.index(firstStr)   #map the feature name back to its column index
  key = testVec[featIndex]
  valueOfFeat = secondDict[key]
  if isinstance(valueOfFeat, dict):   #internal node: keep descending
    classLabel = classify(valueOfFeat, featLabels, testVec)
  else: classLabel = valueOfFeat   #leaf node: this is the predicted label
  return classLabel
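A brief usage sketch (assuming the createDataSet helper): featLabels is the original feature-name list, which classify needs in order to map a node's feature name back to a column of the test vector.

myDat, labels = createDataSet()
myTree = createTree(myDat, labels[:])
print(classify(myTree, labels, [1, 0]))   #no surfacing=1, flippers=0 -> 'no'
print(classify(myTree, labels, [1, 1]))   #both features present -> 'yes'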

Finally, here is the code to serialize the decision tree, i.e. save the model to disk:

def storeTree(inputTree, filename):
  import pickle
  fw = open(filename, 'wb')   #pickle needs binary mode in Python 3
  pickle.dump(inputTree, fw)
  fw.close()

def grabTree(filename):
  import pickle
  fr = open(filename, 'rb')
  tree = pickle.load(fr)
  fr.close()
  return tree
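A round-trip sketch, continuing from the myTree built above (the filename here is an arbitrary assumption):

storeTree(myTree, 'classifierStorage.pkl')   #filename chosen arbitrarily for illustration
print(grabTree('classifierStorage.pkl'))   #prints the same dict as myTree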

優(yōu)點(diǎn):檢測(cè)速度快

缺點(diǎn):容易過擬合,可以采用修剪的方式來盡量避免

Reference: Machine Learning in Action (Peter Harrington)

That concludes this article; I hope it is helpful for your study.
