
Machine Learning Algorithms: Decision Tree (Continued), Python Implementation

For the theory behind the decision tree algorithm, see: Decision Tree Theory.

The decision tree implementation consists of the following parts:

Loading the dataset

Computing the entropy

Splitting the dataset on a given feature

Choosing the feature with the largest information gain as the splitting attribute

Recursively building the tree

Classifying samples

The pseudocode for the branch-creating function createBranch() is as follows:

Check whether every item in the dataset belongs to the same class:
    if so: return the class label
    else:
        find the best feature for splitting the dataset
        split the dataset
        create a branch node
        for each split subset:
            call createBranch and add the returned result to the branch node
        return the branch node

The following sections describe how each part is implemented.

1. Loading the data

First, create a function that builds a toy dataset. All of the code below lives in a single .py file.

# imports needed by the functions in this file
from math import log
import operator

def createDataSet():
    dataSet = [[1, 1, 'yes'],
               [1, 1, 'yes'],
               [1, 0, 'no'],
               [0, 1, 'no'],
               [0, 1, 'no']]
    labels = ['no surfacing', 'flippers']   # the name of each feature
    # the feature values are already discrete
    return dataSet, labels

2. Computing the Shannon entropy of a given dataset

The Shannon entropy of a dataset D is H(D) = -Σ_i p_i * log2(p_i), where p_i is the fraction of samples belonging to class i.

def calcShannonEnt(dataSet):
    n = len(dataSet)                 # number of samples in the dataset
    labelCounts = {}
    for featVec in dataSet:          # count the occurrences of each class label
        currentLabel = featVec[-1]
        if currentLabel not in labelCounts:
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / n   # convert to float before dividing
        shannonEnt -= prob * log(prob, 2)    # log base 2
    return shannonEnt
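
As a quick sanity check (a usage sketch, assuming both functions above are in the same file): the toy dataset contains 2 'yes' and 3 'no' samples, so H(D) = -(2/5)log2(2/5) - (3/5)log2(3/5) ≈ 0.9710.

myDat, labels = createDataSet()
print(calcShannonEnt(myDat))   # 0.9709505944546686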

3. Splitting the dataset on a given feature

Given a feature index (axis) and a value, the function returns every sample whose feature equals that value, with that feature column removed:

def splitDataSet(dataSet, axis, value):
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            reducedFeatVec = featVec[:axis]        # chop out the axis used for splitting
            reducedFeatVec.extend(featVec[axis+1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet



This function can also be used to count the positive and negative samples for a given feature value.
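
For example (a usage sketch on the toy dataset):

myDat, labels = createDataSet()
print(splitDataSet(myDat, 0, 1))   # [[1, 'yes'], [1, 'yes'], [0, 'no']]
print(splitDataSet(myDat, 0, 0))   # [[1, 'no'], [1, 'no']]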

4. Choosing the best way to split the dataset

Compute the information gain of each feature and choose the feature with the largest gain as the splitting attribute: Gain(A) = H(D) - Σ_v (|D_v| / |D|) * H(D_v), i.e. the reduction in entropy obtained by splitting D on feature A.
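
For the toy dataset this works out as follows (a worked example, using H(D) ≈ 0.9710 from above):

Gain(no surfacing) = 0.9710 - [(2/5) * 0 + (3/5) * 0.9183] ≈ 0.4200
Gain(flippers)     = 0.9710 - [(1/5) * 0 + (4/5) * 1.0000] ≈ 0.1710

so 'no surfacing' would be chosen first. The function below implements this selection: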

def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1       # the last column holds the class labels
    baseEntropy = calcShannonEnt(dataSet)   # entropy of the full dataset
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):            # iterate over all the features
        featList = [example[i] for example in dataSet]   # all values of this feature
        uniqueVals = set(featList)          # the unique values of this feature
        newEntropy = 0.0
        for value in uniqueVals:            # weighted entropy over each split
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy # information gain: reduction in entropy
        if infoGain > bestInfoGain:         # keep the best gain seen so far
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature                      # index of the best feature

This function returns the index of the best feature to split on. The tree is then built by splitting on this feature, after which the information gain is recomputed on each subset to pick the next splitting feature, and so on recursively.
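
Running the selection on the toy dataset matches the worked example above (a usage sketch):

myDat, labels = createDataSet()
print(chooseBestFeatureToSplit(myDat))   # 0, i.e. 'no surfacing'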




5. Recursively building the tree

The code for building the tree is shown below. majorityCnt() returns the most frequent class label; it is used when all features have been consumed but the leaf is still not pure:

def majorityCnt(classList):
    classCount = {}
    for vote in classList:
        if vote not in classCount:
            classCount[vote] = 0
        classCount[vote] += 1
    # sort by count, descending; items() replaces Python 2's iteritems()
    sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]

def createTree(dataSet, labels):
    classList = [example[-1] for example in dataSet]
    if classList.count(classList[0]) == len(classList):
        return classList[0]          # stop splitting when all classes are equal
    if len(dataSet[0]) == 1:         # stop splitting when no features remain
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel: {}}
    del(labels[bestFeat])
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        subLabels = labels[:]        # copy labels so recursion doesn't mutate the caller's list
        myTree[bestFeatLabel][value] = createTree(splitDataSet(dataSet, bestFeat, value), subLabels)
    return myTree
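
Running this on the toy dataset produces a nested dict that represents the tree (a usage sketch; note that createTree deletes entries from labels, so pass a copy if you still need the original list):

myDat, labels = createDataSet()
myTree = createTree(myDat, labels[:])   # pass a copy so labels stays intact
print(myTree)
# {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}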



6. Classifying data with the tree

The classification function walks the tree from the root, following the branch that matches the test vector at each node:

def classify(inputTree, featLabels, testVec):
    firstStr = list(inputTree.keys())[0]      # root feature name; list() is needed in Python 3
    secondDict = inputTree[firstStr]
    featIndex = featLabels.index(firstStr)    # map the feature name to its index in the test vector
    key = testVec[featIndex]
    valueOfFeat = secondDict[key]
    if isinstance(valueOfFeat, dict):         # internal node: recurse
        classLabel = classify(valueOfFeat, featLabels, testVec)
    else:                                     # leaf node: return its class label
        classLabel = valueOfFeat
    return classLabel

Testing
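
A minimal end-to-end run (a sketch; the expected outputs follow from the toy dataset above):

if __name__ == '__main__':
    myDat, labels = createDataSet()
    myTree = createTree(myDat, labels[:])    # keep labels intact for classify()
    print(classify(myTree, labels, [1, 0]))  # -> 'no'
    print(classify(myTree, labels, [1, 1]))  # -> 'yes'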



Attachment: decision tree source code

Original article: http://blog.csdn.net/dream_angel_z/article/details/45965463