Decision Trees
2017-07-03 17:03
Properties of Decision Trees
Advantages: low computational complexity; the output is easy to interpret; insensitive to missing intermediate values; can handle irrelevant feature data. Disadvantage: prone to overfitting.
Applicable data types: numeric and nominal.
Decision Tree Pseudocode
Input: training set $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$; attribute set $A = \{a_1, a_2, \ldots, a_d\}$
Output: a decision tree rooted at node
Procedure createBranch():
    check whether every item in the dataset belongs to the same class
    if so:
        return the class label
    else:
        find the best feature for splitting the dataset
        split the dataset
        create a branch node
        for each subset of the split:
            call createBranch() recursively and add the result to the branch node
        return the branch node
Notes
The best feature is the one that yields the largest information gain on the current dataset.
In general, the larger the information gain, the greater the "purity improvement" obtained by splitting on attribute a.
Decision tree algorithms are prone to overfitting; see the pruning discussion below.
Dataset
From http://archive.ics.uci.edu/ml/machine-learning-databases/lenses/lenses.names
Number of Instances: 24
Number of Attributes: 4 (all nominal)
Attribute Information:
age of the patient:
(1) young
(2) pre-presbyopic
(3) presbyopic
spectacle prescription:
(1) myope
(2) hypermetrope
astigmatic:
(1) no
(2) yes
tear production rate:
(1) reduced
(2) normal
Class Distribution:
hard contact lenses: 4
soft contact lenses: 5
no contact lenses: 15
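main.py below expects a tab-separated lenses.txt with the attribute values spelled out as words. If you only have the numeric lenses.data file from the UCI repository, a conversion sketch like the following can produce it. The code-to-name mappings come from the attribute tables above; the class codes (1 = hard, 2 = soft, 3 = no lenses) and the leading instance-number column are assumptions taken from the UCI lenses.names description.

# Sketch: convert UCI lenses.data (numeric codes) into the tab-separated
# lenses.txt that main.py expects. Attribute code tables are the ones listed
# above; class codes 1/2/3 are assumed per lenses.names.
age = {'1': 'young', '2': 'pre-presbyopic', '3': 'presbyopic'}
prescript = {'1': 'myope', '2': 'hypermetrope'}
astigmatic = {'1': 'no', '2': 'yes'}
tearRate = {'1': 'reduced', '2': 'normal'}
lensClass = {'1': 'hard', '2': 'soft', '3': 'no lenses'}

with open('lenses.data') as src, open('lenses.txt', 'w') as dst:
    for line in src:
        fields = line.split()
        if not fields:
            continue
        # first field is assumed to be the instance number; the rest are codes
        _, a, p, s, t, c = fields
        dst.write('\t'.join([age[a], prescript[p], astigmatic[s],
                             tearRate[t], lensClass[c]]) + '\n')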
Python Implementation of the Decision Tree Algorithm
main.py
import trees
import treePlotter

fr = open('lenses.txt')
# each row: four attribute values plus the class label, tab-separated
lenses = [inst.strip().split('\t') for inst in fr.readlines()]
lensesLabels = ['age', 'prescript', 'astigmatic', 'tearRate']
lensesTree = trees.createTree(lenses, lensesLabels)
treePlotter.createPlot(lensesTree)
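After running main.py, lensesTree is a nested dict, and printing it is a quick sanity check before plotting. With the full 24-instance set the root split should come out as tearRate, since a reduced tear production rate immediately implies no lenses; the comment below is an illustration of the shape, not verified output.

import pprint
pprint.pprint(lensesTree)  # e.g. {'tearRate': {'reduced': 'no lenses', 'normal': {...}}}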
trees.createTree
# input: dataSet, labels (attribute names)
# output: decision tree as a nested dict
def createTree(dataSet, labels):
    classList = [example[-1] for example in dataSet]
    if classList.count(classList[0]) == len(classList):
        return classList[0]  # stop splitting when all of the classes are equal
    if len(dataSet[0]) == 1:  # stop splitting when there are no more features in dataSet
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel: {}}
    del(labels[bestFeat])
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        subLabels = labels[:]  # copy labels so recursion doesn't mess up the existing list
        myTree[bestFeatLabel][value] = createTree(splitDataSet(dataSet, bestFeat, value), subLabels)
    return myTree
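A classifier for the finished tree is not shown above. A minimal sketch of the nested-dict traversal follows; classify here is my own reconstruction, mirroring the helper of the same name in Machine Learning in Action. Note that createTree deletes entries from the labels list passed to it, so hand classify the full, original label list.

# Minimal classification sketch (assumption: mirrors the book's trees.classify).
# featLabels must be the original, unmodified label list.
def classify(inputTree, featLabels, testVec):
    if not isinstance(inputTree, dict):  # reached a leaf
        return inputTree
    featLabel = next(iter(inputTree))    # the attribute this node splits on
    featIndex = featLabels.index(featLabel)
    subtree = inputTree[featLabel].get(testVec[featIndex])
    if subtree is None:                  # value unseen during training
        return None
    return classify(subtree, featLabels, testVec)

# usage sketch:
# classify(lensesTree, ['age', 'prescript', 'astigmatic', 'tearRate'],
#          ['young', 'myope', 'no', 'normal'])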
trees.majorityCnt
import operator

# input: list of class labels
# output: the most frequent class label in the list
# algorithm: count occurrences in a dict (hash), then sort by count
def majorityCnt(classList):
    classCount = {}
    for vote in classList:
        if vote not in classCount:
            classCount[vote] = 0
        classCount[vote] += 1
    sortedClassCount = sorted(classCount.items(),  # iteritems() in Python 2
                              key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]
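The same majority vote can be written more compactly in modern Python with collections.Counter; a drop-in equivalent (ties may break in either direction):

from collections import Counter

def majorityCnt(classList):
    # most_common(1) returns [(label, count)] for the most frequent label
    return Counter(classList).most_common(1)[0][0]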
trees.chooseBestFeatureToSplit
Information gain: $\mathrm{Gain}(D, a) = \mathrm{Ent}(D) - \sum_{v=1}^{V} \frac{|D^v|}{|D|}\,\mathrm{Ent}(D^v)$
D: the dataset to be split;
a: the attribute used to split the current dataset;
V: the number of distinct values attribute a can take.
# input: dataSet
# output: index of the best feature to split on
# algorithm: find the feature with the highest information gain
def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1  # the last column is used for the labels
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):  # iterate over all the features
        featList = [example[i] for example in dataSet]  # all values of this feature
        uniqueVals = set(featList)  # get a set of unique values
        newEntropy = 0.0
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy  # the info gain, i.e. reduction in entropy
        if infoGain > bestInfoGain:  # compare this to the best gain so far
            bestInfoGain = infoGain  # if better than current best, set to best
            bestFeature = i
    return bestFeature  # returns an integer index
trees.calcShannonEnt
Information entropy: $\mathrm{Ent}(D) = -\sum_{k=1}^{|\mathcal{Y}|} p_k \log_2 p_k$
$|\mathcal{Y}|$: the number of classes in dataset D; $p_k$ is the proportion of samples in D belonging to the k-th class.
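As a worked check against the class distribution given above (4 hard, 5 soft, 15 no lenses out of 24 instances):

$\mathrm{Ent}(D) = -\tfrac{4}{24}\log_2\tfrac{4}{24} - \tfrac{5}{24}\log_2\tfrac{5}{24} - \tfrac{15}{24}\log_2\tfrac{15}{24} \approx 0.431 + 0.471 + 0.424 \approx 1.326$

so calcShannonEnt below should return roughly 1.326 bits for the full lenses dataset.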
from math import log

# input: current dataSet
# output: the Shannon entropy Ent(D) of the current dataSet
def calcShannonEnt(dataSet):
    numEntries = len(dataSet)
    labelCounts = {}
    for featVec in dataSet:  # count the unique class labels and their occurrences
        currentLabel = featVec[-1]
        if currentLabel not in labelCounts:
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * log(prob, 2)  # log base 2
    return shannonEnt
trees.splitDataSet
Suppose discrete attribute a has V possible values $\{a^1, a^2, \ldots, a^V\}$. Splitting the sample set D on a produces V branch nodes; the v-th branch node contains all samples in D that take the value $a^v$ on attribute a, denoted $D^v$.
def splitDataSet(dataSet, axis, value):
    '''
    input:
        dataSet: the dataset we'll split
        axis: the feature we'll split on
        value: the value of the feature
    output: the subset of dataSet splitting on the given feature.
    '''
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            reducedFeatVec = featVec[:axis]  # chop out the axis used for splitting
            reducedFeatVec.extend(featVec[axis+1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet
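A quick demo of how these helpers fit together, using a hypothetical toy dataset (two binary features, class label in the last column; the expected outputs in the comments were computed by hand from the formulas above):

# Hypothetical toy dataset: two binary features, class label last.
dataSet = [[1, 1, 'yes'],
           [1, 1, 'yes'],
           [1, 0, 'no'],
           [0, 1, 'no'],
           [0, 1, 'no']]

print(calcShannonEnt(dataSet))            # ~0.971 (2 'yes' vs 3 'no')
print(splitDataSet(dataSet, 0, 1))        # [[1, 'yes'], [1, 'yes'], [0, 'no']]
print(chooseBestFeatureToSplit(dataSet))  # 0: feature 0 has the larger gain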
Decision Tree Visualization
The tree returned by createTree is drawn with Matplotlib annotations via the treePlotter module.
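treePlotter itself is not reproduced here. If Matplotlib is unavailable, a minimal text rendering of the nested dict works as a stand-in; printTree is a name I'm introducing for illustration, not part of treePlotter:

# Minimal text rendering of the nested-dict tree (hypothetical helper).
def printTree(tree, indent=''):
    if not isinstance(tree, dict):    # leaf: print the class label
        print(indent + '-> ' + str(tree))
        return
    featLabel = next(iter(tree))      # attribute this node splits on
    for value, subtree in tree[featLabel].items():
        print('{}[{} = {}]'.format(indent, featLabel, value))
        printTree(subtree, indent + '    ')

# usage sketch: printTree(lensesTree)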
Problems and Solutions
Overfitting
As noted above, decision tree algorithms can overfit. The usual remedy is pruning; pruning strategies fall into "pre-pruning" and "post-pruning".
Pre-pruning
Pre-pruning evaluates each node before splitting during tree construction: if splitting the current node would not improve the tree's generalization performance, splitting stops and the node is marked as a leaf.
Post-pruning
Post-pruning first grows a complete tree from the training set and then examines the non-leaf nodes bottom-up: if replacing a node's subtree with a leaf would improve generalization performance, the subtree is replaced with a leaf. A minimal sketch of this idea follows.
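Neither strategy is implemented in trees.py above. As an illustration only, here is a sketch of reduced-error post-pruning over the nested-dict representation: it reuses the classify sketch from earlier, walks the tree bottom-up, and collapses a subtree into its majority leaf whenever that does not hurt accuracy on a held-out validation set. Validation rows are assumed to keep all feature columns plus the label; accuracy, majorityLeaf, and postPrune are illustrative names, not existing API.

# Illustrative reduced-error post-pruning sketch; assumes the classify()
# helper defined earlier and validation rows shaped like training rows
# (all feature columns, class label last).
def accuracy(tree, featLabels, valSet):
    hits = sum(1 for row in valSet
               if classify(tree, featLabels, row[:-1]) == row[-1])
    return hits / float(len(valSet))

def majorityLeaf(tree):
    # gather every leaf label under this subtree; return the most common
    leaves, stack = [], [tree]
    while stack:
        node = stack.pop()
        if isinstance(node, dict):
            featLabel = next(iter(node))
            stack.extend(node[featLabel].values())
        else:
            leaves.append(node)
    return max(set(leaves), key=leaves.count)

def postPrune(tree, featLabels, valSet):
    if not isinstance(tree, dict) or not valSet:
        return tree  # leaf, or no validation rows reach this node
    featLabel = next(iter(tree))
    featIndex = featLabels.index(featLabel)
    for value, subtree in tree[featLabel].items():
        # recurse with only the validation rows that take this branch
        subVal = [row for row in valSet if row[featIndex] == value]
        tree[featLabel][value] = postPrune(subtree, featLabels, subVal)
    # collapse this node if a single majority leaf does at least as well
    leaf = majorityLeaf(tree)
    if accuracy(leaf, featLabels, valSet) >= accuracy(tree, featLabels, valSet):
        return leaf
    return tree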