
A Brief Summary of the Random Forest Algorithm, with a Python Implementation

Random forest is a very widely used classification and prediction algorithm in data mining, with classification or regression decision trees as its base learners. The essentials of the algorithm:

* From a dataset of size m, draw a bootstrap sample of the same size m, with replacement;

* From the K features, randomly sample a subset for each tree; common choices for the subset size are the square root or the natural logarithm of K;

* Grow each tree fully, without pruning;

* The prediction for each sample is decided by a vote over the trees' predictions (for regression, the average of the trees' leaf-node values).

The documentation of the well-known Python machine learning package scikit-learn gives a fairly thorough introduction to this algorithm: http://scikit-learn.org/stable/modules/ensemble.html#random-forests
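For orientation, here is a minimal sketch (mine, not from the original post) of the same task done with sklearn's RandomForestClassifier. The feature list mirrors the variables kept after the ETL step below, the hyperparameters echo the "best score" settings reported later (min_leaf=30, n_trees=20), and filling missing ages with 0 is purely a simplifying assumption:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv('train.csv')
df['Sexcode'] = (df['Sex'] == 'female').astype(int)  # encode sex as 1/0
X = df[['Pclass', 'Sexcode', 'Age', 'SibSp', 'Parch', 'Fare']].fillna(0)  # crude NaN handling, an assumption
y = df['Survived']

clf = RandomForestClassifier(n_estimators=20, min_samples_leaf=30)
clf.fit(X, y)
print(clf.score(X, y))  # accuracy on the training set, as a rough sanity check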

For personal study and testing, I build and evaluate a model on the classic "Kaggle 101" dataset of Titanic passengers. Competition page and data download: https://www.kaggle.com/c/titanic

The sinking of the Titanic is one of history's most famous maritime disasters. It suddenly struck me that I was no longer facing cold, lifeless numbers: using data mining methods to study a concrete historical event is genuinely interesting. Back to the point: the goal of the model is to predict, from each passenger's features such as sex, age, cabin class, and port of embarkation, whether that passenger survived. It is a very typical binary classification problem. The field names and a few sample rows of the dataset are shown below:

PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22 | 1 | 0 | A/5 21171 | 7.25 | | S
2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Thayer) | female | 38 | 1 | 0 | PC 17599 | 71.2833 | C85 | C
3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26 | 0 | 0 | STON/O2. 3101282 | 7.925 | | S
4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35 | 1 | 0 | 113803 | 53.1 | C123 | S
5 | 0 | 3 | Allen, Mr. William Henry | male | 35 | 0 | 0 | 373450 | 8.05 | | S
Note that SibSp stands for siblings/spouses, i.e. the number of siblings and spouses travelling with the passenger, while Parch stands for parents/children, i.e. the number of parents and children travelling with the passenger.

The full data processing and modelling pipeline is given below, based on Ubuntu + Python 3.4 (the Anaconda scientific computing distribution already bundles the usual packages such as pandas, numpy and sklearn, and is strongly recommended here).

I was too lazy to switch input methods while writing, so most of the comments are in English; the Chinese ones were added afterwards :-)

# -*- coding: utf-8 -*-
"""
@author: kim
"""

from model import *  # load the base-classifier (decision tree) code

# ETL: same procedure applied to the training set and the test set
training = pd.read_csv('train.csv', index_col=0)
test = pd.read_csv('test.csv', index_col=0)
SexCode = pd.DataFrame([1, 0], index=['female', 'male'], columns=['Sexcode'])  # encode sex as 0/1 (female=1, male=0)
training = training.join(SexCode, how='left', on=training.Sex)
training = training.drop(['Name', 'Ticket', 'Embarked', 'Cabin', 'Sex'], axis=1)  # drop variables not used in modelling (name, ticket, embarkation port, cabin) plus the raw Sex column, now encoded
test = test.join(SexCode, how='left', on=test.Sex)
test = test.drop(['Name', 'Ticket', 'Embarked', 'Cabin', 'Sex'], axis=1)
print('ETL IS DONE!')

# MODEL FITTING
# ===============PARAMETER ADJUSTMENT===========
min_leaf = 1
min_dec_gini = 0.0001
n_trees = 5
n_fea = int(math.sqrt(len(training.columns) - 1))
# ==============================================

'''
BEST SCORE: 0.83
min_leaf=30
min_dec_gini=0.001
n_trees=20
'''

# ENSEMBLE BY RANDOM FOREST
FOREST = {}
tmp = list(training.columns)
tmp.pop(tmp.index('Survived'))
feaList = pd.Series(tmp)
for t in range(n_trees):
    feasample = feaList.sample(n=n_fea, replace=False)  # random feature subset for this tree
    fea = feasample.tolist()
    fea.append('Survived')
    subset = training.sample(n=len(training), replace=True)  # bootstrap sample, drawn with replacement
    subset = subset[fea]
    FOREST[t] = tree_grow(subset, 'Survived', min_leaf, min_dec_gini)  # save the tree

# MODEL PREDICTION
# ======================
currentdata = training
output = 'submission_rf_20151116_30_0.001_20'
# ======================

prediction = {}
for r in currentdata.index:  # one row (passenger) at a time
    prediction_vote = {1: 0, 0: 0}
    row = currentdata[currentdata.index == r]
    for n in range(n_trees):
        tree_dict = FOREST[n]  # the n-th tree
        p = model_prediction(tree_dict, row)
        prediction_vote[p] += 1
    vote = pd.Series(prediction_vote)
    prediction[r] = vote.idxmax()  # majority vote across the trees
result = pd.Series(prediction, name='Survived_p')

# result.to_csv(output)

t = training.join(result, how='left')
accuracy = round(len(t[t['Survived'] == t['Survived_p']]) / len(t), 5)
print(accuracy)
The code above implements the random forest itself. As described earlier, a random forest is an ensemble of decision trees. At each split, the Gini index measures the "impurity" of the current node: if partitioning the data at some value of some feature produces the largest decrease in Gini (i.e. the most significant reduction in the impurity of the target variable), that feature and split point are chosen as the current best split. The decision tree code follows.
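In formulas (my notation, not from the original post): for a binary target, let p be the proportion of positives at node S, and let a split partition S into children L and R. Then

\mathrm{Gini}(S) = 1 - p^2 - (1-p)^2, \qquad
\Delta\mathrm{Gini} = \mathrm{Gini}(S) - \frac{|L|}{|S|}\,\mathrm{Gini}(L) - \frac{|R|}{|S|}\,\mathrm{Gini}(R)

The split maximizing \Delta\mathrm{Gini} is selected; in the code below, tree_grow stops splitting when this decrease falls to min_dec_gini or below, or when a node contains no more than min_leaf samples.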

# -*- coding: utf-8 -*-
"""
@author: kim
"""

import pandas as pd
import math


def tree_grow(dataframe, target, min_leaf, min_dec_gini):
    tree = {}  # start a new (sub)tree
    is_not_leaf = (len(dataframe) > min_leaf)
    if is_not_leaf:
        fea, sp, gd = best_split_col(dataframe, target)
        if gd > min_dec_gini:
            tree['fea'] = fea
            tree['val'] = sp
            l, r = dataSplit(dataframe, fea, sp)
            l = l.drop(fea, axis=1)  # a used feature is not reused further down this path
            r = r.drop(fea, axis=1)
            tree['left'] = tree_grow(l, target, min_leaf, min_dec_gini)
            tree['right'] = tree_grow(r, target, min_leaf, min_dec_gini)
        else:  # best split does not decrease impurity enough: return a leaf
            return leaf(dataframe[target])
    else:
        return leaf(dataframe[target])

    return tree


def leaf(class_label):
    # a leaf predicts the majority class among the samples that reach it
    tmp = {}
    for i in class_label:
        if i in tmp:
            tmp[i] += 1
        else:
            tmp[i] = 1
    s = pd.Series(tmp).sort_values(ascending=False)

    return s.index[0]


def gini_cal(class_label):
    # Gini impurity of a 0/1 label vector
    p_1 = sum(class_label) / len(class_label)
    p_0 = 1 - p_1
    gini = 1 - (pow(p_0, 2) + pow(p_1, 2))

    return gini


def dataSplit(dataframe, split_fea, split_val):
    left_node = dataframe[dataframe[split_fea] <= split_val]
    right_node = dataframe[dataframe[split_fea] > split_val]

    return left_node, right_node


def best_split_col(dataframe, target_name):
    best_fea = ''
    best_split_point = 0
    col_list = list(dataframe.columns)
    col_list.remove(target_name)
    gini_0 = gini_cal(dataframe[target_name])
    n = len(dataframe)
    gini_dec = -99999999
    for col in col_list:
        node = dataframe[[col, target_name]]
        unique = node.groupby(col).count().index
        for split_point in unique:  # try every unique value as a split point
            left_node, right_node = dataSplit(node, col, split_point)
            if len(left_node) > 0 and len(right_node) > 0:
                gini_col = (gini_cal(left_node[target_name]) * (len(left_node) / n)
                            + gini_cal(right_node[target_name]) * (len(right_node) / n))
                if (gini_0 - gini_col) > gini_dec:
                    gini_dec = gini_0 - gini_col  # decrease of impurity
                    best_fea = col
                    best_split_point = split_point

    return best_fea, best_split_point, gini_dec


def model_prediction(model, row):  # row is a one-row DataFrame
    fea = model['fea']
    val = model['val']
    branch = model['left'] if row[fea].tolist()[0] <= val else model['right']
    if isinstance(branch, dict):  # internal node: keep descending
        return model_prediction(branch, row)

    return branch  # leaf: the predicted class
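A quick way to check these functions in isolation is to grow a single tree on a toy frame and predict one row. This smoke test is my own, not part of the original post:

import pandas as pd

toy = pd.DataFrame({'x': [1, 2, 3, 4, 5, 6],
                    'Survived': [0, 0, 0, 1, 1, 1]})
tree = tree_grow(toy, 'Survived', min_leaf=1, min_dec_gini=0.0001)
print(tree)  # -> {'fea': 'x', 'val': 3, 'left': 0, 'right': 1}
row = toy[toy.index == 0]           # a one-row DataFrame, as model_prediction expects
print(model_prediction(tree, row))  # -> 0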


In fact, there is still much room to improve the efficiency of the code above: even on a dataset this small, a larger input parameter, say 100 trees, makes it noticeably slow. Also, after submitting the predictions to Kaggle for evaluation, the accuracy on the test set turns out to be not very high, slightly below the accuracy obtained with the corresponding sklearn class (0.77512) :-(  To push accuracy higher, there are two broad directions: construct new features, and tune the parameters of the current model; a small feature-engineering sketch follows.
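As an illustration of the first direction, here is a hedged sketch (my own, not from the original post) of two features often derived on this dataset: family size computed from SibSp and Parch, and the passenger's title extracted from the Name column. Column names follow train.csv:

import pandas as pd

df = pd.read_csv('train.csv')

# family size: the passenger plus accompanying siblings/spouses and parents/children
df['FamilySize'] = df['SibSp'] + df['Parch'] + 1

# title ("Mr", "Mrs", "Miss", ...) parsed out of the Name column,
# e.g. "Braund, Mr. Owen Harris" -> "Mr"
df['Title'] = df['Name'].str.extract(r',\s*([^.]+)\.', expand=False).str.strip()

print(df[['Name', 'Title', 'FamilySize']].head())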

This is only meant as a starting point; suggestions on the modelling approach and on the implementation of the algorithm are very welcome.

Reposted from: http://blog.csdn.net/lo_cima/article/details/50533010