Kaggle House Price Prediction with Ridge / RandomForest / cross_validation

2017-06-02 09:59
Official page of the Kaggle house price prediction competition

Environment: Windows 10 64-bit + Sublime Text 3 + Anaconda 2 64-bit (Python 2) + numpy + pandas + matplotlib + sklearn

Step 0: Import the required packages

# coding:utf-8
# Note: when reading files, path separators are \\ on Windows but / on Linux

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestRegressor


Step 1: Read the data

The files are organized as follows: a house price folder contains house_price.py and an input folder.

The input folder holds the four files downloaded from https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data: train.csv, test.csv, sample_submission.csv, and data_description.txt.

# Read the csv files into DataFrames so pandas can be used for preprocessing
# Uncomment the print statements to inspect the output
train_df = pd.read_csv(".\\input\\train.csv",index_col = 0)
test_df = pd.read_csv('.\\input\\test.csv',index_col = 0)
# print train_df.shape
# print test_df.shape
# print train_df.head()  # shows the first five rows by default: 5 rows, 80 columns
# print test_df.head()   # 5 rows, 79 columns


Step 2: Combine the data

We do this mainly to make preprocessing with the DataFrame more convenient; once all the preprocessing is done, we split the sets apart again. (In a real project you would not do this.) First, since SalePrice is our training target, it appears only in the training set and not in the test set, so we pull the SalePrice column out to get it out of the way.

# Compare the distribution of SalePrice before and after the log1p transform
prices = pd.DataFrame({'price':train_df['SalePrice'],'log(price+1)':np.log1p(train_df['SalePrice'])})
# ps = prices.hist()
# plt.plot()
# plt.show()

# log1p is log(1+x); it smooths out the skewed label
y_train = np.log1p(train_df.pop('SalePrice'))
all_df = pd.concat((train_df,test_df),axis = 0)
# print all_df.shape
# print y_train.head()
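
Since we train on log(price+1), the predictions must be mapped back with expm1 at the end. As a quick sanity check (illustrative, not part of the original script), log1p and expm1 are exact inverses:

# expm1 undoes log1p, so nothing is lost by training in log space
x = np.array([0.0, 100.0, 200000.0])
print np.allclose(np.expm1(np.log1p(x)), x)  # True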


Step 3: Feature transformation

Fix the variable types: MSSubClass is really a category, but pandas has no way of knowing that. When read into a DataFrame, a numeric code like this is stored as a number by default, which is misleading, so we convert it back to a string.

print all_df['MSSubClass'].dtypes
all_df['MSSubClass'] = all_df['MSSubClass'].astype(str)
print all_df['MSSubClass'].dtypes
print all_df['MSSubClass'].value_counts()


Convert the categorical variables to a numerical representation: when we encode categories as numbers, remember that numbers carry an ordering, so encoding categories as arbitrary integers can mislead the model later. Instead we use One-Hot encoding, and pandas' built-in get_dummies method does One-Hot in a single call.

print pd.get_dummies(all_df['MSSubClass'],prefix = 'MSSubClass').head()
all_dummy_df = pd.get_dummies(all_df)
print all_dummy_df.head()


Handle the numerical variables: for example, some values are missing.

print all_dummy_df.isnull().sum().sort_values(ascending = False).head(11)
# Fill missing values with the column mean
mean_cols = all_dummy_df.mean()
print mean_cols.head(10)
all_dummy_df = all_dummy_df.fillna(mean_cols)
print all_dummy_df.isnull().sum().sum()


Standardize the numerical data: generally speaking, regression models are sensitive to scale, so it is best to bring the source data into a common distribution and avoid large gaps between features. We obviously should not standardize the 0/1 One-Hot columns; the targets are the columns that were numerical to begin with.

numeric_cols = all_df.columns[all_df.dtypes != 'object']
print numeric_cols
numeric_col_means = all_dummy_df.loc[:,numeric_cols].mean()
numeric_col_std = all_dummy_df.loc[:,numeric_cols].std()
all_dummy_df.loc[:,numeric_cols] = (all_dummy_df.loc[:,numeric_cols] - numeric_col_means) / numeric_col_std
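
The same transform can also be done with sklearn's StandardScaler; a minimal alternative sketch (not the original code; note pandas' std() uses ddof=1 while StandardScaler uses ddof=0, so the results differ very slightly):

from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# fit_transform applies (x - mean) / std to each column
all_dummy_df[numeric_cols] = scaler.fit_transform(all_dummy_df[numeric_cols])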


Step 4: Build the models

# After preprocessing, split the data back into training and test sets
dummy_train_df = all_dummy_df.loc[train_df.index]
dummy_test_df = all_dummy_df.loc[test_df.index]
print dummy_train_df.shape,dummy_test_df.shape

# Convert the DataFrames to numpy arrays, which play better with sklearn
X_train = dummy_train_df.values
X_test = dummy_test_df.values


Ridge Regression

alphas = np.logspace(-3,2,50)
test_scores = []
for alpha in alphas:
    clf = Ridge(alpha)
    # cross_val_score returns negative MSE; negate it and take the square root to get RMSE
    test_score = np.sqrt(-cross_val_score(clf,X_train,y_train,cv = 10,scoring = 'neg_mean_squared_error'))
    test_scores.append(np.mean(test_score))
plt.plot(alphas,test_scores)
plt.title('Alpha vs CV Error')
plt.show()


At roughly alpha = 10 to 20, the score reaches about 0.135.
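
To read the best alpha off programmatically instead of eyeballing the plot, a minimal sketch (assuming alphas and test_scores from the loop above are still in scope):

best_alpha = alphas[np.argmin(test_scores)]
print best_alpha  # the alpha with the lowest mean CV RMSE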

Random Forest

max_features = [.1,.3,.5,.7,.9,.99]
test_scores = []
for max_feat in max_features:
    clf = RandomForestRegressor(n_estimators = 200,max_features = max_feat)
    test_score = np.sqrt(-cross_val_score(clf,X_train,y_train,cv = 5,scoring = 'neg_mean_squared_error'))
    test_scores.append(np.mean(test_score))
plt.plot(max_features,test_scores)
plt.title('Max Features vs CV Error')
plt.show()


At max_features = 0.3, the best score of about 0.137 is reached.
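
The same search could also be done with sklearn's GridSearchCV; a sketch of this alternative to the manual loop above (not the original code):

from sklearn.model_selection import GridSearchCV
param_grid = {'max_features':[.1,.3,.5,.7,.9,.99]}
grid = GridSearchCV(RandomForestRegressor(n_estimators = 200),param_grid,cv = 5,scoring = 'neg_mean_squared_error')
grid.fit(X_train,y_train)
print grid.best_params_  # should agree with the plot above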

Step 5: Ensemble

Here we use a stacking-style idea to combine the strengths of two or more models; in this simple version we just average their predictions.

First, take the best parameters found above and build our final models.

ridge = Ridge(alpha = 15)
rf = RandomForestRegressor(n_estimators = 500,max_features = .3)
ridge.fit(X_train,y_train)
rf.fit(X_train,y_train)

y_ridge = np.expm1(ridge.predict(X_test))
y_rf = np.expm1(rf.predict(X_test))

y_final = (y_ridge + y_rf) / 2
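
The average above is really a simple blend. For comparison, a minimal sketch of actual stacking, fitting a meta-model on out-of-fold predictions (illustrative only; it reuses X_train, y_train, X_test and the two fitted models from above):

from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

# out-of-fold predictions, so the meta-model never sees predictions made
# on data the base model was trained on
oof_ridge = cross_val_predict(Ridge(alpha = 15),X_train,y_train,cv = 5)
oof_rf = cross_val_predict(RandomForestRegressor(n_estimators = 200,max_features = .3),X_train,y_train,cv = 5)
meta = LinearRegression()
meta.fit(np.column_stack([oof_ridge,oof_rf]),y_train)
y_stack = np.expm1(meta.predict(np.column_stack([ridge.predict(X_test),rf.predict(X_test)])))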


Step 6: Submit the results

Pay close attention to the submission format, including small details such as capitalization, the index, and the column headers.

submission_df = pd.DataFrame(data = {'Id':test_df.index,'SalePrice':y_final})
print submission_df.head(10)
submission_df.to_csv('.\\input\\submission.csv',columns = ['Id','SalePrice'],index = False)


The full code is below:

# coding:utf-8
# Note: path separators are \\ on Windows but / on Linux

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestRegressor

# Layout: a house price folder contains house_price.py and an input folder
# The input folder holds the four files downloaded from https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data: train.csv, test.csv, sample_submission.csv and data_description.txt

# step 1: inspect the source data and read the csv files into DataFrames
train_df = pd.read_csv(".\\input\\train.csv",index_col = 0)
test_df = pd.read_csv('.\\input\\test.csv',index_col = 0)
# print train_df.shape
# print test_df.shape
# print train_df.head() # shows the first five rows by default: 5 rows, 80 columns
# print test_df.head() # 5 rows, 79 columns

# step 2: combine the data and preprocess it
prices = pd.DataFrame({'price':train_df['SalePrice'],'log(price+1)':np.log1p(train_df['SalePrice'])})
# ps = prices.hist()
# plt.plot()
# plt.show()

y_train = np.log1p(train_df.pop('SalePrice'))
all_df = pd.concat((train_df,test_df),axis = 0)
# print all_df.shape
# print y_train.head()

# step 3: feature transformation
print all_df['MSSubClass'].dtypes
all_df['MSSubClass'] = all_df['MSSubClass'].astype(str)
print all_df['MSSubClass'].dtypes
print all_df['MSSubClass'].value_counts()
# convert the categorical variables to a numerical (One-Hot) representation
# get_dummies does One-Hot encoding in a single call
print pd.get_dummies(all_df['MSSubClass'],prefix = 'MSSubClass').head()
all_dummy_df = pd.get_dummies(all_df)
print all_dummy_df.head()

# handle the numerical variables
print all_dummy_df.isnull().sum().sort_values(ascending = False).head(11)
# fill missing values with the column mean
mean_cols = all_dummy_df.mean()
print mean_cols.head(10)
all_dummy_df = all_dummy_df.fillna(mean_cols)
print all_dummy_df.isnull().sum().sum()

# standardize the numerical data (leave the One-Hot 0/1 columns alone)
numeric_cols = all_df.columns[all_df.dtypes != 'object']
print numeric_cols
numeric_col_means = all_dummy_df.loc[:,numeric_cols].mean()
numeric_col_std = all_dummy_df.loc[:,numeric_cols].std()
all_dummy_df.loc[:,numeric_cols] = (all_dummy_df.loc[:,numeric_cols] - numeric_col_means) / numeric_col_std

# step 4: build the models
# after preprocessing, split the data back into training and test sets
dummy_train_df = all_dummy_df.loc[train_df.index]
dummy_test_df = all_dummy_df.loc[test_df.index]
print dummy_train_df.shape,dummy_test_df.shape

# convert the DataFrames to numpy arrays, which play better with sklearn

X_train = dummy_train_df.values
X_test = dummy_test_df.values

# Ridge Regression
# alphas = np.logspace(-3,2,50)
# test_scores = []
# for alpha in alphas:
#     clf = Ridge(alpha)
#     test_score = np.sqrt(-cross_val_score(clf,X_train,y_train,cv = 10,scoring = 'neg_mean_squared_error'))
#     test_scores.append(np.mean(test_score))
# plt.plot(alphas,test_scores)
# plt.title('Alpha vs CV Error')
# plt.show()

# random forest
# max_features = [.1,.3,.5,.7,.9,.99]
# test_scores = []
# for max_feat in max_features:
#     clf = RandomForestRegressor(n_estimators = 200,max_features = max_feat)
#     test_score = np.sqrt(-cross_val_score(clf,X_train,y_train,cv = 5,scoring = 'neg_mean_squared_error'))
#     test_scores.append(np.mean(test_score))
# plt.plot(max_features,test_scores)
# plt.title('Max Features vs CV Error')
# plt.show()

# step 5: ensemble
# use a stacking-style idea to combine the strengths of two or more models (here, a simple average of the two predictions)

ridge = Ridge(alpha = 15)
rf = RandomForestRegressor(n_estimators = 500,max_features = .3)
ridge.fit(X_train,y_train)
rf.fit(X_train,y_train)
y_ridge = np.expm1(ridge.predict(X_test))
y_rf = np.expm1(rf.predict(X_test))
y_final = (y_ridge + y_rf) / 2

# step 6: submit the results
submission_df = pd.DataFrame(data = {'Id':test_df.index,'SalePrice':y_final})
print submission_df.head(10)
submission_df.to_csv('.\\input\\submission.csv',columns = ['Id','SalePrice'],index = False)