
Sharing practical experience with feature engineering in machine learning, with directly usable code

2017-08-02

Feature selection (two broad categories: selection based on the feature's own statistics, and selection with the help of a model)

1. Selecting features by their own variance

Select the features whose variance is above the cut-off. With threshold = 0.9, the code below keeps features whose variance exceeds 0.9 * (1 - 0.9) = 0.09, the variance of a Bernoulli feature with p = 0.9, so this is aimed at (near-)binary features. X is the feature matrix, here a pandas DataFrame.

from sklearn.feature_selection import VarianceThreshold

threshold = 0.90
vt = VarianceThreshold().fit(X)

# Find the names of the features whose variance clears the p * (1 - p) cut-off
feat_var_threshold = X.columns[vt.variances_ > threshold * (1 - threshold)]
print(feat_var_threshold)
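If you only need the reduced matrix rather than the column names, VarianceThreshold can also do the filtering directly; a minimal sketch, with the same X as above:

from sklearn.feature_selection import VarianceThreshold

# Keep only the columns whose variance is strictly above the 0.9 * (1 - 0.9) cut-off
vt = VarianceThreshold(threshold=0.9 * (1 - 0.9))
X_reduced = vt.fit_transform(X)
print(X_reduced.shape)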

2. Selecting features with a model algorithm, e.g. a random forest (RF)

from sklearn.ensemble import RandomForestClassifier
import pandas as pd

model = RandomForestClassifier()
model.fit(X, Y)

# Rank the features by the importance the forest assigns them
feature_imp = pd.DataFrame(model.feature_importances_, index=X.columns, columns=["importance"])
feat_imp_20 = feature_imp.sort_values("importance", ascending=False).head(20).index
print(feat_imp_20)
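To actually use the ranking downstream, a one-line sketch (assuming X is a pandas DataFrame, as above) that keeps only the selected columns:

# Keep only the 20 most important columns for any downstream model
X_top20 = X[feat_imp_20]
print(X_top20.shape)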

Visualizing the feature importances estimated by a tree-based model

import numpy as np
import matplotlib.pyplot as plt

names = list(X.columns)

# Sort the features by importance so the bars read from smallest to largest
indices = np.argsort(model.feature_importances_)

# Plot as a horizontal bar chart
plt.barh(np.arange(len(names)), model.feature_importances_[indices])
plt.yticks(np.arange(len(names)) + 0.25, np.array(names)[indices])
_ = plt.xlabel('Relative importance')
plt.show()

3. Selecting features with SelectKBest and the chi2 test; note that the feature values must be non-negative

from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler

# chi2 requires non-negative values, so scale everything into [0, 1] first
X_minmax = MinMaxScaler(feature_range=(0, 1)).fit_transform(X)
X_scored = SelectKBest(score_func=chi2, k='all').fit(X_minmax, Y)

feature_scoring = pd.DataFrame({'feature': X.columns, 'score': X_scored.scores_})
feat_scored_20 = feature_scoring.sort_values('score', ascending=False).head(20)['feature'].values
print(feat_scored_20)
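If you want the reduced matrix directly instead of scores for every feature, set k to the number you need and call transform; a minimal sketch with the same X_minmax and Y:

# Keep only the 20 highest-scoring features under the chi2 test
selector = SelectKBest(score_func=chi2, k=20).fit(X_minmax, Y)
X_chi2_20 = selector.transform(X_minmax)
print(X_chi2_20.shape)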

4. Selecting features with SelectFromModel

Borrow a model to pick out the features whose importance clears a threshold; you can specify either the number of features you need or the threshold itself.

from sklearn.datasets import load_boston
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV

# Load the boston dataset.
boston = load_boston()
X, y = boston['data'], boston['target']

# We use the base estimator LassoCV since the L1 norm promotes sparsity of features.
clf = LassoCV()

# Set a minimum threshold of 0.25
sfm = SelectFromModel(clf, threshold=0.25)
sfm.fit(X, y)
n_features = sfm.transform(X).shape[1]

# Raise the threshold until the number of features equals two.
# Note that the attribute can be set directly instead of repeatedly
# fitting the metatransformer.
while n_features > 2:
    sfm.threshold += 0.1
    X_transform = sfm.transform(X)
    n_features = X_transform.shape[1]

# Plot the two selected features from X.
plt.title("Features selected from Boston using SelectFromModel with "
          "threshold %0.3f." % sfm.threshold)
feature1 = X_transform[:, 0]
feature2 = X_transform[:, 1]
plt.plot(feature1, feature2, 'r.')
plt.xlabel("Feature number 1")
plt.ylabel("Feature number 2")
plt.ylim([np.min(feature2), np.max(feature2)])
plt.show()

5. RFE with a model

Borrow a linear model and use the magnitude of the coefficient it assigns to each feature to screen for the best ones.

This is recursive feature elimination: the model is trained over and over, and on each round the least important features are pruned and set aside; the remaining features are then screened again, step by step, until every feature has been ranked.

from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Recursively eliminate features until 20 remain
rfe = RFE(LogisticRegression(), n_features_to_select=20)
rfe.fit(X, Y)

feature_rfe_scoring = pd.DataFrame({
    'feature': X.columns,
    'score': rfe.ranking_
})

# ranking_ == 1 marks the features RFE kept
feat_rfe_20 = feature_rfe_scoring[feature_rfe_scoring['score'] == 1]['feature'].values
print(feat_rfe_20)
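If you would rather let cross-validation choose the number of features instead of hard-coding 20, scikit-learn also provides RFECV; a minimal sketch with the same X and Y:

from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression

# RFECV runs RFE inside cross-validation and keeps the feature count
# with the best mean CV score
rfecv = RFECV(LogisticRegression(), step=1, cv=5)
rfecv.fit(X, Y)
print(rfecv.n_features_)                 # number of features chosen by CV
print(X.columns[rfecv.support_].values)  # their names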

6. Stability-based feature selection

This combines subsampling with a feature-selection algorithm. The idea is to run the selector repeatedly on different subsets of the data and of the features, each round recording which features the model's feedback marks as good. Each feature's final score is the number of times it was selected divided by the number of times the feature sets it belongs to were used, and the high-scoring features are kept.

This can be implemented with the following model:

from sklearn.linear_model import RandomizedLogisticRegression as RLR

model = RLR(C=1, scaling=0.5,
            sample_fraction=0.75,
            n_resampling=200, selection_threshold=0.25)
model.fit(X, Y)
print(model.get_support())
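Note that RandomizedLogisticRegression was deprecated and later removed from scikit-learn, so on recent versions the same idea has to be reproduced by hand. A minimal sketch following the description above (subsampling rows only, for brevity), assuming X is a pandas DataFrame of numeric features and Y a pandas Series:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
n_resampling, sample_fraction, selection_threshold = 200, 0.75, 0.25
counts = np.zeros(X.shape[1])

for _ in range(n_resampling):
    # Fit an L1-penalized model on a random 75% subsample of the rows ...
    idx = rng.choice(len(X), size=int(sample_fraction * len(X)), replace=False)
    lr = LogisticRegression(penalty='l1', C=1.0, solver='liblinear')
    lr.fit(X.iloc[idx], Y.iloc[idx])
    # ... and count which features survive (non-zero coefficient)
    counts += (np.abs(lr.coef_) > 1e-6).any(axis=0)

scores = counts / n_resampling  # selection frequency per feature
print(X.columns[scores > selection_threshold].values)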

7. Merging the features selected by the different methods

# Union of the feature names chosen by each of the methods above
features = np.hstack([
    feat_var_threshold,
    feat_imp_20,
    feat_scored_20,
    feat_rfe_20
])
features = np.unique(features)
print(features)

8. Feature evaluation

If you split the features into groups, e.g. numeric versus categorical features, or sparse versus dense ones, you can train the model on each feature set separately and compare how well it does, as sketched below.

For instance, it is often said that dense features suit tree models such as GBDT and XGBoost, while sparse features suit linear models such as LR and Lasso.
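A minimal sketch of such a comparison, assuming the column names have already been split into two hypothetical groups dense_cols and sparse_cols:

from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

# Train each model family on the feature group it is said to suit,
# then compare mean cross-validated scores.
for name, cols, model in [
        ('gbdt + dense', dense_cols, GradientBoostingClassifier()),  # dense_cols: hypothetical list of dense column names
        ('lr + sparse', sparse_cols, LogisticRegression()),          # sparse_cols: hypothetical list of sparse column names
]:
    scores = cross_val_score(model, X[cols], Y, cv=5)
    print(name, scores.mean())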