Credit Card Fraud Detection Case Study (Final)
2018-01-14 09:51
This case study covers two main topics:

1. Sampling methods for imbalanced data
2. The full model-training workflow in sklearn, from the individual building blocks through to the tuning methods
```python
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
```
```python
data = pd.read_csv("creditcard.csv")
data.head()
```
| | Time | V1 | V2 | V3 | V4 | V5 | V6 | V7 | V8 | V9 | … | V21 | V22 | V23 | V24 | V25 | V26 | V27 | V28 | Amount | Class |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.0 | -1.359807 | -0.072781 | 2.536347 | 1.378155 | -0.338321 | 0.462388 | 0.239599 | 0.098698 | 0.363787 | … | -0.018307 | 0.277838 | -0.110474 | 0.066928 | 0.128539 | -0.189115 | 0.133558 | -0.021053 | 149.62 | 0 |
1. Count the samples in each class to see whether the data set is balanced
```python
count_classes = pd.value_counts(data['Class'], sort=True).sort_index()
count_classes.plot(kind='bar')
plt.title("Fraud class histogram")
plt.xlabel("Class")
plt.ylabel("Frequency")
```
2. Standardize the widely varying Amount values to generate a new feature
```python
from sklearn.preprocessing import StandardScaler

# reshape(-1, 1) must be called on the underlying array, not the Series
data['normAmount'] = StandardScaler().fit_transform(data['Amount'].values.reshape(-1, 1))
data = data.drop(['Time', 'Amount'], axis=1)
data.head()
```
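As a sanity check on what StandardScaler does, here is a minimal NumPy equivalent: it subtracts the mean and divides by the (population) standard deviation, so the result has zero mean and unit variance. The amounts below are made up for illustration.

```python
import numpy as np

# Hypothetical transaction amounts, for illustration only
amount = np.array([149.62, 2.69, 378.66, 123.50, 69.99])

# Equivalent of StandardScaler's transform on a single column
norm_amount = (amount - amount.mean()) / amount.std()

print(norm_amount.mean())  # ~0.0
print(norm_amount.std())   # ~1.0
```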
| | V1 | V2 | V3 | V4 | V5 | V6 | V7 | V8 | V9 | V10 | … | V21 | V22 | V23 | V24 | V25 | V26 | V27 | V28 | Class | normAmount |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | -1.359807 | -0.072781 | 2.536347 | 1.378155 | -0.338321 | 0.462388 | 0.239599 | 0.098698 | 0.363787 | 0.090794 | … | -0.018307 | 0.277838 | -0.110474 | 0.066928 | 0.128539 | -0.189115 | 0.133558 | -0.021053 | 0 | 0.244964 |
The undersampling procedure (keep careful track of the row indices throughout)
```python
# .ix is deprecated; use .loc instead
X = data.loc[:, data.columns != 'Class']
y = data.loc[:, data.columns == 'Class']

# Number of data points in the minority class
number_records_fraud = len(data[data.Class == 1])
fraud_indices = np.array(data[data.Class == 1].index)

# Picking the indices of the normal classes
normal_indices = data[data.Class == 0].index

# Out of the indices we picked, randomly select "x" number (number_records_fraud)
random_normal_indices = np.random.choice(normal_indices, number_records_fraud, replace=False)
random_normal_indices = np.array(random_normal_indices)

# Appending the 2 sets of indices
under_sample_indices = np.concatenate([fraud_indices, random_normal_indices])

# Under-sampled dataset
under_sample_data = data.iloc[under_sample_indices, :]
X_undersample = under_sample_data.loc[:, under_sample_data.columns != 'Class']
y_undersample = under_sample_data.loc[:, under_sample_data.columns == 'Class']

# Showing the ratio
print("Percentage of normal transactions: ",
      len(under_sample_data[under_sample_data.Class == 0]) / len(under_sample_data))
print("Percentage of fraud transactions: ",
      len(under_sample_data[under_sample_data.Class == 1]) / len(under_sample_data))
print("Total number of transactions in resampled data: ", len(under_sample_data))
```
3. Splitting the data set
```python
# sklearn.cross_validation was removed; train_test_split now lives in model_selection
from sklearn.model_selection import train_test_split

# Whole dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
print("Number transactions train dataset: ", len(X_train))
print("Number transactions test dataset: ", len(X_test))
print("Total number of transactions: ", len(X_train) + len(X_test))

# Undersampled dataset
X_train_undersample, X_test_undersample, y_train_undersample, y_test_undersample = train_test_split(
    X_undersample, y_undersample, test_size=0.3, random_state=0)
print("")
print("Number transactions train dataset: ", len(X_train_undersample))
print("Number transactions test dataset: ", len(X_test_undersample))
print("Total number of transactions: ", len(X_train_undersample) + len(X_test_undersample))
```
```
Number transactions train dataset:  199364
Number transactions test dataset:  85443
Total number of transactions:  284807

Number transactions train dataset:  688
Number transactions test dataset:  296
Total number of transactions:  984
```
```python
# Recall = TP / (TP + FN)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score, cross_validate
from sklearn.metrics import confusion_matrix, recall_score, classification_report
```
3.(1) The traditional (manual) model-selection approach
```python
def select_model_by_traditional(x_train_data, y_train_data):
    fold = KFold(n_splits=5, shuffle=False)

    # Different C parameters
    c_param_range = [0.01, 0.1, 1, 10, 100]

    results_table = pd.DataFrame(index=range(len(c_param_range)),
                                 columns=['C_parameter', 'Mean recall score'])

    j = 0
    for c_param in c_param_range:
        print('-------------------------------------------')
        print('C parameter: ', c_param)
        print('-------------------------------------------')
        print('')

        recall_accs = []
        # fold.split() yields a (train_indices, test_indices) pair per fold;
        # enumerate(..., start=1) numbers the folds from 1
        for iteration, indices in enumerate(fold.split(x_train_data), start=1):
            # The L1 penalty requires the liblinear solver
            lr = LogisticRegression(C=c_param, penalty='l1', solver='liblinear')

            # Use the training portion of the fold (indices[0]) to fit the model ...
            lr.fit(x_train_data.iloc[indices[0], :], y_train_data.iloc[indices[0], :].values.ravel())

            # ... then predict on the portion held out as the 'test cross validation' (indices[1])
            y_pred_undersample = lr.predict(x_train_data.iloc[indices[1], :].values)

            # Calculate the recall score and append it to the list for the current c_param
            recall_acc = recall_score(y_train_data.iloc[indices[1], :].values, y_pred_undersample)
            recall_accs.append(recall_acc)
            print('Iteration ', iteration, ': recall score = ', recall_acc)

        # The mean of the per-fold recall scores is the metric we keep
        results_table.loc[j, 'C_parameter'] = c_param
        results_table.loc[j, 'Mean recall score'] = np.mean(recall_accs)
        print(results_table.iloc[j])
        j += 1
        print('')
        print('Mean recall score ', np.mean(recall_accs))
        print('')

    # Finally, check which C parameter is the best among those tried
    best_c = results_table.iloc[results_table['Mean recall score'].astype(float).idxmax()]['C_parameter']

    print('*********************************************************************************')
    print('Best model to choose from cross validation is with C parameter = ', best_c)
    print('*********************************************************************************')
    return best_c
```
```python
best_c = select_model_by_traditional(X_train_undersample, y_train_undersample)
```
```
-------------------------------------------
C parameter:  0.01
-------------------------------------------

Iteration  1 : recall score =  0.93150684931506844
Iteration  2 : recall score =  0.9178082191780822
Iteration  3 : recall score =  1.0
Iteration  4 : recall score =  0.97297297297297303
Iteration  5 : recall score =  0.95454545454545459
C_parameter              0.01
Mean recall score    0.955367
Name: 0, dtype: object

Mean recall score  0.95536669920231565

-------------------------------------------
C parameter:  0.1
-------------------------------------------

Iteration  1 : recall score =  0.84931506849315064
Iteration  2 : recall score =  0.86301369863013699
Iteration  3 : recall score =  0.94915254237288138
Iteration  4 : recall score =  0.94594594594594594
Iteration  5 : recall score =  0.90909090909090906
C_parameter               0.1
Mean recall score    0.903304
Name: 1, dtype: object

Mean recall score  0.90330363290660487

-------------------------------------------
C parameter:  1
-------------------------------------------

Iteration  1 : recall score =  0.84931506849315064
Iteration  2 : recall score =  0.87671232876712324
Iteration  3 : recall score =  0.98305084745762716
Iteration  4 : recall score =  0.94594594594594594
Iteration  5 : recall score =  0.90909090909090906
C_parameter                 1
Mean recall score    0.912823
Name: 2, dtype: object

Mean recall score  0.91282301995095116

-------------------------------------------
C parameter:  10
-------------------------------------------

Iteration  1 : recall score =  0.86301369863013699
Iteration  2 : recall score =  0.87671232876712324
Iteration  3 : recall score =  0.98305084745762716
Iteration  4 : recall score =  0.94594594594594594
Iteration  5 : recall score =  0.90909090909090906
C_parameter                10
Mean recall score    0.915563
Name: 3, dtype: object

Mean recall score  0.91556274597834852

-------------------------------------------
C parameter:  100
-------------------------------------------

Iteration  1 : recall score =  0.86301369863013699
Iteration  2 : recall score =  0.87671232876712324
Iteration  3 : recall score =  0.98305084745762716
Iteration  4 : recall score =  0.94594594594594594
Iteration  5 : recall score =  0.90909090909090906
C_parameter               100
Mean recall score    0.915563
Name: 4, dtype: object

Mean recall score  0.91556274597834852

*********************************************************************************
Best model to choose from cross validation is with C parameter =  0.01
*********************************************************************************
```
(2) Model selection with cross_validate
```python
def select_model_by_cross_validate(x_train_data, y_train_data):
    fold = KFold(n_splits=5, shuffle=False)

    # Different C parameters
    c_param_range = [0.01, 0.1, 1, 10]

    result_table = pd.DataFrame(index=range(len(c_param_range)),
                                columns=['C_parameter', 'Recall_score'])
    i = 0
    for c_param in c_param_range:
        print('-------------------------------------------')
        print('C parameter: ', c_param)
        print('-------------------------------------------')
        print('')

        lr = LogisticRegression(C=c_param, penalty='l1', solver='liblinear')
        # The core call: cross_validate runs the whole fit/predict/score loop for us
        scores = cross_validate(lr, x_train_data, y_train_data.values.ravel(),
                                scoring='recall', cv=fold, return_train_score=False)
        mean_score = scores['test_score'].mean()
        print(mean_score)
        result_table.loc[i, 'C_parameter'] = c_param
        result_table.loc[i, 'Recall_score'] = mean_score
        i += 1

    best_c = result_table.iloc[result_table['Recall_score'].astype(float).idxmax()]['C_parameter']
    print("the best C is", best_c)
    return best_c
```
```python
select_model_by_cross_validate(X_train_undersample, y_train_undersample)
```
```
-------------------------------------------
C parameter:  0.01
-------------------------------------------
0.955366699202
-------------------------------------------
C parameter:  0.1
-------------------------------------------
0.903303632907
-------------------------------------------
C parameter:  1
-------------------------------------------
0.912823019951
-------------------------------------------
C parameter:  10
-------------------------------------------
0.915562745978
the best C is 0.01
0.01
```
(3) Using GridSearchCV()
```python
from sklearn.model_selection import GridSearchCV

def select_model_by_gridSearchCV(x_train_data, y_train_data):
    fold = KFold(n_splits=5, shuffle=False)
    c_param_range = {'C': [0.01, 0.1, 1, 10]}
    lr = LogisticRegression(penalty='l1', solver='liblinear')
    grid = GridSearchCV(lr, c_param_range, cv=fold, scoring="recall")
    grid.fit(x_train_data, y_train_data.values.ravel())
    print(grid.best_score_)      # best mean cross-validated recall
    print(grid.best_params_)
    print(grid.best_estimator_)
    return grid.best_params_
```
```python
select_model_by_gridSearchCV(X_train_undersample, y_train_undersample)
```
```
0.955342302358
{'C': 0.01}
LogisticRegression(C=0.01, class_weight=None, dual=False, fit_intercept=True,
    intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
    penalty='l1', random_state=None, solver='liblinear', tol=0.0001,
    verbose=0, warm_start=False)
{'C': 0.01}
```
4. Plotting the confusion matrix
When reading the confusion matrix here, what we care about is recall, TP / (TP + FN). With true labels on the rows and predictions on the columns, recall is computed from the bottom row of the matrix: the true positives (bottom right) against the false negatives (bottom left).
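To make the layout concrete, here is a small hand-computed sketch; the matrix entries are invented purely to show where recall and precision live in the 2×2 layout.

```python
import numpy as np

# A made-up 2x2 confusion matrix: rows are true labels, columns are predictions
#             pred 0   pred 1
# true 0   [[  TN  ,    FP  ],
# true 1    [  FN  ,    TP  ]]
cm = np.array([[135, 12],
               [10, 137]])

recall = cm[1, 1] / (cm[1, 0] + cm[1, 1])     # TP / (TP + FN), bottom row
precision = cm[1, 1] / (cm[0, 1] + cm[1, 1])  # TP / (FP + TP), right column

print(recall)     # ~0.932
print(precision)  # ~0.919
```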
```python
# Draw the confusion matrix
def plot_confusion_matrix(cm, classes,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    """
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=0)
    plt.yticks(tick_marks, classes)

    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j],
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
```
Plot the confusion matrix:
```python
import itertools

lr = LogisticRegression(C=best_c, penalty='l1', solver='liblinear')
lr.fit(X_train_undersample, y_train_undersample.values.ravel())
y_pred_undersample = lr.predict(X_test_undersample.values)

# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test_undersample, y_pred_undersample)
np.set_printoptions(precision=2)

print("Recall metric in the testing dataset: ",
      cnf_matrix[1, 1] / (cnf_matrix[1, 0] + cnf_matrix[1, 1]))

# Plot non-normalized confusion matrix
class_names = [0, 1]
plt.figure()
plot_confusion_matrix(cnf_matrix,
                      classes=class_names,
                      title='Confusion matrix')
plt.show()
```
```
Recall metric in the testing dataset:  0.931972789116
```
Run the same K-fold model selection on the full (imbalanced) training set:

```python
best_c = select_model_by_traditional(X_train, y_train)
```
```
-------------------------------------------
C parameter: 0.01
-------------------------------------------
Iteration 1 : recall score = 0.492537313433
Iteration 2 : recall score = 0.602739726027
Iteration 3 : recall score = 0.683333333333
Iteration 4 : recall score = 0.569230769231
Iteration 5 : recall score = 0.45

Mean recall score 0.559568228405

-------------------------------------------
C parameter: 0.1
-------------------------------------------
Iteration 1 : recall score = 0.567164179104
Iteration 2 : recall score = 0.616438356164
Iteration 3 : recall score = 0.683333333333
Iteration 4 : recall score = 0.584615384615
Iteration 5 : recall score = 0.525

Mean recall score 0.595310250644

-------------------------------------------
C parameter: 1
-------------------------------------------
Iteration 1 : recall score = 0.55223880597
Iteration 2 : recall score = 0.616438356164
Iteration 3 : recall score = 0.716666666667
Iteration 4 : recall score = 0.615384615385
Iteration 5 : recall score = 0.5625

Mean recall score 0.612645688837

-------------------------------------------
C parameter: 10
-------------------------------------------
Iteration 1 : recall score = 0.55223880597
Iteration 2 : recall score = 0.616438356164
Iteration 3 : recall score = 0.733333333333
Iteration 4 : recall score = 0.615384615385
Iteration 5 : recall score = 0.575

Mean recall score 0.61847902217

-------------------------------------------
C parameter: 100
-------------------------------------------
Iteration 1 : recall score = 0.55223880597
Iteration 2 : recall score = 0.616438356164
Iteration 3 : recall score = 0.733333333333
Iteration 4 : recall score = 0.615384615385
Iteration 5 : recall score = 0.575

Mean recall score 0.61847902217

*********************************************************************************
Best model to choose from cross validation is with C parameter = 10.0
*********************************************************************************
```
5. Return probability estimates and classify by applying a threshold to them
```python
lr = LogisticRegression(C=0.01, penalty='l1', solver='liblinear')
lr.fit(X_train_undersample, y_train_undersample.values.ravel())
y_pred_undersample_proba = lr.predict_proba(X_test_undersample.values)

thresholds = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]

plt.figure(figsize=(10, 10))
j = 1
for i in thresholds:
    y_test_predictions_high_recall = y_pred_undersample_proba[:, 1] > i

    plt.subplot(3, 3, j)
    j += 1

    # Compute confusion matrix
    cnf_matrix = confusion_matrix(y_test_undersample, y_test_predictions_high_recall)
    np.set_printoptions(precision=2)
    print("Recall metric in the testing dataset: ",
          cnf_matrix[1, 1] / (cnf_matrix[1, 0] + cnf_matrix[1, 1]))

    # Plot non-normalized confusion matrix
    class_names = [0, 1]
    plot_confusion_matrix(cnf_matrix,
                          classes=class_names,
                          title='Threshold >= %s' % i)
```
```
Recall metric in the testing dataset:  1.0
Recall metric in the testing dataset:  1.0
Recall metric in the testing dataset:  1.0
Recall metric in the testing dataset:  0.986394557823
Recall metric in the testing dataset:  0.931972789116
Recall metric in the testing dataset:  0.884353741497
Recall metric in the testing dataset:  0.836734693878
Recall metric in the testing dataset:  0.748299319728
Recall metric in the testing dataset:  0.571428571429
```
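The outputs above show recall falling as the threshold rises, because a stricter threshold misses more true fraud cases while cutting false alarms. The same trade-off can be seen in a tiny self-contained sketch; the probabilities and labels here are made up for illustration.

```python
import numpy as np

# Hypothetical predicted fraud probabilities and true labels
proba  = np.array([0.05, 0.15, 0.35, 0.45, 0.55, 0.65, 0.85, 0.95])
y_true = np.array([0,    0,    0,    1,    0,    1,    1,    1])

for t in (0.1, 0.5, 0.9):
    y_pred = (proba > t).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    print("threshold %.1f  recall %.2f  precision %.2f" % (t, recall, precision))
# threshold 0.1  recall 1.00  precision 0.57
# threshold 0.5  recall 0.75  precision 0.75
# threshold 0.9  recall 0.25  precision 1.00
```

A low threshold catches every fraud at the cost of many false alarms; a high threshold does the opposite.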
Oversampling
Here we oversample the minority class with the SMOTE algorithm (a separate blog post explains SMOTE in detail):

```python
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
```
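The core idea of SMOTE is to synthesize new minority samples by interpolating between a minority point and one of its nearest minority neighbours. The toy function below is an illustration of that idea only, not imblearn's implementation; use `imblearn.over_sampling.SMOTE` in practice.

```python
import numpy as np

rng = np.random.RandomState(0)

def smote_sketch(X_minority, n_new, k=5):
    """Toy SMOTE: for each new sample, pick a minority point, pick one of its
    k nearest minority neighbours, and step a random fraction of the way
    towards it. Illustration only."""
    n = len(X_minority)
    new_samples = []
    for _ in range(n_new):
        i = rng.randint(n)
        # distances from point i to every minority point
        d = np.linalg.norm(X_minority - X_minority[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbours)
        gap = rng.rand()                      # fraction in [0, 1)
        new_samples.append(X_minority[i] + gap * (X_minority[j] - X_minority[i]))
    return np.array(new_samples)

X_min = rng.randn(10, 2)          # a tiny fake minority class
synthetic = smote_sketch(X_min, n_new=20, k=3)
print(synthetic.shape)            # (20, 2)
```

Because every synthetic point lies on a segment between two real minority points, oversampling stays inside the region the minority class already occupies.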
```python
credit_cards = pd.read_csv('creditcard.csv')

columns = credit_cards.columns
# The labels are in the last column ('Class'); remove it to obtain the feature columns
features_columns = columns.delete(len(columns) - 1)

features = credit_cards[features_columns]
labels = credit_cards['Class']
```
```python
features_train, features_test, labels_train, labels_test = train_test_split(
    features, labels, test_size=0.2, random_state=0)
```
```python
oversampler = SMOTE(random_state=0)
# fit_sample was renamed fit_resample in newer imblearn releases
os_features, os_labels = oversampler.fit_resample(features_train, labels_train)
```
```python
len(os_labels[os_labels == 1])
```

```
227454
```
```python
os_features = pd.DataFrame(os_features)
os_labels = pd.DataFrame(os_labels)
best_c = select_model_by_traditional(os_features, os_labels)
```
```
-------------------------------------------
C parameter: 0.01
-------------------------------------------
Iteration 1 : recall score = 0.890322580645
Iteration 2 : recall score = 0.894736842105
Iteration 3 : recall score = 0.968861347792
Iteration 4 : recall score = 0.957595541926
Iteration 5 : recall score = 0.958430881173

Mean recall score 0.933989438728

-------------------------------------------
C parameter: 0.1
-------------------------------------------
Iteration 1 : recall score = 0.890322580645
Iteration 2 : recall score = 0.894736842105
Iteration 3 : recall score = 0.970410534469
Iteration 4 : recall score = 0.959980655302
Iteration 5 : recall score = 0.960178498807

Mean recall score 0.935125822266

-------------------------------------------
C parameter: 1
-------------------------------------------
Iteration 1 : recall score = 0.890322580645
Iteration 2 : recall score = 0.894736842105
Iteration 3 : recall score = 0.970454796946
Iteration 4 : recall score = 0.96014552489
Iteration 5 : recall score = 0.960596168431

Mean recall score 0.935251182603

-------------------------------------------
C parameter: 10
-------------------------------------------
Iteration 1 : recall score = 0.890322580645
Iteration 2 : recall score = 0.894736842105
Iteration 3 : recall score = 0.97065397809
Iteration 4 : recall score = 0.960343368396
Iteration 5 : recall score = 0.960530220596

Mean recall score 0.935317397966

-------------------------------------------
C parameter: 100
-------------------------------------------
Iteration 1 : recall score = 0.890322580645
Iteration 2 : recall score = 0.894736842105
Iteration 3 : recall score = 0.970543321899
Iteration 4 : recall score = 0.960211472725
Iteration 5 : recall score = 0.960903924995

Mean recall score 0.935343628474

*********************************************************************************
Best model to choose from cross validation is with C parameter = 100.0
*********************************************************************************
```
```python
lr = LogisticRegression(C=best_c, penalty='l1', solver='liblinear')
lr.fit(os_features, os_labels.values.ravel())
y_pred = lr.predict(features_test.values)

# Compute confusion matrix
cnf_matrix = confusion_matrix(labels_test, y_pred)
np.set_printoptions(precision=2)
print("Recall metric in the testing dataset: ",
      cnf_matrix[1, 1] / (cnf_matrix[1, 0] + cnf_matrix[1, 1]))

# Plot non-normalized confusion matrix
class_names = [0, 1]
plt.figure()
plot_confusion_matrix(cnf_matrix,
                      classes=class_names,
                      title='Confusion matrix')
plt.show()
```
```
Recall metric in the testing dataset:  0.90099009901
```