
Machine Learning in Practice: Handwritten Digit Recognition - Data Stage Analysis Summary

2016-06-30 23:33
1. Machine Learning in Practice: Handwritten Digit Recognition - First Look at the Data

2. Machine Learning in Practice: Handwritten Digit Recognition - Initial Feature Selection and Linear Recognition

The previous two chapters performed simple feature extraction and a linear regression analysis on the data. With the recognition rate already at 85%, the first step of digit recognition is done: data exploration.
This chapter tests the data against a set of common machine learning algorithms and summarizes the patterns, in preparation for improving accuracy later.
The algorithms chosen here are (time permitting, each will get its own write-up later):

Linear regression
Support vector machine (SVM)
Decision tree
Naive Bayes
KNN
Logistic regression

The last 1,000 samples are held out as the test set; the rest are used for training. All reported numbers are recognition results on this test set.

The statistical measures used are precision, recall, and f1-score (see the earlier article on ML result statistics: precision, recall, and F1-score).
Let's look at the summary numbers first; the complete code and data are provided at the end. The machine learning library used is scikit-learn: http://scikit-learn.org/
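
As a quick refresher (a minimal sketch, not taken from the referenced article): for one class, precision = TP / (TP + FP), recall = TP / (TP + FN), and f1 is their harmonic mean. scikit-learn computes all three per class, which is exactly what the test code below prints:

from sklearn import metrics

# Toy labels, made up purely for illustration.
y_true = [0, 0, 1, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2, 2]

# Per-class precision = TP/(TP+FP), recall = TP/(TP+FN),
# f1 = 2 * precision * recall / (precision + recall).
print(metrics.classification_report(y_true, y_pred))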

Algorithm test results
Each algorithm was tested; the results are as follows:

Algorithm           precision  recall  f1-score  samples  time (s)
LinearRegression    0.84       0.84    0.84      1000     2.402
SVC                 0.85       0.85    0.85      1000     22.72
DecisionTree        0.81       0.81    0.81      1000     0.402
Bayes (GaussianNB)  0.78       0.77    0.77      1000     0.015
KNN                 0.86       0.86    0.86      1000     0.374
LogisticRegression  0.82       0.82    0.82      1000     2.419
Summary:
The recognition results are all fairly close; every algorithm except Naive Bayes scores above 80%.
SVC is by far the most expensive in time, while Naive Bayes is the cheapest.
For handwritten digit recognition, given a large enough sample size, KNN is comparatively the most accurate.

Analysis:
Naive Bayes is a probability-counting algorithm, so it computes very quickly. But because it assumes features are independent and cannot model relationships between them, its recognition rate is somewhat weaker.
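
To make that concrete, here is a minimal hand-rolled sketch (an illustration, not part of the original tests) of what GaussianNB amounts to: training reduces to estimating a per-class mean and variance for each feature in a single pass, and scoring sums per-feature Gaussian log-densities, which is where both the speed and the independence assumption come from:

import numpy as np

# Score one sample x against per-class Gaussian parameters.
# class_means/class_vars map class -> per-feature arrays; class_priors map class -> float.
def gnb_predict(x, class_means, class_vars, class_priors):
    scores = {}
    for c in class_priors:
        mean, var = class_means[c], class_vars[c]
        # Independence assumption: the joint log-likelihood is just the
        # sum of per-feature Gaussian log-densities.
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)
        scores[c] = np.log(class_priors[c]) + log_lik
    return max(scores, key=scores.get)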

Per-digit recognition statistics
(recall / precision for each algorithm, plus row averages)

Digit  LinearReg    SVC          DecisionTree  Bayes        KNN          LogisticReg  Average
       rec   prec   rec   prec   rec   prec    rec   prec   rec   prec   rec   prec   rec    prec
0      0.93  0.83   0.93  0.88   0.92  0.90    0.92  0.88   0.95  0.87   0.95  0.89   0.933  0.875
1      0.96  0.91   0.97  0.97   0.96  0.96    0.97  0.84   0.99  0.98   0.94  0.94   0.965  0.933
2      0.82  0.82   0.84  0.76   0.80  0.74    0.59  0.73   0.81  0.81   0.73  0.76   0.765  0.770
3      0.79  0.72   0.75  0.74   0.71  0.74    0.71  0.61   0.81  0.71   0.73  0.67   0.750  0.698
4      0.68  0.87   0.73  0.84   0.70  0.77    0.64  0.83   0.75  0.85   0.68  0.78   0.697  0.823
5      0.73  0.78   0.75  0.80   0.66  0.73    0.67  0.63   0.78  0.84   0.80  0.80   0.732  0.763
6      0.93  0.93   0.92  0.88   0.89  0.88    0.91  0.84   0.92  0.91   0.89  0.91   0.910  0.892
7      0.89  0.83   0.88  0.85   0.78  0.72    0.74  0.78   0.87  0.88   0.86  0.82   0.837  0.813
8      0.83  0.83   0.83  0.85   0.78  0.79    0.82  0.77   0.81  0.90   0.83  0.80   0.817  0.823
9      0.82  0.86   0.82  0.85   0.86  0.83    0.72  0.80   0.84  0.81   0.79  0.80   0.808  0.825
Data summary:
Overall, digits 1 and 6 are recognized best, while 2, 3, 4, and 5 are recognized worst, which suggests the current features make 2, 3, 4, and 5 hard to tell apart.
Note that in linear regression, the recall for 0 is 10 percentage points above its precision, meaning other digits are frequently misread as 0.
For 4, recall is some 20 percentage points below precision, which says the features that identify 4 are not distinctive: many true 4s are misread as other digits.
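
A good way to double-check these observations (a suggested follow-up, not something this article ran) is a confusion matrix, where row i, column j counts samples of digit i predicted as digit j:

from sklearn import metrics

# y_test and predicted as produced by the test code below.
cm = metrics.confusion_matrix(y_test, predicted)
print(cm)
# A heavy off-diagonal column 0 = other digits misread as 0 (low precision for 0).
# A heavy off-diagonal row 4 = true 4s misread as other digits (low recall for 4).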

Combining the analysis above, the follow-up plan is:
Keep analyzing the data to find more useful features.
Use KNN as the main analysis algorithm (a tuning sketch follows below).
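
Since KNN is the planned workhorse, a natural next step (sketched here as a suggestion; the article has not run this) is to tune its hyperparameters, e.g. the neighbor count and distance weighting, with a small grid search:

from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical tuning pass over the same x_train/y_train used below.
param_grid = {"n_neighbors": [3, 5, 7, 9], "weights": ["uniform", "distance"]}
search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=3)
search.fit(x_train, y_train)
print(search.best_params_, search.best_score_)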

Test code

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVC
import functools
from datetime import datetime
from time import clock

from sklearn import metrics
from tools import load_data, load_source, show_source

def log_time(fn):
    @functools.wraps(fn)
    def wrapper():
        start = clock()
        ret = fn()
        end = clock()
        print("{}  use time: {:.3f} s".format(fn.__name__, end - start))
        return ret
    return wrapper

# Load the training data
data_x, data_y = load_data("train.txt")

# Load the raw source data
source_data = load_source("train.csv")

# Print the data sizes
print("len", len(data_x), len(data_y))

# Number of held-out test samples (the last 1000)
LEN = -1000
# Split into training and test sets.
# Note: only the training file (train.csv) is used here; the separate
# test file (test.csv) is not used yet.
x_train, y_train = data_x[:LEN], data_y[:LEN]
x_test, y_test = data_x[LEN:], data_y[LEN:]

@log_time
def tran_LinearRegression():
    # Define and train 45 pairwise linear classifiers (one-vs-one);
    # each one only separates two digits.
    RegressionDict = {}
    for i in range(10):
        for j in range(i + 1, 10):
            regr = LinearRegression()
            RegressionDict["{}-{}".format(i, j)] = regr
            x_train_tmp = np.array([x_train[index] for index, y in enumerate(y_train) if y in [i, j]])
            y_train_tmp = np.array([0 if y == i else 1 for y in y_train if y in [i, j]])
            regr.fit(x_train_tmp, y_train_tmp)

    # Initialize one vote counter per test sample
    ret_counter = []
    for i in range(len(x_test)):
        ret_counter.append({})
    # Predict with every pairwise classifier and tally the votes
    tmp_dict = {}
    for key, regression in RegressionDict.items():
        a, b = key.split('-')
        y_test_predict = regression.predict(x_test)
        tmp_dict[key] = [a if item <= 0.5 else b for item in y_test_predict]
        for i, item in enumerate(tmp_dict[key]):
            ret_counter[i][item] = ret_counter[i].get(item, 0) + 1

    # The digit with the most votes wins
    predict = []
    for i, item in enumerate(y_test):
        predict.append(int(sorted(ret_counter[i].items(), key=lambda x: x[1], reverse=True)[0][0]))

    return predict

print("\nLinearRegression")
predicted = tran_LinearRegression()
print(metrics.classification_report(y_test, predicted))
print("count:", len(y_test), "ok:", sum([1 for item in range(len(y_test)) if y_test[item]== predicted[item]]))

# Test the other common classifiers
map_predictor = {
    "LogisticRegression": LogisticRegression(),
    "bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "DecisionTree": DecisionTreeClassifier(),
    "SVC": SVC()
}

for key, model in map_predictor.items():
    start = clock()
    print("start: ", start, key, datetime.utcnow())
    model.fit(x_train, y_train)
    end = clock()
    print("end: ", end, key, datetime.utcnow())
    print("{}  use time: {:.3f} s".format(key, end - start))
    predicted = model.predict(x_test)
    print(metrics.classification_report(y_test, predicted))
    print("count:", len(y_test), "ok:", sum([1 for item in range(len(y_test)) if y_test[item] == predicted[item]]))

Output

len 42000 42000

LinearRegression
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scipy/linalg/basic.py:884: RuntimeWarning: internal gelsd driver lwork query error, required iwork dimension not returned. This is likely the result of LAPACK bug 0038, fixed in LAPACK 3.2.2 (released July 21, 2010). Falling back to 'gelss' driver.
warnings.warn(mesg, RuntimeWarning)
tran_LinearRegression  use time: 2.402 s
precision    recall  f1-score   support

0.0       0.84      0.93      0.89        92
1.0       0.91      0.96      0.93       127
2.0       0.79      0.84      0.81        97
3.0       0.73      0.77      0.75        95
4.0       0.87      0.68      0.77       111
5.0       0.79      0.73      0.76        85
6.0       0.92      0.94      0.93       103
7.0       0.83      0.89      0.86       101
8.0       0.83      0.83      0.83        88
9.0       0.87      0.82      0.85       101

avg / total       0.84      0.84      0.84      1000

count: 1000 ok: 843
start:  3.398754 SVC 2016-06-30 11:06:01.412944
end:  26.118339 SVC 2016-06-30 11:06:24.145873
SVC  use time: 22.720 s
precision    recall  f1-score   support

0.0       0.88      0.93      0.91        92
1.0       0.97      0.97      0.97       127
2.0       0.76      0.84      0.80        97
3.0       0.74      0.75      0.74        95
4.0       0.84      0.73      0.78       111
5.0       0.80      0.75      0.78        85
6.0       0.88      0.92      0.90       103
7.0       0.85      0.88      0.86       101
8.0       0.85      0.83      0.84        88
9.0       0.85      0.82      0.83       101

avg / total       0.85      0.85      0.85      1000

count: 1000 ok: 846
start:  26.90291 KNN 2016-06-30 11:06:24.930844
end:  27.276969 KNN 2016-06-30 11:06:25.304922
KNN  use time: 0.374 s
precision    recall  f1-score   support

0.0       0.87      0.95      0.91        92
1.0       0.98      0.99      0.98       127
2.0       0.81      0.81      0.81        97
3.0       0.71      0.81      0.75        95
4.0       0.85      0.75      0.79       111
5.0       0.84      0.78      0.80        85
6.0       0.91      0.92      0.92       103
7.0       0.88      0.87      0.88       101
8.0       0.90      0.81      0.85        88
9.0       0.81      0.84      0.83       101

avg / total       0.86      0.86      0.86      1000

count: 1000 ok: 857
start:  27.368384 DecisionTree 2016-06-30 11:06:25.396350
end:  27.770018 DecisionTree 2016-06-30 11:06:25.798082
DecisionTree  use time: 0.402 s
precision    recall  f1-score   support

0.0       0.88      0.92      0.90        92
1.0       0.98      0.97      0.97       127
2.0       0.77      0.75      0.76        97
3.0       0.77      0.74      0.75        95
4.0       0.75      0.69      0.72       111
5.0       0.75      0.71      0.73        85
6.0       0.86      0.90      0.88       103
7.0       0.71      0.78      0.75       101
8.0       0.80      0.78      0.79        88
9.0       0.83      0.85      0.84       101

avg / total       0.81      0.81      0.81      1000

count: 1000 ok: 815
start:  27.772331 LogisticRegression 2016-06-30 11:06:25.800391
end:  30.191443 LogisticRegression 2016-06-30 11:06:28.219996
LogisticRegression  use time: 2.419 s
precision    recall  f1-score   support

0.0       0.89      0.95      0.92        92
1.0       0.94      0.94      0.94       127
2.0       0.76      0.73      0.75        97
3.0       0.67      0.73      0.70        95
4.0       0.78      0.68      0.73       111
5.0       0.80      0.80      0.80        85
6.0       0.91      0.89      0.90       103
7.0       0.82      0.86      0.84       101
8.0       0.80      0.83      0.82        88
9.0       0.80      0.79      0.80       101

avg / total       0.82      0.82      0.82      1000

count: 1000 ok: 822
start:  30.19377 bayes 2016-06-30 11:06:28.222098
end:  30.209151 bayes 2016-06-30 11:06:28.237481
bayes  use time: 0.015 s
precision    recall  f1-score   support

0.0       0.88      0.92      0.90        92
1.0       0.84      0.97      0.90       127
2.0       0.73      0.59      0.65        97
3.0       0.61      0.71      0.65        95
4.0       0.83      0.64      0.72       111
5.0       0.63      0.67      0.65        85
6.0       0.84      0.91      0.87       103
7.0       0.78      0.74      0.76       101
8.0       0.77      0.82      0.80        88
9.0       0.80      0.72      0.76       101

avg / total       0.78      0.77      0.77      1000

count: 1000 ok: 774


Code and test data download