
Notes on Using scikit-learn's Naive Bayes Classes

2017-05-26 09:18
scikit-learn provides three naive Bayes classifiers: GaussianNB (Gaussian naive Bayes), MultinomialNB (multinomial naive Bayes), and BernoulliNB (Bernoulli naive Bayes).

1. Gaussian naive Bayes: sklearn.naive_bayes.GaussianNB(priors=None)

① Build a simple model with the GaussianNB class

In [1]: import numpy as np
   ...: from sklearn.naive_bayes import GaussianNB
   ...: X = np.array([[-1, -1], [-2, -2], [-3, -3],[-4,-4],[-5,-5], [1, 1], [2,
   ...:   2], [3, 3]])
   ...: y = np.array([1, 1, 1,1,1, 2, 2, 2])
   ...: clf = GaussianNB()  # priors defaults to None
   ...: clf.fit(X,y)
   ...:
Out[1]: GaussianNB(priors=None)

② After training on the training set, inspect the fitted attributes

In [2]: clf.priors  # returns nothing, since priors is None

In [3]: clf.set_params(priors=[0.625, 0.375])  # set the priors parameter
Out[3]: GaussianNB(priors=[0.625, 0.375])

In [4]: clf.priors  # returns the class prior probabilities as a list
Out[4]: [0.625, 0.375]


priors attribute: the prior probability of each class label, as set by the user

In [5]: clf.class_prior_
Out[5]: array([ 0.625,  0.375])

In [6]: type(clf.class_prior_)
Out[6]: numpy.ndarray

class_prior_ attribute: like priors, gives the prior probability of each class label; the difference is that priors returns a list while class_prior_ returns an ndarray
In [7]: clf.class_count_
Out[7]: array([ 5.,  3.])

class_count_ attribute: the number of training samples in each class
In [8]: clf.theta_
Out[8]:
array([[-3., -3.],
       [ 2.,  2.]])

theta_ attribute: the mean of each feature within each class
In [9]: clf.sigma_
Out[9]:
array([[ 2.00000001,  2.00000001],
       [ 0.66666667,  0.66666667]])

sigma_ attribute: the variance of each feature within each class (renamed to var_ in newer scikit-learn versions)
③ Methods

get_params(deep=True): returns a dict of the estimator's parameters (here just priors) and their values

In [10]: clf.get_params(deep=True)
Out[10]: {'priors': [0.625, 0.375]}

In [11]: clf.get_params()
Out[11]: {'priors': [0.625, 0.375]}
set_params(**params): sets the estimator's priors parameter
In [3]: clf.set_params(priors=[ 0.625,  0.375])
Out[3]: GaussianNB(priors=[0.625, 0.375])
fit(X, y, sample_weight=None): trains the model; X is the feature matrix, y the class labels, and sample_weight an optional array of per-sample weights
In [12]: clf.fit(X,y,np.array([0.05,0.05,0.1,0.1,0.1,0.2,0.2,0.2]))  # give the samples different weights
Out[12]: GaussianNB(priors=[0.625, 0.375])

In [13]: clf.theta_
Out[13]:
array([[-3.375, -3.375],
       [ 2.   ,  2.   ]])

In [14]: clf.sigma_
Out[14]:
array([[ 1.73437501,  1.73437501],
       [ 0.66666667,  0.66666667]])

With these unequal sample weights, the mean and variance of class 1 on feature 1 are computed as:

mean = ((-1*0.05)+(-2*0.05)+(-3*0.1)+(-4*0.1)+(-5*0.1))/(0.05+0.05+0.1+0.1+0.1) = -3.375

variance = ((-1+3.375)**2*0.05+(-2+3.375)**2*0.05+(-3+3.375)**2*0.1+(-4+3.375)**2*0.1+(-5+3.375)**2*0.1)/(0.05+0.05+0.1+0.1+0.1) = 1.734375 (reported as 1.73437501 because GaussianNB adds a tiny variance floor for numerical stability)
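The weighted mean and variance above can be reproduced directly with NumPy; a minimal sketch (the tiny difference from the reported 1.73437501 comes from the variance floor mentioned above):

```python
import numpy as np

# Class 1 samples on feature 1 and their weights, from the fit() call above
x = np.array([-1., -2., -3., -4., -5.])
w = np.array([0.05, 0.05, 0.1, 0.1, 0.1])

mean = np.sum(w * x) / np.sum(w)               # weighted mean
var = np.sum(w * (x - mean) ** 2) / np.sum(w)  # weighted variance

print(mean, var)  # matches theta_ and (up to the floor) sigma_ for class 1
```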

partial_fit(X, y, classes=None, sample_weight=None): incremental training. When the training set is too large to load into memory at once, it can be split into chunks and partial_fit called repeatedly to learn the model parameters online. The classes parameter must be specified on the first call to partial_fit; subsequent calls may omit it.
In [18]: import numpy as np
...: from sklearn.naive_bayes import GaussianNB
...: X = np.array([[-1, -1], [-2, -2], [-3, -3],[-4,-4],[-5,-5], [1, 1], [2
...: ,  2], [3, 3]])
...: y = np.array([1, 1, 1,1,1, 2, 2, 2])
...: clf = GaussianNB()  # priors defaults to None
...: clf.partial_fit(X,y,classes=[1,2],sample_weight=np.array([0.05,0.05,0.
...: 1,0.1,0.1,0.2,0.2,0.2]))
...:
Out[18]: GaussianNB(priors=None)

In [19]: clf.class_prior_
Out[19]: array([ 0.4,  0.6])
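As a sketch of the intended chunked workflow, the same data can be fed in two batches; the incrementally updated statistics match a single full fit (a minimal example, assuming both batches share the same feature space):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

X = np.array([[-1, -1], [-2, -2], [-3, -3], [-4, -4],
              [-5, -5], [1, 1], [2, 2], [3, 3]])
y = np.array([1, 1, 1, 1, 1, 2, 2, 2])

inc = GaussianNB()
inc.partial_fit(X[:4], y[:4], classes=[1, 2])  # classes required on the first call
inc.partial_fit(X[4:], y[4:])                  # may be omitted afterwards

full = GaussianNB().fit(X, y)
print(inc.class_count_)                      # [5. 3.] -- same totals as one full fit
print(np.allclose(inc.theta_, full.theta_))  # True
```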

predict(X): outputs the predicted class label for each test sample
In [20]: clf.predict([[-6,-6],[4,5]])
Out[20]: array([1, 2])

predict_proba(X): outputs the predicted probability of each class for the test samples
In [21]: clf.predict_proba([[-6,-6],[4,5]])
Out[21]:
array([[  1.00000000e+00,   4.21207358e-40],
       [  1.12585521e-12,   1.00000000e+00]])

predict_log_proba(X): outputs the log of the predicted probability of each class for the test samples
In [22]: clf.predict_log_proba([[-6,-6],[4,5]])
Out[22]:
array([[  0.00000000e+00,  -9.06654487e+01],
       [ -2.75124782e+01,  -1.12621024e-12]])

score(X, y, sample_weight=None): returns the score (accuracy) of the predictions on the given test samples and labels

In [23]: clf.score([[-6,-6],[-4,-2],[-3,-4],[4,5]],[1,1,2,2])
Out[23]: 0.75

In [24]: clf.score([[-6,-6],[-4,-2],[-3,-4],[4,5]],[1,1,2,2],sample_weight=[0.3
...: ,0.2,0.4,0.1])
Out[24]: 0.59999999999999998
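The weighted score is just a weight-normalized accuracy. A minimal NumPy check, assuming the model's predictions on the four test points are [1, 1, 1, 2], as implied by the unweighted score of 0.75:

```python
import numpy as np

y_true = np.array([1, 1, 2, 2])
y_pred = np.array([1, 1, 1, 2])  # assumed predictions; the third point is misclassified
w = np.array([0.3, 0.2, 0.4, 0.1])

# Weighted accuracy: only the correct predictions (weights 0.3, 0.2, 0.1) count
weighted_acc = np.sum(w * (y_true == y_pred)) / np.sum(w)
print(weighted_acc)  # ~0.6, matching the score above
```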

2. Multinomial naive Bayes: sklearn.naive_bayes.MultinomialNB(alpha=1.0, fit_prior=True, class_prior=None). Mainly used for classification with discrete features, e.g. word counts in text classification, where each feature value is an occurrence count.

Parameters:

alpha: float, optional, default 1.0; the additive (Laplace/Lidstone) smoothing parameter

fit_prior: bool, optional, default True; whether to learn class prior probabilities. If False, all classes get the same prior.

class_prior: array-like of shape (n_classes,), default None; the class prior probabilities

① Build a simple model with MultinomialNB

In [2]: import numpy as np
   ...: from sklearn.naive_bayes import MultinomialNB
   ...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5],[2,5,6,5],[3,4,5,6],[3,5,6,
   ...: 6]])
   ...: y = np.array([1,1,4,2,3,3])
   ...: clf = MultinomialNB(alpha=2.0)
   ...: clf.fit(X,y)
   ...:
Out[2]: MultinomialNB(alpha=2.0, class_prior=None, fit_prior=True)
② After training, inspect the fitted attributes

class_log_prior_: the smoothed log prior probability of each class; its value depends on the fit_prior and class_prior parameters
a. If class_prior is specified, class_log_prior_ is simply log(class_prior), regardless of whether fit_prior is True or False

In [4]: import numpy as np
...: from sklearn.naive_bayes import MultinomialNB
...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5],[2,5,6,5],[3,4,5,6],[3,5,6,
...: 6]])
...: y = np.array([1,1,4,2,3,3])
...: clf = MultinomialNB(alpha=2.0,fit_prior=True,class_prior=[0.3,0.1,0.3,0
...: .2])
...: clf.fit(X,y)
...: print(clf.class_log_prior_)
...: print(np.log(0.3),np.log(0.1),np.log(0.3),np.log(0.2))
...: clf1 = MultinomialNB(alpha=2.0,fit_prior=False,class_prior=[0.3,0.1,0.3
...: ,0.2])
...: clf1.fit(X,y)
...: print(clf1.class_log_prior_)
...:
[-1.2039728  -2.30258509 -1.2039728  -1.60943791]
-1.20397280433 -2.30258509299 -1.20397280433 -1.60943791243
[-1.2039728  -2.30258509 -1.2039728  -1.60943791]
b. If fit_prior is False and class_prior is None, every class gets the same prior, 1/N, where N is the number of classes
In [5]: import numpy as np
   ...: from sklearn.naive_bayes import MultinomialNB
   ...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5],[2,5,6,5],[3,4,5,6],[3,5,6,
   ...: 6]])
   ...: y = np.array([1,1,4,2,3,3])
   ...: clf = MultinomialNB(alpha=2.0,fit_prior=False)
   ...: clf.fit(X,y)
   ...: print(clf.class_log_prior_)
   ...: print(np.log(1/4))
   ...:
[-1.38629436 -1.38629436 -1.38629436 -1.38629436]
-1.38629436112
c. If fit_prior is True and class_prior is None, each class's prior equals its sample count divided by the total number of samples
In [6]: import numpy as np
...: from sklearn.naive_bayes import MultinomialNB
...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5],[2,5,6,5],[3,4,5,6],[3,5,6,
...: 6]])
...: y = np.array([1,1,4,2,3,3])
...: clf = MultinomialNB(alpha=2.0,fit_prior=True)
...: clf.fit(X,y)
...: print(clf.class_log_prior_)  # output ordered by class labels 1, 2, 3, 4
...: print(np.log(2/6),np.log(1/6),np.log(2/6),np.log(1/6))
...:
[-1.09861229 -1.79175947 -1.09861229 -1.79175947]
-1.09861228867 -1.79175946923 -1.09861228867 -1.79175946923

intercept_: class_log_prior_ exposed through the linear-model interface; its value is identical to class_log_prior_
In [7]: clf.class_log_prior_
Out[7]: array([-1.09861229, -1.79175947, -1.09861229, -1.79175947])

In [8]: clf.intercept_
Out[8]: array([-1.09861229, -1.79175947, -1.09861229, -1.79175947])

feature_log_prob_: the log of the (smoothed) conditional probability of each feature given a class; an array of shape (n_classes, n_features)

In [9]: clf.feature_log_prob_
Out[9]:
array([[-2.01490302, -1.45528723, -1.2039728 , -1.09861229],
       [-1.87180218, -1.31218639, -1.178655  , -1.31218639],
       [-1.74919985, -1.43074612, -1.26369204, -1.18958407],
       [-1.79175947, -1.38629436, -1.23214368, -1.23214368]])
Computation of the feature conditional probabilities, using class 1 as an example:
In [10]: print(np.log((1+1+2)/(1+2+3+4+1+3+4+4+4*2)),np.log((2+3+2)/(1+2+3+4+1+
...: 3+4+4+4*2)),np.log((3+4+2)/(1+2+3+4+1+3+4+4+4*2)),np.log((4+4+2)/(1+2+
...: 3+4+1+3+4+4+4*2)))
-2.01490302054 -1.45528723261 -1.20397280433 -1.09861228867
Feature conditional probability = (count of the feature in the class + alpha) / (total count of all features in the class + n_features*alpha); note that the multiplier on alpha in the denominator is the number of features (4 here), not the number of class values.
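This formula can be verified in vectorized form directly from feature_count_ (a minimal check; with four features and alpha=2.0 the denominator term is the `4*2` seen in the calculation above):

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

X = np.array([[1, 2, 3, 4], [1, 3, 4, 4], [2, 4, 5, 5],
              [2, 5, 6, 5], [3, 4, 5, 6], [3, 5, 6, 6]])
y = np.array([1, 1, 4, 2, 3, 3])
clf = MultinomialNB(alpha=2.0).fit(X, y)

alpha, n_features = 2.0, X.shape[1]
fc = clf.feature_count_  # per-class totals of each feature
smoothed = (fc + alpha) / (fc.sum(axis=1, keepdims=True) + alpha * n_features)
print(np.allclose(np.log(smoothed), clf.feature_log_prob_))  # True
```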
coef_: feature_log_prob_ exposed through the linear-model interface; its value is identical to feature_log_prob_
In [11]: clf.coef_
Out[11]:
array([[-2.01490302, -1.45528723, -1.2039728 , -1.09861229],
       [-1.87180218, -1.31218639, -1.178655  , -1.31218639],
       [-1.74919985, -1.43074612, -1.26369204, -1.18958407],
       [-1.79175947, -1.38629436, -1.23214368, -1.23214368]])

class_count_: the number of training samples in each class, in class order

In [12]: clf.class_count_
Out[12]: array([ 2.,  1.,  2.,  1.])

feature_count_: the total count of each feature within each class; an array of shape (n_classes, n_features)

In [13]: clf.feature_count_
Out[13]:
array([[  2.,   5.,   7.,   8.],
       [  2.,   5.,   6.,   5.],
       [  6.,   9.,  11.,  12.],
       [  2.,   4.,   5.,   5.]])

In [14]: print([(1+1),(2+3),(3+4),(4+4)])  # class 1, for example
[2, 5, 7, 8]
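More generally, feature_count_ is simply the column-wise sum of X within each class, with rows ordered as in clf.classes_; a quick NumPy reconstruction:

```python
import numpy as np

X = np.array([[1, 2, 3, 4], [1, 3, 4, 4], [2, 4, 5, 5],
              [2, 5, 6, 5], [3, 4, 5, 6], [3, 5, 6, 6]])
y = np.array([1, 1, 4, 2, 3, 3])

# Sum the rows of X belonging to each class, classes in sorted order (1, 2, 3, 4)
fc = np.array([X[y == c].sum(axis=0) for c in np.sort(np.unique(y))])
print(fc)  # row-for-row equal to clf.feature_count_ above
```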


③ Methods

fit(X, y, sample_weight=None): fits the model on X and y

In [15]: import numpy as np
...: from sklearn.naive_bayes import MultinomialNB
...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5],[2,5,6,5],[3,4,5,6],[3,5,6
...: ,6]])
...: y = np.array([1,1,4,2,3,3])
...: clf = MultinomialNB(alpha=2.0,fit_prior=True)
...: clf.fit(X,y)
...:
Out[15]: MultinomialNB(alpha=2.0, class_prior=None, fit_prior=True)

get_params(deep=True): returns the classifier's parameters as a dict

In [16]: clf.get_params(True)
Out[16]: {'alpha': 2.0, 'class_prior': None, 'fit_prior': True}

partial_fit(X, y, classes=None, sample_weight=None): incremental training for datasets too large to fit in memory; learns the model parameters online. X may be array-like or a sparse matrix. classes must be specified on the first call; subsequent calls may omit it.
In [17]: import numpy as np
...: from sklearn.naive_bayes import MultinomialNB
...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5],[2,5,6,5],[3,4,5,6],[3,5,6
...: ,6]])
...: y = np.array([1,1,4,2,3,3])
...: clf = MultinomialNB(alpha=2.0,fit_prior=True)
...: clf.partial_fit(X,y)
...: clf.partial_fit(X,y,classes=[1,2])
...:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-17-b512d165c9a0> in <module>()
4 y = np.array([1,1,4,2,3,3])
5 clf = MultinomialNB(alpha=2.0,fit_prior=True)
----> 6 clf.partial_fit(X,y)
7 clf.partial_fit(X,y,classes=[1,2])

ValueError: classes must be passed on the first call to partial_fit.

In [18]: import numpy as np
...: from sklearn.naive_bayes import MultinomialNB
...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5],[2,5,6,5],[3,4,5,6],[3,5,6
...: ,6]])
...: y = np.array([1,1,4,2,3,3])
...: clf = MultinomialNB(alpha=2.0,fit_prior=True)
...: clf.partial_fit(X,y,classes=[1,2])
...: clf.partial_fit(X,y)
...:
...:
Out[18]: MultinomialNB(alpha=2.0, class_prior=None, fit_prior=True)

predict(X): predicts on the test set X and outputs the corresponding class labels

In [19]: clf.predict([[1,3,5,6],[3,4,5,4]])
Out[19]: array([1, 1])

predict_log_proba(X): the log probability of each class for the test samples

In [22]: import numpy as np
...: from sklearn.naive_bayes import MultinomialNB
...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5],[2,5,6,5],[3,4,5,6],[3,5,6
...: ,6]])
...: y = np.array([1,1,4,2,3,3])
...: clf = MultinomialNB(alpha=2.0,fit_prior=True)
...: clf.fit(X,y)
...:
Out[22]: MultinomialNB(alpha=2.0, class_prior=None, fit_prior=True)

In [23]: clf.predict_log_proba([[3,4,5,4],[1,3,5,6]])
Out[23]:
array([[-1.27396027, -1.69310891, -1.04116963, -1.69668527],
       [-0.78041614, -2.05601551, -1.28551649, -1.98548389]])

predict_proba(X): outputs the probability of each class for the test samples

In [1]: import numpy as np
...: from sklearn.naive_bayes import MultinomialNB
...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5],[2,5,6,5],[3,4,5,6],[3,5,6,
...: 6]])
...: y = np.array([1,1,4,2,3,3])
...: clf = MultinomialNB(alpha=2.0,fit_prior=True)
...: clf.fit(X,y)
...:
Out[1]: MultinomialNB(alpha=2.0, class_prior=None, fit_prior=True)

In [2]: clf.predict_proba([[3,4,5,4],[1,3,5,6]])
Out[2]:
array([[ 0.27972165,  0.18394676,  0.35304151,  0.18329008],
       [ 0.45821529,  0.12796282,  0.27650773,  0.13731415]])

score(X, y, sample_weight=None): outputs the mean prediction accuracy on the test samples

In [3]: clf.score([[3,4,5,4],[1,3,5,6]],[1,1])
Out[3]: 0.5


set_params(**params): sets the estimator's parameters

In [4]: clf.set_params(alpha=1.0)
Out[4]: MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)
3. Bernoulli naive Bayes: sklearn.naive_bayes.BernoulliNB(alpha=1.0, binarize=0.0, fit_prior=True, class_prior=None). Like multinomial naive Bayes, it is mainly used for discrete features; the difference is that MultinomialNB uses occurrence counts as feature values, whereas BernoulliNB expects binary/boolean features.
Parameters:
binarize: the threshold for binarizing the feature values
① Build a simple model with BernoulliNB
In [5]: import numpy as np
...: from sklearn.naive_bayes import BernoulliNB
...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5]])
...: y = np.array([1,1,2])
...: clf = BernoulliNB(alpha=2.0,binarize = 3.0,fit_prior=True)
...: clf.fit(X,y)
...:
Out[5]: BernoulliNB(alpha=2.0, binarize=3.0, class_prior=None, fit_prior=True)


After binarization with binarize=3.0, the input is equivalent to the array
In [7]: X = np.array([[0,0,0,1],[0,0,1,1],[0,1,1,1]])

In [8]: X
Out[8]:
array([[0, 0, 0, 1],
       [0, 0, 1, 1],
       [0, 1, 1, 1]])
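A quick way to confirm this equivalence (a minimal sketch): fitting on X with binarize=3.0 gives the same model as pre-binarizing by hand with X > 3.0 and disabling the threshold:

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

X = np.array([[1, 2, 3, 4], [1, 3, 4, 4], [2, 4, 5, 5]])
y = np.array([1, 1, 2])

clf = BernoulliNB(alpha=2.0, binarize=3.0).fit(X, y)           # internal thresholding
Xb = (X > 3.0).astype(float)                                   # manual binarization
clf_manual = BernoulliNB(alpha=2.0, binarize=None).fit(Xb, y)  # threshold disabled

print(np.allclose(clf.feature_log_prob_, clf_manual.feature_log_prob_))  # True
```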
② After training, inspect the fitted attributes

class_log_prior_: the log of each class's prior probability; the prior equals the class's sample count divided by the total number of samples

In [9]: clf.class_log_prior_
Out[9]: array([-0.40546511, -1.09861229])

feature_log_prob_: the log of the (smoothed) conditional probability of each feature given a class; an array of shape (n_classes, n_features)

In [10]: clf.feature_log_prob_
Out[10]:
array([[-1.09861229, -1.09861229, -0.69314718, -0.40546511],
       [-0.91629073, -0.51082562, -0.51082562, -0.51082562]])
The computation behind these numbers:
Call the four features A1..A4 and the two classes y1, y2. For a binarized feature Ai, the Bernoulli likelihood is P(Ai|y=y1) = P(Ai=1|y=y1)*Ai + P(Ai=0|y=y1)*(1-Ai); what feature_log_prob_ stores is log P(Ai=1|y=y1), smoothed as (count of samples in y1 with Ai=1 + alpha)/(number of samples in y1 + 2*alpha):
In [11]: import numpy as np
...: from sklearn.naive_bayes import BernoulliNB
...: X = np.array([[1,2,3,4],[1,3,4,4],[2,4,5,5]])
...: y = np.array([1,1,2])
...: clf = BernoulliNB(alpha=2.0,binarize = 3.0,fit_prior=True)
...: clf.fit(X,y)
...: print(clf.feature_log_prob_)
...: print([np.log((2+2)/(2+2*2))*0+np.log((0+2)/(2+2*2))*1,np.log((2+2)/(2
...: +2*2))*0+np.log((0+2)/(2+2*2))*1,np.log((1+2)/(2+2*2))*0+np.log((1+2)/
...: (2+2*2))*1,np.log((0+2)/(2+2*2))*0+np.log((2+2)/(2+2*2))*1])
...:
[[-1.09861229 -1.09861229 -0.69314718 -0.40546511]
 [-0.91629073 -0.51082562 -0.51082562 -0.51082562]]
[-1.0986122886681098, -1.0986122886681098, -0.69314718055994529, -0.40546510810816444]
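The same check in vectorized form (a minimal sketch): each entry of feature_log_prob_ is log((count of the binarized feature being 1 in the class + alpha) / (samples in the class + 2*alpha)):

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

X = np.array([[1, 2, 3, 4], [1, 3, 4, 4], [2, 4, 5, 5]])
y = np.array([1, 1, 2])
clf = BernoulliNB(alpha=2.0, binarize=3.0).fit(X, y)

alpha = 2.0
# feature_count_ holds per-class counts of each binarized feature being 1;
# the Bernoulli denominator is (samples in the class + 2*alpha)
smoothed = (clf.feature_count_ + alpha) / (clf.class_count_[:, None] + 2 * alpha)
print(np.allclose(np.log(smoothed), clf.feature_log_prob_))  # True
```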

class_count_: the number of samples in each class, in class order

In [12]: clf.class_count_
Out[12]: array([ 2.,  1.])

feature_count_: the per-class sum of each (binarized) feature value, in class order; an array of shape (n_classes, n_features)

In [13]: clf.feature_count_
Out[13]:
array([[ 0.,  0.,  1.,  2.],
       [ 0.,  1.,  1.,  1.]])
③ Methods: similar to those of MultinomialNB