
[EverString] A Brief Introduction to Classification Evaluation Methods in Machine Learning - 1

2016-05-13
Reposted from: http://blog.csdn.net/xiongtao00/article/details/51333197

----------


1. Classification problem

Classification is a kind of supervised learning, which leverages labeled data (the ground truth) to guide model training. Unlike regression, the label here is categorical (qualitative) rather than numeric (quantitative).

The most general classification is multi-label classification. It means there are multiple classes in the ground truth (class means label’s value) and one data point (i.e. observation or record) can belong to more than one class.

More specifically, if we restrict each data point to belong to only one class, it becomes multi-class classification (or multinomial classification). In most practical cases, we encounter multi-class classification.

Moreover, if we restrain that there are only two classes in the ground truth, then it becomes binary classification, which is the most common case.

The above definitions can be summarized by the following table:
| type of classification | multiple classes (in the ground truth) | two classes (in the ground truth) |
|---|---|---|
| multiple labels (one record has) | multi-label classification | |
| single label (one record has) | multi-class classification | binary classification |
Binary classification is simple and popular, since we usually encounter detection problems that determine whether a signal exists or not, e.g. in face detection, whether a face exists in an image, or in lead generation,
whether a company is qualified as a lead. Therefore, we start our introduction from binary classification.


2. Metrics for binary classification


2.1. Binary classification

In binary classification, given a data point x with label y (y ∈ {0,1}), the classifier scores the data point as f (we assume f ∈ [0,1]). By comparing f to a threshold t, we get the predicted label ŷ: if f ≥ t, then ŷ = 1 (positive); otherwise ŷ = 0 (negative).

For a set of N data points X with labels y, the corresponding predicted scores and labels are f and ŷ, respectively.
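The thresholding step can be sketched in a few lines of NumPy (the function name `predict_labels` and the scores below are illustrative, not from the original article):

```python
import numpy as np

# Threshold t turns a score f in [0, 1] into a predicted label:
# y_hat = 1 if f >= t, else 0.
def predict_labels(f, t=0.5):
    return (np.asarray(f) >= t).astype(int)

scores = np.array([0.96, 0.62, 0.45, 0.13])  # illustrative scores
print(predict_labels(scores))                # [1 1 0 0]
```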


2.2. Examples

Here we illustrate three example label vectors that we try to predict, i.e. y1, y2, y3. We assume 10 data points i = 0…9 whose scores fi are listed in descending order. To get ŷ, we use 0.5 as the threshold.

| i | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|
| y1,i | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
| y2,i | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 |
| y3,i | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 |
| fi | 0.96 | 0.91 | 0.75 | 0.62 | 0.58 | 0.52 | 0.45 | 0.28 | 0.17 | 0.13 |
| ŷi | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 |

From the data above, we can see that fi predicts y1,i the best, because the high scores almost exactly coincide with y1,i = 1; by contrast, fi predicts y3,i poorly.


2.3. Metrics based on labels and predicted labels

Based on labels, the data points can be divided into positive and negative. Based on predicted labels, the data points can be divided into predicted positive and negative. We make the following basic definitions:

- P: the number of positive points, i.e. #(y=1)
- N: the number of negative points, i.e. #(y=0)
- P̂: the number of predicted positive points, i.e. #(ŷ=1)
- N̂: the number of predicted negative points, i.e. #(ŷ=0)
- TP: the number of predicted positive points that are actually positive, i.e. #(y=1, ŷ=1) (aka True Positive, Hit)
- FP: the number of predicted positive points that are actually negative, i.e. #(y=0, ŷ=1) (aka False Positive, False Alarm, Type I Error)
- TN: the number of predicted negative points that are actually negative, i.e. #(y=0, ŷ=0) (aka True Negative, Correct Rejection)
- FN: the number of predicted negative points that are actually positive, i.e. #(y=1, ŷ=0) (aka False Negative, Miss, Type II Error)

The above definitions can be summarized as the following confusion matrix:

| confusion matrix | predicted positive (ŷ=1) | predicted negative (ŷ=0) |
|---|---|---|
| positive (y=1) | TP | FN |
| negative (y=0) | FP | TN |
Based on TP, FP, TN, FN, we can define the following metrics:

- Recall: recall = TP / P = TP / (TP + FN), the percentage of positive points that are predicted as positive (aka Hit Rate, Sensitivity, True Positive Rate, TPR)
- Precision: precision = TP / P̂ = TP / (TP + FP), the percentage of predicted positive points that are actually positive (aka Positive Predictive Value, PPV)
- False Alarm Rate: fa = FP / N = FP / (FP + TN), the percentage of negative points that are predicted as positive (aka False Positive Rate, FPR)
- F score: f1 = 2 · precision · recall / (precision + recall), the harmonic mean of precision and recall. It is the special case of the fβ score with β = 1 (aka F1 Score)
- Accuracy: accuracy = (TP + TN) / (P + N), the percentage of correctly predicted points out of all points
- Matthews Correlation Coefficient: MCC = (TP · TN - FP · FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))
- Mean Consequential Error: MCE = (1/n) · Σ_{yi ≠ ŷi} 1 = (FP + FN) / (P + N) = 1 - accuracy
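As a sketch, the confusion counts and derived metrics can be computed directly from a pair of label vectors; the helper name `binary_metrics` is hypothetical, and degenerate zero denominators are not handled. The example reuses the labels of example 1 at threshold 0.5:

```python
import numpy as np

def binary_metrics(y, y_hat):
    """Confusion counts and the derived metrics defined above
    (assumes no denominator is zero)."""
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    tp = int(np.sum((y == 1) & (y_hat == 1)))
    fp = int(np.sum((y == 0) & (y_hat == 1)))
    tn = int(np.sum((y == 0) & (y_hat == 0)))
    fn = int(np.sum((y == 1) & (y_hat == 0)))
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    fa = fp / (fp + tn)
    accuracy = (tp + tn) / len(y)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return dict(TP=tp, FP=fp, TN=tn, FN=fn, recall=recall,
                precision=precision, f1=f1, fa=fa,
                accuracy=accuracy, MCC=float(mcc), MCE=1 - accuracy)

# Example 1 at threshold 0.5: y1 vs. its predicted labels.
m = binary_metrics([1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
                   [1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
print(m["recall"], round(m["precision"], 3))  # 1.0 0.833
```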

For the above three examples, these metrics are calculated as follows:

| Example | TP | FP | TN | FN | recall | precision | f1 | fa | accuracy | MCC | MCE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 5 | 1 | 4 | 0 | 1 | 0.833 | 0.909 | 0.2 | 0.9 | 0.817 | 0.1 |
| 2 | 3 | 3 | 2 | 2 | 0.6 | 0.5 | 0.545 | 0.6 | 0.5 | 0 | 0.5 |
| 3 | 1 | 5 | 0 | 4 | 0.2 | 0.167 | 0.182 | 1 | 0.1 | -0.817 | 0.9 |
The following table lists the value range of each metric:

| | TP | FP | TN | FN | recall | precision | f1 | fa | accuracy | MCC | MCE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Range | [0,P] | [0,N] | [0,N] | [0,P] | [0,1] | [0,1] | [0,1] | [0,1] | [0,1] | [-1,1] | [0,1] |
| Best | P | 0 | N | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 0 |
| Worst* | 0 | N | 0 | P | 0 | 0 | 0 | 1 | 0 | -1 | 1 |
* There are different ways to define the worst case. For example, a prediction that equals a random guess can be considered worst, since it provides no useful information. Here we define the worst case as a prediction that is totally opposite to the ground truth.


2.4. Metrics based on labels and predicted scores

The above metrics depend on the threshold: if the threshold varies, they vary accordingly. To build threshold-free measurements, we can define metrics based on labels and predicted scores (instead of predicted labels).

The rationale is that we can measure the overall performance when the threshold goes through its value range. Here we introduce 2 curves:

- ROC Curve: recall (y-axis) vs. fa (x-axis) as the threshold varies
- Precision-Recall Curve: precision (y-axis) vs. recall (x-axis) as the threshold varies

Then we can define the following metrics:

- AUC: area under the ROC (Receiver Operating Characteristic) curve
- Average Precision: area under the precision-recall curve
- Precision-Recall Breakeven Point: precision (or recall, or f1 score) at the point where precision = recall
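A useful equivalent view of AUC: it is the probability that a randomly chosen positive point scores higher than a randomly chosen negative point (ties counted as 0.5). A brute-force sketch over all positive-negative pairs (the helper name `auc_by_pairs` is illustrative; the data are the three example label vectors):

```python
from itertools import product

def auc_by_pairs(y, f):
    """AUC as the fraction of positive-negative pairs ranked correctly
    (ties count as 0.5). O(P*N), fine for small examples."""
    pos = [s for label, s in zip(y, f) if label == 1]
    neg = [s for label, s in zip(y, f) if label == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

f  = [0.96, 0.91, 0.75, 0.62, 0.58, 0.52, 0.45, 0.28, 0.17, 0.13]
y1 = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # perfectly ranked
y3 = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]  # inversely ranked
print(auc_by_pairs(y1, f))  # 1.0
print(auc_by_pairs(y3, f))  # 0.0
```

Note that this step-wise value can differ slightly from a trapezoidal estimate of the area under the ROC curve.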

To describe the characteristics of these curves and metrics, let’s do some case study first.

Example 1:

| index | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|
| fi | 0.96 | 0.91 | 0.75 | 0.62 | 0.58 | 0.52 | 0.45 | 0.28 | 0.17 | 0.13 |
| y1,i | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 |

| threshold range | (0.96,1] | (0.91,0.96] | (0.75,0.91] | (0.62,0.75] | (0.58,0.62] | (0.52,0.58] | (0.45,0.52] | (0.28,0.45] | (0.17,0.28] | (0.13,0.17] | [0,0.13] |
|---|---|---|---|---|---|---|---|---|---|---|---|
| TP | 0 | 1 | 2 | 3 | 4 | 5 | 5 | 5 | 5 | 5 | 5 |
| FP | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 2 | 3 | 4 | 5 |
| TN | 5 | 5 | 5 | 5 | 5 | 5 | 4 | 3 | 2 | 1 | 0 |
| FN | 5 | 4 | 3 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| recall | 0 | 0.2 | 0.4 | 0.6 | 0.8 | 1 | 1 | 1 | 1 | 1 | 1 |
| precision | NaN | 1 | 1 | 1 | 1 | 1 | 0.833 | 0.714 | 0.625 | 0.556 | 0.5 |
| f1 | NaN | 0.333 | 0.571 | 0.75 | 0.889 | 1 | 0.909 | 0.833 | 0.769 | 0.714 | 0.667 |
| fa | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0.4 | 0.6 | 0.8 | 1 |
| accuracy | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1 | 0.9 | 0.8 | 0.7 | 0.6 | 0.5 |
| MCC | NaN | 0.333 | 0.5 | 0.655 | 0.816 | 1 | 0.816 | 0.655 | 0.5 | 0.333 | NaN |
| MCE | 0.5 | 0.4 | 0.3 | 0.2 | 0.1 | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
Example 2:

| index | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|
| fi | 0.96 | 0.91 | 0.75 | 0.62 | 0.58 | 0.52 | 0.45 | 0.28 | 0.17 | 0.13 |
| y2,i | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 |

| threshold range | (0.96,1] | (0.91,0.96] | (0.75,0.91] | (0.62,0.75] | (0.58,0.62] | (0.52,0.58] | (0.45,0.52] | (0.28,0.45] | (0.17,0.28] | (0.13,0.17] | [0,0.13] |
|---|---|---|---|---|---|---|---|---|---|---|---|
| TP | 0 | 1 | 1 | 2 | 2 | 3 | 3 | 3 | 4 | 5 | 5 |
| FP | 0 | 0 | 1 | 1 | 2 | 2 | 3 | 4 | 4 | 4 | 5 |
| TN | 5 | 5 | 4 | 4 | 3 | 3 | 2 | 1 | 1 | 1 | 0 |
| FN | 5 | 4 | 4 | 3 | 3 | 2 | 2 | 2 | 1 | 0 | 0 |
| recall | 0 | 0.2 | 0.2 | 0.4 | 0.4 | 0.6 | 0.6 | 0.6 | 0.8 | 1 | 1 |
| precision | NaN | 1 | 0.5 | 0.667 | 0.5 | 0.6 | 0.5 | 0.429 | 0.5 | 0.556 | 0.5 |
| f1 | NaN | 0.333 | 0.286 | 0.5 | 0.444 | 0.6 | 0.545 | 0.5 | 0.615 | 0.714 | 0.667 |
| fa | 0 | 0 | 0.2 | 0.2 | 0.4 | 0.4 | 0.6 | 0.8 | 0.8 | 0.8 | 1 |
| accuracy | 0.5 | 0.6 | 0.5 | 0.6 | 0.5 | 0.6 | 0.5 | 0.4 | 0.5 | 0.6 | 0.5 |
| MCC | NaN | 0.333 | 0 | 0.218 | 0 | 0.2 | 0 | -0.218 | 0 | 0.333 | NaN |
| MCE | 0.5 | 0.4 | 0.5 | 0.4 | 0.5 | 0.4 | 0.5 | 0.6 | 0.5 | 0.4 | 0.5 |
Example 3:

| index | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|
| fi | 0.96 | 0.91 | 0.75 | 0.62 | 0.58 | 0.52 | 0.45 | 0.28 | 0.17 | 0.13 |
| y3,i | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 |

| threshold range | (0.96,1] | (0.91,0.96] | (0.75,0.91] | (0.62,0.75] | (0.58,0.62] | (0.52,0.58] | (0.45,0.52] | (0.28,0.45] | (0.17,0.28] | (0.13,0.17] | [0,0.13] |
|---|---|---|---|---|---|---|---|---|---|---|---|
| TP | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 2 | 3 | 4 | 5 |
| FP | 0 | 1 | 2 | 3 | 4 | 5 | 5 | 5 | 5 | 5 | 5 |
| TN | 5 | 4 | 3 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| FN | 5 | 5 | 5 | 5 | 5 | 5 | 4 | 3 | 2 | 1 | 0 |
| recall | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0.4 | 0.6 | 0.8 | 1 |
| precision | NaN | 0 | 0 | 0 | 0 | 0 | 0.167 | 0.286 | 0.375 | 0.444 | 0.5 |
| f1 | NaN | NaN | NaN | NaN | NaN | NaN | 0.182 | 0.333 | 0.462 | 0.571 | 0.667 |
| fa | 0 | 0.2 | 0.4 | 0.6 | 0.8 | 1 | 1 | 1 | 1 | 1 | 1 |
| accuracy | 0.5 | 0.4 | 0.3 | 0.2 | 0.1 | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
| MCC | NaN | -0.333 | -0.5 | -0.655 | -0.816 | -1 | -0.816 | -0.655 | -0.5 | -0.333 | NaN |
| MCE | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1 | 0.9 | 0.8 | 0.7 | 0.6 | 0.5 |
The above three tables list the calculation details of recall, precision, fa and the other metrics under different thresholds for the three examples. The corresponding ROC curves and precision-recall curves are plotted as follows:

[Figures: ROC curves and precision-recall curves for the three examples]
From the above figures, we can summarize the characteristics of the precision-recall curve:

- the curve is usually not monotonic
- sometimes the curve is undefined at recall = 0, since precision is NaN there (this happens when the highest-scoring data point is positive)
- usually, as recall increases, precision decreases with fluctuation
- the curve intersects the line precision = recall
- in the ideal case, the area under the curve is 1

The characteristics of the ROC curve can be summarized as follows:

- the curve is always monotonic (flat or increasing)
- in the best case (every positive data point scores higher than every negative data point), the area under the curve is 1
- in the worst case (every positive data point scores lower than every negative data point), the area under the curve is 0
- in the random case (random scoring), the area under the curve is 0.5
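The ROC curve can be traced by sweeping the threshold over the observed scores, exactly as in the tables above; a minimal sketch using example 1 (the helper name `roc_points` is hypothetical):

```python
import numpy as np

def roc_points(y, f):
    """ROC points (fa, recall) obtained by sweeping the threshold
    over the distinct observed scores, highest first."""
    y, f = np.asarray(y), np.asarray(f)
    P = int(y.sum())
    N = len(y) - P
    points = []
    for t in sorted(set(f.tolist()), reverse=True):
        y_hat = (f >= t).astype(int)
        tp = int(np.sum((y == 1) & (y_hat == 1)))
        fp = int(np.sum((y == 0) & (y_hat == 1)))
        points.append((fp / N, tp / P))
    return points

f  = [0.96, 0.91, 0.75, 0.62, 0.58, 0.52, 0.45, 0.28, 0.17, 0.13]
y1 = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
pts = roc_points(y1, f)
print(pts[0], pts[-1])  # (0.0, 0.2) (1.0, 1.0)
```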

The following table summarizes auc, average precision and the breakeven point for the three examples:

| Example | auc | average precision | breakeven point |
|---|---|---|---|
| 1 | 1.000 | 1.000 | 1.000 |
| 2 | 0.565 | 0.467 | 0.600 |
| 3 | 0.000 | 0.304 | 0.000 |
The following table lists the value range of each metric:

| | auc | average precision | breakeven point |
|---|---|---|---|
| Range | [0,1] | [0,1] | [0,1] |
| Best | 1 | 1 | 1 |
| Worst | 0 | 0* | 0 |

* Only in the case that there are no positive samples can average precision reach 0.
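The breakeven point can be approximated by sweeping the threshold and taking precision where it is closest to recall; a rough sketch (the helper name `breakeven` is illustrative, and interpolation between thresholds is ignored):

```python
import numpy as np

def breakeven(y, f):
    """Precision at the threshold where precision and recall are
    closest (exact equality may not occur on a finite sweep)."""
    y, f = np.asarray(y), np.asarray(f)
    best, gap = 0.0, float("inf")
    for t in sorted(set(f.tolist()), reverse=True):
        y_hat = (f >= t).astype(int)
        tp = int(np.sum((y == 1) & (y_hat == 1)))
        prec = tp / max(int(y_hat.sum()), 1)
        rec = tp / int(y.sum())
        if abs(prec - rec) < gap:
            gap, best = abs(prec - rec), prec
    return float(best)

f  = [0.96, 0.91, 0.75, 0.62, 0.58, 0.52, 0.45, 0.28, 0.17, 0.13]
y1 = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(breakeven(y1, f))  # 1.0
```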


2.5. Metrics selection

We have introduced several metrics to measure the performance of binary classification. In a practical case, what metrics should we adopt and what metrics should be avoided?

Usually, there are two cases we can encounter: balanced and unbalanced.

- In the balanced case, the number of positive samples is close to the number of negative samples.
- In the unbalanced case, the numbers of positive and negative samples differ by orders of magnitude.

In practice, the number of positives is usually smaller than the number of negatives.

The conclusions are:

- In the balanced case, all the above metrics can be used.
- In the unbalanced case, precision, recall, f1 score, average precision and the breakeven point are preferred over fa, accuracy, MCC, MCE, auc and ATOP*.

*ATOP is another metric, similar to AUC, that also cares about the ordering of positive and negative data points.

The main reasons are:

- precision, recall, f1 score, average precision and the breakeven point focus on the correctness of the positive samples (related to TP, but not TN)
- fa, accuracy, MCC, MCE, auc and ATOP are related to the correctness of the negative samples (TN)

In the unbalanced case, TN is usually huge compared to TP. Therefore fa ≈ 0, accuracy ≈ 1, MCC ≈ 1, MCE ≈ 0, auc ≈ 1 and ATOP ≈ 1. However, these "amazing" values don't mean the model is good.

Let's consider the following 4 examples:
```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def atop(y_sorted):
    # y_sorted: labels sorted by descending score
    num = len(y_sorted)
    index = np.arange(num)
    return 1 - float(np.sum(y_sorted * index)) / np.sum(y_sorted) / num

y1 = np.array([1,0,1,0,1,0,0,1,1,0] + [0] * 10)
f1 = np.array([i / float(len(y1)) for i in range(len(y1), 0, -1)])
auc1 = roc_auc_score(y1, f1)
ap1 = average_precision_score(y1, f1)
atop1 = atop(y1)
print(auc1, atop1, ap1)

y2 = np.array([1,0,1,0,1,0,0,1,1,0] + [0] * 990)
f2 = np.array([i / float(len(y2)) for i in range(len(y2), 0, -1)])
auc2 = roc_auc_score(y2, f2)
atop2 = atop(y2)
ap2 = average_precision_score(y2, f2)
print(auc2, atop2, ap2)

y3 = np.array([0] * 100 + [1] * 100 + [0] * 999800)
f3 = np.array([i / float(len(y3)) for i in range(len(y3), 0, -1)])
auc3 = roc_auc_score(y3, f3)
atop3 = atop(y3)
ap3 = average_precision_score(y3, f3)
print(auc3, atop3, ap3)

y4 = np.array([1] * 100 + [0] * 999900)
f4 = np.array([i / float(len(y4)) for i in range(len(y4), 0, -1)])
auc4 = roc_auc_score(y4, f4)
atop4 = atop(y4)
ap4 = average_precision_score(y4, f4)
print(auc4, atop4, ap4)
```

| Example | auc | atop | average precision |
|---|---|---|---|
| 1 | 0.85333 | 0.79000 | 0.62508 |
| 2 | 0.99779 | 0.99580 | 0.62508 |
| 3 | 0.99990 | 0.99985 | 0.30685 |
| 4 | 1.00000 | 0.99995 | 1.00000 |
Consider examples 1 and 2. The former is balanced and the latter is unbalanced, and the positive samples appear in the same order in both. However:

- auc and atop change a lot (0.85333 vs. 0.99779, 0.79000 vs. 0.99580); the more unbalanced the case, the higher auc and atop tend to be
- average precision is the same in both examples (0.62508 vs. 0.62508)

Consider examples 3 and 4. Both are extremely unbalanced. The difference is that the positive samples in example 3 occupy positions 100-199, while in example 4 they occupy positions 0-99. However:

- auc and atop are nearly the same in both cases (0.99990 vs. 1.00000, 0.99985 vs. 0.99995)
- average precision clearly distinguishes the two cases (0.30685 vs. 1.00000)

-------

About the author: Xiong Tao (熊涛) is a senior data scientist, head of EverString's data science team in China, and a former data science lead at Hulu.
