
Precision, Recall, and F1 in TensorFlow

2017-02-28 20:40
I had been searching for how to compute precision and recall. Since I was used to just calling library functions, I assumed TensorFlow already shipped ready-made methods for this, but I never found any; apparently I had grown too lazy and lost the urge to write things myself. At the address below I came across code someone else had written. Even though it cannot be reused as-is, it is a valuable source of ideas: it uses tf.argmax to reduce one-hot labels and predictions to class indices, counts TP, TN, FP, and FN with elementwise comparisons, and derives the metrics from those four counts.

https://gist.github.com/Mistobaan/337222ac3acbfc00bdac

import tensorflow as tf

def tf_confusion_metrics(model, actual_classes, session, feed_dict):
    # Reduce one-hot vectors to class indices (binary: 0 or 1).
    predictions = tf.argmax(model, 1)
    actuals = tf.argmax(actual_classes, 1)

    ones_like_actuals = tf.ones_like(actuals)
    zeros_like_actuals = tf.zeros_like(actuals)
    ones_like_predictions = tf.ones_like(predictions)
    zeros_like_predictions = tf.zeros_like(predictions)

    # True positives: actual 1, predicted 1.
    tp_op = tf.reduce_sum(
        tf.cast(
            tf.logical_and(
                tf.equal(actuals, ones_like_actuals),
                tf.equal(predictions, ones_like_predictions)
            ),
            "float"
        )
    )

    # True negatives: actual 0, predicted 0.
    tn_op = tf.reduce_sum(
        tf.cast(
            tf.logical_and(
                tf.equal(actuals, zeros_like_actuals),
                tf.equal(predictions, zeros_like_predictions)
            ),
            "float"
        )
    )

    # False positives: actual 0, predicted 1.
    fp_op = tf.reduce_sum(
        tf.cast(
            tf.logical_and(
                tf.equal(actuals, zeros_like_actuals),
                tf.equal(predictions, ones_like_predictions)
            ),
            "float"
        )
    )

    # False negatives: actual 1, predicted 0.
    fn_op = tf.reduce_sum(
        tf.cast(
            tf.logical_and(
                tf.equal(actuals, ones_like_actuals),
                tf.equal(predictions, zeros_like_predictions)
            ),
            "float"
        )
    )

    tp, tn, fp, fn = session.run(
        [tp_op, tn_op, fp_op, fn_op],
        feed_dict
    )

    # TPR = TP / (TP + FN); FPR = FP / (FP + TN).
    tpr = float(tp) / (float(tp) + float(fn))
    fpr = float(fp) / (float(fp) + float(tn))

    accuracy = (float(tp) + float(tn)) / (float(tp) + float(fp) + float(fn) + float(tn))

    recall = tpr
    precision = float(tp) / (float(tp) + float(fp))

    f1_score = (2 * (precision * recall)) / (precision + recall)

    print('Precision =', precision)
    print('Recall =', recall)
    print('F1 Score =', f1_score)
    print('Accuracy =', accuracy)
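
A minimal usage sketch (TF 1.x), with hand-built per-class scores and one-hot labels chosen to yield TP=2, TN=2, FP=1, FN=1; the scores and labels placeholder names are mine, not part of the gist:

    scores = tf.placeholder(tf.float32, [None, 2])  # per-class scores from any model
    labels = tf.placeholder(tf.float32, [None, 2])  # one-hot ground truth

    with tf.Session() as sess:
        s = [[0.1, 0.9], [0.2, 0.8], [0.7, 0.3], [0.4, 0.6], [0.9, 0.1], [0.8, 0.2]]
        y = [[0, 1], [0, 1], [0, 1], [1, 0], [1, 0], [1, 0]]
        # Expected output: Precision = Recall = F1 = 2/3, Accuracy = 2/3.
        tf_confusion_metrics(scores, labels, sess, feed_dict={scores: s, labels: y})

Note that precision and F1 divide by zero when the model predicts no positives at all (tp + fp = 0), so a guard is needed before trusting the printed numbers on degenerate batches.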
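As an aside on the original question: TensorFlow 1.x does ship streaming metric ops under tf.metrics, including tf.metrics.precision and tf.metrics.recall (there is no built-in F1). Each returns a (value, update_op) pair backed by local-variable counters; a rough sketch, reusing the hypothetical scores/labels placeholders and data from above:

    prec, prec_update = tf.metrics.precision(labels=tf.argmax(labels, 1),
                                             predictions=tf.argmax(scores, 1))
    rec, rec_update = tf.metrics.recall(labels=tf.argmax(labels, 1),
                                        predictions=tf.argmax(scores, 1))

    with tf.Session() as sess:
        sess.run(tf.local_variables_initializer())  # metric counters live in local variables
        sess.run([prec_update, rec_update], feed_dict={scores: s, labels: y})
        p, r = sess.run([prec, rec])
        print('Precision =', p, 'Recall =', r, 'F1 =', 2 * p * r / (p + r))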