Confusion matrix
| | Predicted Positive | Predicted Negative |
| --- | --- | --- |
| Actual Positive | TP | FN |
| Actual Negative | FP | TN |
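To make the four cells concrete, here is a minimal sketch (the labels and predictions are made up purely for illustration) that counts them by hand:

# Count the four confusion-matrix cells by hand (1 = Positive, 0 = Negative).
y_true = [1, 1, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # actually Positive, predicted Positive
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # actually Positive, predicted Negative
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # actually Negative, predicted Positive
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # actually Negative, predicted Negative
print(tp, fn, fp, tn)  # -> 3 1 1 3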
precision, recall, and F1
\(precision=TP/(TP+FP)\)
\(recall=TP/(TP+FN)\)
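For example, with \(TP=3\), \(FP=2\), \(FN=1\) (the same counts as in the sklearn example further below):

\(precision = 3/(3+2) = 0.6\)

\(recall = 3/(3+1) = 0.75\)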
For an explanation of precision and recall, see the sklearn documentation. No exaggeration: the sklearn documentation really is an excellent machine-learning tutorial in its own right, concise, easy to follow, and full of code examples.
https://scikit-learn.org/stable/modules/model_evaluation.html#precision-recall-f-measure-metrics
An intuitive reading (quoting the sklearn docs):
Intuitively, precision is the ability of the classifier not to label as positive a sample that is negative, and recall is the ability of the classifier to find all the positive samples.
In plain terms: precision reflects how well the classifier avoids labelling negative samples as positive, while recall reflects how well it finds all of the positive samples.
Note that the two usually pull against each other: when precision goes up, recall tends to drop correspondingly. (You might object that a classifier which gets everything right has both equal to 1. True, but that is the ideal case; in practice, pushing recall up biases the model toward predicting the positive class, so more of the negative samples also end up classified as positive, which hurts precision.)
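To see the trade-off concretely, here is a minimal sketch (the scores and thresholds are made up for illustration): lowering the decision threshold turns more samples into positive predictions, which raises recall but lets more negatives slip in and lowers precision.

from sklearn.metrics import precision_score, recall_score

y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2]  # hypothetical model scores

# Strict threshold: few positive predictions -> high precision, lower recall.
pred_strict = [1 if s >= 0.65 else 0 for s in scores]
print(precision_score(y_true, pred_strict), recall_score(y_true, pred_strict))    # -> 1.0 0.666...

# Lenient threshold: more positive predictions -> recall rises, precision drops.
pred_lenient = [1 if s >= 0.35 else 0 for s in scores]
print(precision_score(y_true, pred_lenient), recall_score(y_true, pred_lenient))  # -> 0.75 1.0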
F1 is the harmonic mean of the two.
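Written out, the standard definition is

\(F1 = 2 \cdot precision \cdot recall/(precision+recall)\)

For the worked numbers above, \(F1 = 2 \cdot 0.6 \cdot 0.75/(0.6+0.75) \approx 0.667\), which matches the f1_score output below.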
Computation:
>>> from sklearn.metrics import recall_score, accuracy_score, f1_score
>>> prediction_ids = [0,0,0,1,1,1,1,1]
>>> label_ids = [0,0,1,0,1,0,1,1]
>>> accuracy_score(label_ids, prediction_ids)
0.625
>>> recall_score(label_ids, prediction_ids)
0.75
>>> recall_score(label_ids, prediction_ids, pos_label=0)
0.5
>>> f1_score(label_ids, prediction_ids)
0.6666666666666665
The pos_label argument of recall_score means positive label, i.e. which label value counts as Positive. It defaults to 1, so by default TP means: the true label is 1 and the prediction is 1 as well.
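The same argument is available on precision_score and f1_score. A small sketch (expected values given as comments; they follow from the counts in the example above): with 0 treated as the positive class, three samples are predicted as 0 and two of them really are 0, so precision for class 0 is 2/3.

from sklearn.metrics import precision_score, f1_score

prediction_ids = [0, 0, 0, 1, 1, 1, 1, 1]
label_ids = [0, 0, 1, 0, 1, 0, 1, 1]

print(precision_score(label_ids, prediction_ids, pos_label=0))  # 2/3 ≈ 0.667
print(f1_score(label_ids, prediction_ids, pos_label=0))         # harmonic mean of 2/3 and 0.5 ≈ 0.571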
Row and column layout of sklearn's confusion matrix
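Running sklearn's confusion_matrix on the same label_ids and prediction_ids from the session above (a minimal example; confusion_matrix lives in sklearn.metrics) gives:

>>> from sklearn.metrics import confusion_matrix
>>> confusion_matrix(label_ids, prediction_ids)
array([[2, 2],
       [1, 3]])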
The corresponding confusion matrix is:
| | Predicted 0 | Predicted 1 |
| --- | --- | --- |
| Actual 0 | 2 | 2 |
| Actual 1 | 1 | 3 |
In other words, the row index of each entry is the true label and the column index is the predicted label.
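For a binary problem this means the four cells can be unpacked in one line (a common sklearn idiom; the values follow from the matrix above):

>>> tn, fp, fn, tp = confusion_matrix(label_ids, prediction_ids).ravel()
>>> print(tn, fp, fn, tp)
2 2 1 3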