nltk.metrics.confusionmatrix module¶
- class nltk.metrics.confusionmatrix.ConfusionMatrix(reference, test, sort_by_count=False)[source]¶
Bases: object
The confusion matrix between a list of reference values and a corresponding list of test values. Entry [r,t] of this matrix is a count of the number of times that the reference value r corresponds to the test value t. E.g.:
>>> from nltk.metrics import ConfusionMatrix
>>> ref = 'DET NN VB DET JJ NN NN IN DET NN'.split()
>>> test = 'DET VB VB DET NN NN NN IN DET NN'.split()
>>> cm = ConfusionMatrix(ref, test)
>>> print(cm['NN', 'NN'])
3
Note that the diagonal entries (Ri = Tj) of this matrix correspond to correct values, and the off-diagonal entries correspond to incorrect values.
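To make the counting concrete, the same [r, t] entry can be reproduced with a pure-Python sketch using collections.Counter over the aligned pairs (an illustration only, not how ConfusionMatrix is implemented internally):
>>> from collections import Counter
>>> ref = 'DET NN VB DET JJ NN NN IN DET NN'.split()
>>> test = 'DET VB VB DET NN NN NN IN DET NN'.split()
>>> pairs = Counter(zip(ref, test))  # (reference, test) pairs
>>> pairs[('NN', 'NN')]  # same count as cm['NN', 'NN']
3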
- __init__(reference, test, sort_by_count=False)[source]¶
Construct a new confusion matrix from a list of reference values and a corresponding list of test values.
- Parameters:
reference (list) – An ordered list of reference values.
test (list) – A list of values to compare against the corresponding reference values.
- Raises:
ValueError – If reference and test do not have the same length.
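For instance, a length mismatch raises ValueError (the exact error message may vary by NLTK version, so it is caught rather than shown):
>>> from nltk.metrics import ConfusionMatrix
>>> try:
...     ConfusionMatrix(['DET', 'NN'], ['DET'])
... except ValueError:
...     print('lists differ in length')
lists differ in length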
- evaluate(alpha=0.5, truncate=None, sort_by_count=False)[source]¶
Tabulate the recall, precision and f-measure for each value in this confusion matrix.
>>> reference = "DET NN VB DET JJ NN NN IN DET NN".split()
>>> test = "DET VB VB DET NN NN NN IN DET NN".split()
>>> cm = ConfusionMatrix(reference, test)
>>> print(cm.evaluate())
Tag | Prec.  | Recall | F-measure
----+--------+--------+-----------
DET | 1.0000 | 1.0000 | 1.0000
 IN | 1.0000 | 1.0000 | 1.0000
 JJ | 0.0000 | 0.0000 | 0.0000
 NN | 0.7500 | 0.7500 | 0.7500
 VB | 0.5000 | 1.0000 | 0.6667
- Parameters:
alpha (float) – Ratio of the cost of false negatives to false positives, as used in the f-measure computation. Defaults to 0.5, where the costs are equal.
truncate (int, optional) – If specified, then only show the specified number of values. Any sorting (e.g., sort_by_count) will be performed before truncation. Defaults to None.
sort_by_count (bool, optional) – Whether to sort the output by each label's frequency in the reference data. Defaults to False.
- Returns:
A tabulated recall, precision and f-measure string
- Return type:
str
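For example, sorting and truncation can be combined to report only the most frequent reference labels; since the table is returned as a string, this sketch only checks its type:
>>> table = cm.evaluate(sort_by_count=True, truncate=3)
>>> isinstance(table, str)
True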
- f_measure(value, alpha=0.5)[source]¶
Given a value used in the confusion matrix, return the f-measure that corresponds to this value. The f-measure is the harmonic mean of the precision and recall, weighted by alpha. In particular, given the precision p and recall r defined by:
p = true positive / (true positive + false positive)
r = true positive / (true positive + false negative)
the f-measure is:
1/(alpha/p + (1-alpha)/r)
With alpha = 0.5, this reduces to:
2pr / (p + r)
- Parameters:
value – value used in the ConfusionMatrix
alpha (float) – Ratio of the cost of false negatives to false positives. Defaults to 0.5, where the costs are equal.
- Returns:
the f-measure corresponding to value.
- Return type:
float
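With the cm built above, the result for 'NN' can be checked against the evaluate() table: precision and recall are both 0.75, so their harmonic mean is 0.75 as well:
>>> p = cm.precision('NN')  # 3 of the 4 NN predictions are correct
>>> r = cm.recall('NN')     # 3 of the 4 reference NNs are found
>>> (p, r)
(0.75, 0.75)
>>> round(cm.f_measure('NN'), 4)  # 2pr / (p + r) with alpha = 0.5
0.75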
- precision(value)[source]¶
Given a value in the confusion matrix, return the precision that corresponds to this value. The precision is defined as:
p = true positive / (true positive + false positive)
and can loosely be considered the ratio of how often value was predicted correctly, relative to the total number of predictions of value.
- Parameters:
value – value used in the ConfusionMatrix
- Returns:
the precision corresponding to value.
- Return type:
float
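In the running example, 'VB' is predicted twice but only one of those predictions is correct, matching the 0.5000 precision in the evaluate() table:
>>> cm.precision('VB')  # 1 correct out of 2 VB predictions
0.5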
- pretty_format(show_percents=False, values_in_chart=True, truncate=None, sort_by_count=False)[source]¶
- Returns:
A multi-line string representation of this confusion matrix.
- Parameters:
truncate (int) – If specified, then only show the specified number of values. Any sorting (e.g., sort_by_count) will be performed before truncation.
sort_by_count (bool) – If true, then sort by the count of each label in the reference data. I.e., labels that occur more frequently in the reference data will be towards the left edge of the matrix, and labels that occur less frequently will be towards the right edge.
@todo: add marginals?
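A typical call combining the options above (the exact chart layout varies across NLTK versions, so this sketch only checks the return type):
>>> chart = cm.pretty_format(show_percents=True, sort_by_count=True, truncate=3)
>>> isinstance(chart, str)
True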
- recall(value)[source]¶
Given a value in the confusion matrix, return the recall that corresponds to this value. The recall is defined as:
r = true positive / (true positive + false negative)
and can loosely be considered the ratio of how often value was predicted correctly, relative to how often value was the true result.
- Parameters:
value – value used in the ConfusionMatrix
- Returns:
the recall corresponding to value.
- Return type:
float
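Closing the running example: 'VB' appears once in the reference and that occurrence is predicted correctly, matching the 1.0000 recall in the evaluate() table:
>>> cm.recall('VB')  # the single reference VB is found
1.0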