An N×N table that summarizes how successful a **classification model's** predictions were; that is, how well the model's classifications matched the actual labels. One axis of a confusion matrix is the **label** that the model predicted, and the other axis is the actual label. N represents the number of **classes**. In a **binary classification** problem, N=2. For example, here is a sample confusion matrix for a binary classification problem:

| | Tumor (predicted) | Non-Tumor (predicted) |
|---|---|---|
| Tumor (actual) | 18 | 1 |
| Non-Tumor (actual) | 6 | 452 |

The preceding confusion matrix shows that of the 19 samples that actually had tumors, the model correctly classified 18 as having tumors (18 **true positives**), and incorrectly classified 1 as not having a tumor (1 **false negative**). Similarly, of 458 samples that actually did not have tumors, 452 were correctly classified (452 **true negatives**) and 6 were incorrectly classified (6 **false positives**).
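A confusion matrix is just a table of counts over (actual, predicted) label pairs. The following sketch builds the matrix above from such pairs; the pair counts are taken from the table, while the variable names and the use of `collections.Counter` are illustrative choices, not part of any particular library's API:

```python
from collections import Counter

# Hypothetical (actual, predicted) label pairs; in practice these come
# from running the model on a labeled evaluation set. The counts below
# match the tumor example in the table above.
pairs = (
    [("tumor", "tumor")] * 18          # true positives
    + [("tumor", "non-tumor")] * 1     # false negative
    + [("non-tumor", "tumor")] * 6     # false positives
    + [("non-tumor", "non-tumor")] * 452  # true negatives
)

# The confusion matrix: maps (actual, predicted) -> count.
matrix = Counter(pairs)

print(matrix[("tumor", "tumor")])          # 18
print(matrix[("non-tumor", "non-tumor")])  # 452
```

Each cell of the table corresponds to one key of `matrix`; the diagonal cells (where actual equals predicted) are the correct classifications.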

The confusion matrix for a **multi-class classification** problem can help you determine mistake patterns. For example, a confusion matrix could reveal that a model trained to recognize handwritten digits tends to mistakenly predict 9 instead of 4, or 1 instead of 7.
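One way to surface such mistake patterns is to scan the off-diagonal cells for the largest count. The sketch below assumes a hypothetical 10×10 digit matrix (indexed as `matrix[actual][predicted]`) with only a few nonzero cells filled in for illustration:

```python
# Hypothetical 10x10 confusion matrix for a digit classifier,
# indexed as matrix[actual][predicted]. Only two mistake cells are
# filled in; a real matrix would be populated from evaluation data.
matrix = [[0] * 10 for _ in range(10)]
matrix[4][9] = 12  # twelve 4s misclassified as 9s
matrix[7][1] = 8   # eight 7s misclassified as 1s

# Find the off-diagonal cell (actual != predicted) with the most errors.
worst = max(
    ((a, p) for a in range(10) for p in range(10) if a != p),
    key=lambda cell: matrix[cell[0]][cell[1]],
)
print(worst)  # (4, 9): the model most often mistakes 4s for 9s
```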

Confusion matrices contain sufficient information to calculate a variety of performance metrics, including **precision** and **recall**.