Matthews correlation coefficient
The Matthews correlation coefficient (MCC) or phi coefficient is used in machine learning as a measure of the quality of binary (two-class) classifications, introduced by biochemist Brian W. Matthews in 1975.[1] The MCC is defined identically to Pearson's phi coefficient, introduced by Karl Pearson,[2][3] also known as the Yule phi coefficient from its introduction by Udny Yule in 1912.[4] Despite these antecedents which predate Matthews's use by several decades, the term MCC is widely used in the field of bioinformatics and machine learning.
The coefficient takes into account true and false positives and negatives and is generally regarded as a balanced measure which can be used even if the classes are of very different sizes.[5] The MCC is in essence a correlation coefficient between the observed and predicted binary classifications; it returns a value between −1 and +1. A coefficient of +1 represents a perfect prediction, 0 a prediction no better than random, and −1 total disagreement between prediction and observation. However, when the MCC is not exactly −1, 0, or +1, it is not by itself a reliable indicator of how similar a predictor is to random guessing.[6] The MCC is closely related to the chi-square statistic for a 2×2 contingency table:

$$|\mathrm{MCC}| = \sqrt{\frac{\chi^{2}}{n}}$$

where n is the total number of observations.
While there is no perfect way of describing the confusion matrix of true and false positives and negatives by a single number, the Matthews correlation coefficient is generally regarded as being one of the best such measures.[7] Other measures, such as the proportion of correct predictions (also termed accuracy), are not useful when the two classes are of very different sizes. For example, assigning every object to the larger set achieves a high proportion of correct predictions, but is not generally a useful classification.
The MCC can be calculated directly from the confusion matrix using the formula:

$$\mathrm{MCC} = \frac{\mathit{TP} \times \mathit{TN} - \mathit{FP} \times \mathit{FN}}{\sqrt{(\mathit{TP}+\mathit{FP})(\mathit{TP}+\mathit{FN})(\mathit{TN}+\mathit{FP})(\mathit{TN}+\mathit{FN})}}$$

In this equation, TP is the number of true positives, TN the number of true negatives, FP the number of false positives and FN the number of false negatives. If any of the four sums in the denominator is zero, the denominator can be arbitrarily set to one; this results in a Matthews correlation coefficient of zero, which can be shown to be the correct limiting value.
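For illustration, a minimal Python sketch of this computation, including the zero-denominator convention just described, could look as follows (the function name mcc_from_counts is ours, not from any library):

```python
import math

def mcc_from_counts(tp, tn, fp, fn):
    """Matthews correlation coefficient from the four confusion-matrix counts."""
    numerator = tp * tn - fp * fn
    denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # If any of the four sums is zero, treat the denominator as 1,
    # which yields the limiting value MCC = 0.
    if denominator == 0:
        return 0.0
    return numerator / denominator

# Counts taken from the cats-and-dogs example further below
print(mcc_from_counts(tp=5, tn=3, fp=2, fn=3))  # ≈ 0.2196
```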
The MCC can also be calculated with the formula:

$$\mathrm{MCC} = \sqrt{\mathit{PPV} \cdot \mathit{TPR} \cdot \mathit{TNR} \cdot \mathit{NPV}} - \sqrt{\mathit{FDR} \cdot \mathit{FNR} \cdot \mathit{FPR} \cdot \mathit{FOR}}$$

using the positive predictive value (PPV), the true positive rate (TPR), the true negative rate (TNR), the negative predictive value (NPV), the false discovery rate (FDR), the false negative rate (FNR), the false positive rate (FPR), and the false omission rate (FOR).
The original formula as given by Matthews was:[1]

$$N = \mathit{TN} + \mathit{TP} + \mathit{FN} + \mathit{FP}$$
$$S = \frac{\mathit{TP} + \mathit{FN}}{N}$$
$$P = \frac{\mathit{TP} + \mathit{FP}}{N}$$
$$\mathrm{MCC} = \frac{\mathit{TP}/N - S \times P}{\sqrt{P\,S\,(1 - S)(1 - P)}}$$

This is equal to the formula given above. As a correlation coefficient, the Matthews correlation coefficient is the geometric mean of the regression coefficients of the problem and its dual. The component regression coefficients of the Matthews correlation coefficient are Markedness (Δp) and Youden's J statistic (Informedness or Δp').[7][8] Markedness and Informedness correspond to different directions of information flow; they generalize the Δp statistics and Youden's J statistic, respectively, and their geometric mean generalizes the Matthews correlation coefficient, to more than two classes.[7]
Some scientists claim the Matthews correlation coefficient to be the most informative single score to establish the quality of a binary classifier prediction in a confusion matrix context.[9]
Example
Given a sample of 13 pictures, 8 of cats and 5 of dogs, where cats belong to class 1 and dogs belong to class 0,
- actual = [1,1,1,1,1,1,1,1,0,0,0,0,0],
assume that a classifier that distinguishes between cats and dogs has been trained, and that the 13 pictures are run through it. The classifier makes 8 accurate predictions and misses 5: 3 cats wrongly predicted as dogs (first 3 predictions) and 2 dogs wrongly predicted as cats (last 2 predictions).
- prediction = [0,0,0,1,1,1,1,1,0,0,0,1,1]
With these two labelled sets (actual and predictions) we can create a confusion matrix that will summarize the results of testing the classifier:
| | Predicted class: Cat | Predicted class: Dog |
| --- | --- | --- |
| Actual class: Cat | **5** | 3 |
| Actual class: Dog | 2 | **3** |
In this confusion matrix, of the 8 cat pictures, the system judged that 3 were dogs, and of the 5 dog pictures, it predicted that 2 were cats. All correct predictions are located in the diagonal of the table (highlighted in bold), so it is easy to visually inspect the table for prediction errors, as they will be represented by values outside the diagonal.
In abstract terms, the confusion matrix is as follows:
| | Predicted P | Predicted N |
| --- | --- | --- |
| Actual P | TP | FN |
| Actual N | FP | TN |
where: P = Positive; N = Negative; TP = True Positive; FP = False Positive; TN = True Negative; FN = False Negative.
With TP = 5, TN = 3, FP = 2 and FN = 3, plugging the numbers into the formula gives:

$$\mathrm{MCC} = \frac{5 \times 3 - 2 \times 3}{\sqrt{(5+2)(5+3)(3+2)(3+3)}} = \frac{9}{\sqrt{1680}} \approx 0.219$$
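The same value can be obtained directly from the two label lists, for instance (assuming scikit-learn is installed) with sklearn.metrics.matthews_corrcoef:

```python
from sklearn.metrics import matthews_corrcoef

actual     = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
prediction = [0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1]

# MCC computed from the paired true/predicted labels
print(matthews_corrcoef(actual, prediction))  # ≈ 0.2196
```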
Confusion matrix
Sources: Fawcett (2006),[10] Powers (2011),[11] Ting (2011),[12] CAWCR,[13] D. Chicco & G. Jurman (2020, 2021),[14][15] Tharwat (2018).[16]
Let us define an experiment from P positive instances and N negative instances for some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as follows:
| | Condition positive | Condition negative |
| --- | --- | --- |
| Predicted condition positive | True positive (TP) | False positive (FP), Type I error |
| Predicted condition negative | False negative (FN), Type II error | True negative (TN) |

The measures derived from this table are:
- Prevalence = Σ Condition positive / Σ Total population
- Accuracy (ACC) = (Σ True positive + Σ True negative) / Σ Total population
- Positive predictive value (PPV), Precision = Σ True positive / Σ Predicted condition positive
- False discovery rate (FDR) = Σ False positive / Σ Predicted condition positive
- False omission rate (FOR) = Σ False negative / Σ Predicted condition negative
- Negative predictive value (NPV) = Σ True negative / Σ Predicted condition negative
- True positive rate (TPR), Recall, Sensitivity, probability of detection, Power = Σ True positive / Σ Condition positive
- False positive rate (FPR), Fall-out, probability of false alarm = Σ False positive / Σ Condition negative
- False negative rate (FNR), Miss rate = Σ False negative / Σ Condition positive
- Specificity (SPC), Selectivity, True negative rate (TNR) = Σ True negative / Σ Condition negative
- Positive likelihood ratio (LR+) = TPR / FPR
- Negative likelihood ratio (LR−) = FNR / TNR
- Diagnostic odds ratio (DOR) = LR+ / LR−
- F1 score = 2 · Precision · Recall / (Precision + Recall)
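As an illustrative sketch (the helper name and dictionary keys are ours, not from a library), the measures in this table can be computed from the four counts as follows; zero-denominator guards are omitted for brevity:

```python
def confusion_rates(tp, fp, fn, tn):
    """Derived measures of a 2x2 confusion matrix, following the definitions above."""
    total = tp + fp + fn + tn
    return {
        "prevalence": (tp + fn) / total,
        "accuracy":   (tp + tn) / total,
        "PPV":        tp / (tp + fp),        # precision
        "FDR":        fp / (tp + fp),
        "FOR":        fn / (fn + tn),
        "NPV":        tn / (fn + tn),
        "TPR":        tp / (tp + fn),        # recall, sensitivity
        "FPR":        fp / (fp + tn),        # fall-out
        "FNR":        fn / (tp + fn),        # miss rate
        "TNR":        tn / (fp + tn),        # specificity
        "F1":         2 * tp / (2 * tp + fp + fn),
    }

# Cat/dog example from above: TP=5, FP=2, FN=3, TN=3
print(confusion_rates(5, 2, 3, 3))
```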
Multiclass case
The Matthews correlation coefficient has been generalized to the multiclass case. This generalization was called the $R_K$ statistic (for K different classes) by the author, and is defined in terms of a K×K confusion matrix $C$:[17][18]

$$\mathrm{MCC} = \frac{\sum_{k}\sum_{l}\sum_{m} C_{kk}C_{lm} - C_{kl}C_{mk}}{\sqrt{\sum_{k}\left(\sum_{l}C_{kl}\right)\left(\sum_{k' \neq k}\sum_{l'}C_{k'l'}\right)}\;\sqrt{\sum_{k}\left(\sum_{l}C_{lk}\right)\left(\sum_{k' \neq k}\sum_{l'}C_{l'k'}\right)}}$$

When there are more than two labels the MCC will no longer range between −1 and +1. Instead the minimum value will lie between −1 and 0, depending on the true distribution; the maximum value is always +1.
This formula can be more easily understood by defining intermediate variables:[19]
- $t_k$, the number of times class k truly occurred,
- $p_k$, the number of times class k was predicted,
- $c$, the total number of samples correctly predicted,
- $s$, the total number of samples.

This allows the formula to be expressed as:

$$\mathrm{MCC} = \frac{c \cdot s - \vec{t} \cdot \vec{p}}{\sqrt{\left(s^{2} - \vec{p} \cdot \vec{p}\right)\left(s^{2} - \vec{t} \cdot \vec{t}\right)}}$$
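A minimal NumPy sketch of this formulation (the function name multiclass_mcc is ours; it is not a library routine) might read:

```python
import numpy as np

def multiclass_mcc(C):
    """MCC from a KxK confusion matrix, using the intermediate variables above.

    The result does not depend on whether rows or columns hold the true classes,
    because the formula is symmetric in the vectors t and p.
    """
    C = np.asarray(C, dtype=float)
    t = C.sum(axis=1)   # number of times each class truly occurred (row sums)
    p = C.sum(axis=0)   # number of times each class was predicted (column sums)
    c = np.trace(C)     # total number of samples correctly predicted
    s = C.sum()         # total number of samples
    numerator = c * s - t @ p
    denominator = np.sqrt((s**2 - p @ p) * (s**2 - t @ t))
    return 0.0 if denominator == 0 else numerator / denominator
```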
Using the above formula to compute the MCC for the dog-and-cat prediction discussed above, treating the 2×2 confusion matrix as a multiclass (K = 2) example:

$$c \cdot s - \vec{t} \cdot \vec{p} = (8 \times 13) - (7 \times 8) - (6 \times 5) = 18$$
$$\sqrt{\left(s^{2} - \vec{p} \cdot \vec{p}\right)\left(s^{2} - \vec{t} \cdot \vec{t}\right)} = \sqrt{(13^{2} - 7^{2} - 6^{2})(13^{2} - 8^{2} - 5^{2})} = \sqrt{6720} \approx 81.975$$
$$\mathrm{MCC} = \frac{18}{81.975} \approx 0.219$$
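Applying the multiclass_mcc sketch above to the cat-and-dog confusion matrix reproduces the value obtained with the binary formula:

```python
import numpy as np

# Confusion matrix of the example (rows: actual cat/dog, columns: predicted cat/dog)
C = np.array([[5, 3],
              [2, 3]])

print(multiclass_mcc(C))  # ≈ 0.2196, matching the binary MCC above
```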
Advantages of MCC over accuracy and F1 score
As explained by Davide Chicco in his paper "Ten quick tips for machine learning in computational biology" (BioData Mining, 2017) and by Davide Chicco and Giuseppe Jurman in their paper "The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation" (BMC Genomics, 2020), the Matthews correlation coefficient is more informative than the F1 score and accuracy in evaluating binary classification problems, because it takes into account the balance ratios of the four confusion matrix categories (true positives, true negatives, false positives, false negatives).[9][20]
The former article explains, for Tip 8:
In order to have an overall understanding of your prediction, you decide to take advantage of common statistical scores, such as accuracy, and F1 score.
$$\mathrm{Accuracy} = \frac{\mathit{TP} + \mathit{TN}}{\mathit{TP} + \mathit{TN} + \mathit{FP} + \mathit{FN}}$$
(Equation 1, accuracy: worst value = 0; best value = 1)

$$F_1\ \mathrm{score} = \frac{2\,\mathit{TP}}{2\,\mathit{TP} + \mathit{FP} + \mathit{FN}}$$
(Equation 2, F1 score: worst value = 0; best value = 1)
However, even if accuracy and F1 score are widely employed in statistics, both can be misleading, since they do not fully consider the size of the four classes of the confusion matrix in their final score computation.
Suppose, for example, you have a very imbalanced validation set made of 100 elements, 95 of which are positive elements, and only 5 are negative elements (as explained in Tip 5). And suppose also you made some mistakes in designing and training your machine learning classifier, and now you have an algorithm which always predicts positive. Imagine that you are not aware of this issue.
By applying your only-positive predictor to your imbalanced validation set, therefore, you obtain values for the confusion matrix categories:
TP = 95, FP = 5; TN = 0, FN = 0.
These values lead to the following performance scores: accuracy = 95%, and F1 score = 97.44%. By reading these over-optimistic scores, then you will be very happy and will think that your machine learning algorithm is doing an excellent job. Obviously, you would be on the wrong track.
On the contrary, to avoid these dangerous misleading illusions, there is another performance score that you can exploit: the Matthews correlation coefficient [40] (MCC).
$$\mathrm{MCC} = \frac{\mathit{TP} \times \mathit{TN} - \mathit{FP} \times \mathit{FN}}{\sqrt{(\mathit{TP}+\mathit{FP})(\mathit{TP}+\mathit{FN})(\mathit{TN}+\mathit{FP})(\mathit{TN}+\mathit{FN})}}$$
(Equation 3, MCC: worst value = −1; best value = +1).
By considering the proportion of each class of the confusion matrix in its formula, its score is high only if your classifier is doing well on both the negative and the positive elements.
In the example above, the MCC score would be undefined (since TN and FN would be 0, therefore the denominator of Equation 3 would be 0). By checking this value, instead of accuracy and F1 score, you would then be able to notice that your classifier is going in the wrong direction, and you would become aware that there are issues you ought to solve before proceeding.
Consider this other example. You ran a classification on the same dataset which led to the following values for the confusion matrix categories:
TP = 90, FP = 4; TN = 1, FN = 5.
In this example, the classifier has performed well in classifying positive instances, but was not able to correctly recognize negative data elements. Again, the resulting F1 score and accuracy scores would be extremely high: accuracy = 91%, and F1 score = 95.24%. Similarly to the previous case, if a researcher analyzed only these two score indicators, without considering the MCC, they would wrongly think the algorithm is performing quite well in its task, and would have the illusion of being successful.
On the other hand, checking the Matthews correlation coefficient would be pivotal once again. In this example, the value of the MCC would be 0.14 (Equation 3), indicating that the algorithm is performing similarly to random guessing. Acting as an alarm, the MCC would be able to inform the data mining practitioner that the statistical model is performing poorly.
For these reasons, we strongly encourage to evaluate each test performance through the Matthews correlation coefficient (MCC), instead of the accuracy and the F1 score, for any binary classification problem.
— Davide Chicco, Ten quick tips for machine learning in computational biology[9]
Note that the F1 score depends on which class is defined as the positive class. In the first example above, the F1 score is high because the majority class is defined as the positive class. Inverting the positive and negative classes results in the following confusion matrix:
TP = 0, FP = 0; TN = 5, FN = 95
This gives an F1 score = 0%.
The MCC does not depend on which class is designated as the positive one, an advantage over the F1 score, since it avoids the problem of an incorrectly defined positive class distorting the evaluation.
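As an illustrative check of the figures quoted above (assuming scikit-learn is available), both imbalanced scenarios can be reproduced directly from label lists:

```python
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

# Scenario 1: 95 positives, 5 negatives; the classifier always predicts positive
y_true = [1] * 95 + [0] * 5
y_pred = [1] * 100
print(accuracy_score(y_true, y_pred))     # 0.95
print(f1_score(y_true, y_pred))           # ≈ 0.974
print(matthews_corrcoef(y_true, y_pred))  # 0.0 (scikit-learn returns 0 when the denominator vanishes)

# Scenario 2: TP = 90, FN = 5, FP = 4, TN = 1
y_true = [1] * 95 + [0] * 5
y_pred = [1] * 90 + [0] * 5 + [1] * 4 + [0] * 1
print(accuracy_score(y_true, y_pred))     # 0.91
print(f1_score(y_true, y_pred))           # ≈ 0.952
print(matthews_corrcoef(y_true, y_pred))  # ≈ 0.14
```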
See also
- Cohen's kappa
- Cramér's V, a similar measure of association between nominal variables.
- F1 score
- Phi coefficient
- Fowlkes–Mallows index
References
- Matthews, B. W. (1975). "Comparison of the predicted and observed secondary structure of T4 phage lysozyme". Biochimica et Biophysica Acta (BBA) - Protein Structure. 405 (2): 442–451. doi:10.1016/0005-2795(75)90109-9. PMID 1180967.
- Cramer, H. (1946). Mathematical Methods of Statistics. Princeton: Princeton University Press, p. 282 (second paragraph). ISBN 0-691-08004-6
- Date unclear, but prior to his death in 1936.
- Yule, G. Udny (1912). "On the Methods of Measuring Association Between Two Attributes". Journal of the Royal Statistical Society. 75 (6): 579–652. doi:10.2307/2340126. JSTOR 2340126.
- Boughorbel, S.B (2017). "Optimal classifier for imbalanced data using Matthews Correlation Coefficient metric". PLOS ONE. 12 (6): e0177678. Bibcode:2017PLoSO..1277678B. doi:10.1371/journal.pone.0177678. PMC 5456046. PMID 28574989.
- Chicco, D. (2021). "The Matthews correlation coefficient (MCC) is more reliable than balanced accuracy, bookmaker informedness, and markedness in two-class confusion matrix evaluation". BioData Mining. 14. doi:10.1186/s13040-021-00244-z.
- Powers, David M W (2011). "Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation" (PDF). Journal of Machine Learning Technologies. 2 (1): 37–63.
- Perruchet, P.; Peereman, R. (2004). "The exploitation of distributional information in syllable processing". J. Neurolinguistics. 17 (2–3): 97–119. doi:10.1016/s0911-6044(03)00059-9. S2CID 17104364.
- Chicco D (December 2017). "Ten quick tips for machine learning in computational biology". BioData Mining. 10 (35): 35. doi:10.1186/s13040-017-0155-3. PMC 5721660. PMID 29234465.
- Fawcett, Tom (2006). "An Introduction to ROC Analysis" (PDF). Pattern Recognition Letters. 27 (8): 861–874. doi:10.1016/j.patrec.2005.10.010.
- Powers, David M. W. (2011). "Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies. 2 (1): 37–63.
- Ting, Kai Ming (2011). Sammut, Claude; Webb, Geoffrey I. (eds.). Encyclopedia of machine learning. Springer. doi:10.1007/978-0-387-30164-8. ISBN 978-0-387-30164-8.
- Brooks, Harold; Brown, Barb; Ebert, Beth; Ferro, Chris; Jolliffe, Ian; Koh, Tieh-Yong; Roebber, Paul; Stephenson, David (2015-01-26). "WWRP/WGNE Joint Working Group on Forecast Verification Research". Collaboration for Australian Weather and Climate Research. World Meteorological Organisation. Retrieved 2019-07-17.
- Chicco D., Jurman G. (January 2020). "The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation". BMC Genomics. 21 (1): 6-1–6-13. doi:10.1186/s12864-019-6413-7. PMC 6941312. PMID 31898477.
- Chicco D., Toetsch N., Jurman G. (February 2021). "The Matthews correlation coefficient (MCC) is more reliable than balanced accuracy, bookmaker informedness, and markedness in two-class confusion matrix evaluation". BioData Mining. 14 (13): 1–22. doi:10.1186/s13040-021-00244-z. PMID 33541410.
- Tharwat A. (August 2018). "Classification assessment methods". Applied Computing and Informatics. doi:10.1016/j.aci.2018.08.003.
- Gorodkin, Jan (2004). "Comparing two K-category assignments by a K-category correlation coefficient". Computational Biology and Chemistry. 28 (5): 367–374. doi:10.1016/j.compbiolchem.2004.09.006. PMID 15556477.
- Gorodkin, Jan. "The Rk Page". The Rk Page. Retrieved 28 December 2016.
- "Matthew Correlation Coefficient". scikit-learn.org.
- Chicco D, Jurman G (January 2020). "The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation". BMC Genomics. 21 (1): 6-1–6-13. doi:10.1186/s12864-019-6413-7. PMC 6941312. PMID 31898477.