
More Performance Evaluation Metrics You Should Know for Classification Problems

source link: https://mc.ai/more-performance-evaluation-metrics-you-should-know-for-classification-problems/

The equations of 4 key classification metrics

Recall versus Precision

Precision is the ratio of True Positives to all the positives predicted by the model.

Low precision: the more False Positives the model predicts, the lower the precision.

Recall (Sensitivity) is the ratio of True Positives to all the positives in your dataset.

Low recall: the more False Negatives the model predicts, the lower the recall.
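The figure with the equations is not reproduced here, but written out in terms of TP, TN, FP, and FN (true/false positives and negatives), and assuming the four metrics in question are accuracy, precision, recall, and the F1 score, they are:

```latex
\begin{align}
\text{Accuracy}  &= \frac{TP + TN}{TP + TN + FP + FN} \\
\text{Precision} &= \frac{TP}{TP + FP} \\
\text{Recall}    &= \frac{TP}{TP + FN} \\
F_1              &= \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
\end{align}
```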

The ideas of recall and precision may seem abstract, so let me illustrate the difference with three real cases.

Case 1: screening residents for COVID-19. In this setting:
  • the result of TP will be that residents who actually have COVID-19 are correctly diagnosed with COVID-19.
  • the result of TN will be that healthy residents are correctly identified as healthy.
  • the result of FP will be that actually healthy residents are predicted as COVID-19 residents.
  • the result of FN will be that actual COVID-19 residents are predicted as healthy residents.

In case 1, which scenario do you think will have the highest cost?

Imagine that we predict COVID-19 residents as healthy and they are not required to quarantine; there would be a massive number of new COVID-19 infections. The cost of a false negative is much higher than the cost of a false positive.
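As a rough numeric sketch (the counts below are made up purely for illustration), a COVID-19 classifier that misses infected residents can look accurate while having poor recall:

```python
# Hypothetical screening results for 1,000 residents (illustrative numbers only).
tp = 10   # infected residents correctly flagged as COVID-19
fn = 40   # infected residents wrongly cleared as healthy -> they skip quarantine
fp = 20   # healthy residents wrongly flagged as COVID-19
tn = 930  # healthy residents correctly cleared

accuracy = (tp + tn) / (tp + tn + fp + fn)
recall = tp / (tp + fn)        # share of actual COVID-19 cases we caught
precision = tp / (tp + fp)     # share of flagged residents who are truly infected

print(f"accuracy:  {accuracy:.2f}")   # 0.94 -- looks good on its own
print(f"recall:    {recall:.2f}")     # 0.20 -- 80% of infected residents walk free
print(f"precision: {precision:.2f}")  # 0.33
```

Despite the 94% accuracy, the 20% recall is what reveals that most infected residents would be sent home without quarantine.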

Case 2: filtering spam emails. In this setting:

  • the result of TP will be that spam emails are placed in the spam folder.
  • the result of TN will be that important emails are delivered to the inbox.
  • the result of FP will be that important emails are placed in the spam folder.
  • the result of FN will be that spam emails are delivered to the inbox.

In case 2, which scenario do you think will have the highest cost?

Well, since missing important emails will clearly be more of a problem than receiving spam, we can say that in this case, FP will have a higher cost than FN.
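With equally made-up counts, the same arithmetic shows why precision is the number to watch for a spam filter: every false positive is an important email lost to the spam folder.

```python
# Hypothetical results for 1,000 incoming emails (illustrative numbers only).
tp = 150  # spam correctly sent to the spam folder
fp = 30   # important emails wrongly sent to the spam folder -> the costly mistake
fn = 20   # spam that slipped into the inbox -> merely annoying
tn = 800  # important emails correctly delivered

precision = tp / (tp + fp)  # share of "spam" decisions that were actually spam
recall = tp / (tp + fn)     # share of all spam that was caught

print(f"precision: {precision:.2f}")  # 0.83 -- 30 important emails were lost
print(f"recall:    {recall:.2f}")     # 0.88
```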

Case 3: predicting whether a loan is good or bad. In this setting:

  • the result of TP will be that bad loans are correctly predicted as bad loans.
  • the result of TN will be that good loans are correctly predicted as good loans.
  • the result of FP will be that (actual) good loans are incorrectly predicted as bad loans.
  • the result of FN will be that (actual) bad loans are incorrectly predicted as good loans.

In case 3, which scenario do you think will have the highest cost?

Banks would lose a large amount of money if actual bad loans are predicted as good loans, because those loans would not be repaid. On the other hand, banks would only miss out on some extra revenue if actual good loans are predicted as bad loans. Therefore, the cost of False Negatives is much higher than the cost of False Positives.

Summary

In practice, the cost of a false negative is not the same as the cost of a false positive; which one matters more depends on the specific case. It is evident that we should not only calculate accuracy, but also evaluate our model using other metrics, for example Recall and Precision.
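As a minimal sketch of what that looks like in code (assuming scikit-learn is installed; the labels below are toy values, not data from any of the cases above):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Toy ground-truth and predicted labels (1 = positive class, 0 = negative class).
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]

print("accuracy: ", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
```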

