
Precision, Recall, & F1 Score Intuitively Explained

  51,843 views

Scarlett's Log

A day ago

Comments: 32
@swedishguyonyoutube4684 · A year ago
Many thanks for this intuitive yet detailed and well-reasoned explanation!
@sarvagyagupta1744 · 2 years ago
Another good way to explain the F1 score is that it weighs FN and FP equally: F1 = TP / (TP + (FP + FN)/2). So this is another way of seeing how it's a good mix of precision and recall.
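(To see why this count-based form agrees with the usual harmonic mean of precision and recall, here is a minimal Python sketch; the TP/FP/FN counts are made-up numbers purely for illustration.)

```python
# Check that TP / (TP + (FP + FN)/2) matches the harmonic mean of
# precision and recall. The counts below are arbitrary example numbers.
TP, FP, FN = 30, 10, 20

precision = TP / (TP + FP)   # 0.75
recall = TP / (TP + FN)      # 0.60

f1_harmonic = 2 * precision * recall / (precision + recall)
f1_counts = TP / (TP + 0.5 * (FP + FN))

print(f1_harmonic, f1_counts)  # both print 0.666..., the two forms agree
```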
@Maddy6919 · 8 months ago
These concepts "clicked" for me thanks to this video, really great work and thank you so much for sharing.
@jhfoleiss · 3 years ago
Awesome explanation, thank you very much. It shines because it gives the motivation for each metric. Thanks!
@justsuyash · 8 months ago
Probably the best explanation on the net.
@vladimirblagojevic1950 · 3 years ago
Great explanation because it includes motivation for the metrics and how they relate to each other.
@RolandoLopezNieto · 4 months ago
Great explanation, thank you very much
@aristotlesocrates8409 · 4 months ago
Excellent explanation
@stretch8390 · A year ago
Really clear explanation, well done.
@IAMjulesATX · 2 years ago
this is sooooo helpful!! thank you!
@samirkhan6195 · 2 years ago
I couldn't get one thing, so I'm putting it here in case you can answer. In the 'Precision' slide, you mentioned that it solves the problem of cheating accuracy: if 99% of people are healthy and 1% unhealthy, and your model predicts everyone as 'Healthy', then precision would be 0. But how? Say there are 100 people, 99 healthy (positive) and 1 unhealthy (negative), and the model predicts everyone as positive (cheating). That gives 99 true positives and 1 false positive, so Precision = TP / (TP + FP) = 99 / (99 + 1) = 0.99 = 99%, right? How is it 0?
@ScarlettsLog · 2 years ago
Your confusion is in defining what positive and negative are. If you are "looking for" the unhealthy case, then unhealthy = positive. Thus, TP = 0 in your example. Hope that helps.
@jeromeeusebius · 2 years ago
As Scarlett noted in their reply, you have the "unhealthy" and "healthy" classes mixed up. If you look at one of the early slides about terminology, the class we are interested in is the one we call positive, and the one we aren't interested in is the negative class. In this case of infected and uninfected people, we are interested in the infected people, so that is the positive class. With the accuracy metric, we could cheat by just predicting the negative class (all healthy people, of which we have 99) and still achieve an accuracy of 99%, which is very high but misleading. With precision, defined as TP/(TP + FP), if we predict all negatives then there are no true positives: a TP requires the model's positive prediction to match a positive label, but here the model predicts only negatives. Consequently TP = 0, and hence precision is 0. Hopefully this comment adds more detail on why the precision is 0.
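(A minimal Python sketch of this 99-healthy / 1-unhealthy example, assuming unhealthy = positive; it shows accuracy looking great while precision collapses for the all-"healthy" cheater. The 0/0 edge case is explicitly defaulted to 0 here.)

```python
# 100 people: 1 unhealthy (positive), 99 healthy (negative).
labels      = ["unhealthy"] + ["healthy"] * 99
# The "cheating" model predicts everyone as healthy.
predictions = ["healthy"] * 100

TP = sum(p == "unhealthy" and y == "unhealthy" for p, y in zip(predictions, labels))
FP = sum(p == "unhealthy" and y == "healthy"   for p, y in zip(predictions, labels))
correct = sum(p == y for p, y in zip(predictions, labels))

accuracy  = correct / len(labels)                      # 0.99 -- looks impressive
precision = TP / (TP + FP) if (TP + FP) > 0 else 0.0   # 0/0 edge case, reported as 0
print(accuracy, precision)
```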
@samirkhan6195 · 2 years ago
@jeromeeusebius I get it. Thanks for taking the time to write this all out, I appreciate it!
@MenTaLLyMenTaL · 2 years ago
@ScarlettsLog I have a further question on this if you can help. With healthy = negative and unhealthy = positive, if the model predicts everyone as healthy (negative), then TP = 0 and FP = 0, so Precision = TP / (TP + FP) = 0/0, which is undefined. How can we call it 0% precise? Thanks. Along the same lines, in the slide at 6:35 you say "Cheating precision 100% => 0% recall". How can the model cheat its way to 100% precision? For that, TP must equal (TP + FP), and that sum must be nonzero. If the model makes all negative guesses, then Precision = TP/(TP + FP) = 0/0, undefined again as above. If the model makes all positive guesses, then precision is some fraction that is not necessarily 100%.
@zerocolours8470 · 10 months ago
@MenTaLLyMenTaL You are correct, it would be undefined; it's an edge case that implementations typically default to 0. For anyone else confused, here are two worked cases showing how precision and recall normally behave and what happens in the edge case.

Say we have a dataset of 100 people: 1 person is unhealthy and 99 are healthy. If we take unhealthy = positive and healthy = negative, then: a true positive means we guess someone is unhealthy and they actually are; a false positive means we guess someone is unhealthy but they are actually healthy; a true negative means we guess someone is healthy and they actually are; a false negative means we guess someone is healthy but they are actually unhealthy.

Experiment 1: the model predicts everyone as unhealthy. Precision = TP/(TP + FP): we correctly flag the one unhealthy person (TP = 1) and flag 99 actually-healthy people as unhealthy (FP = 99), so precision is 1/100 = 0.01, i.e. 1%. Recall = TP/(TP + FN): TP = 1 as before, and since every guess is "unhealthy" we never guess anyone as healthy, so FN = 0. Recall is 1/(1 + 0) = 1, i.e. 100%.

Experiment 2: the model predicts everyone as healthy. TP = 0, because we never guess "unhealthy" at all, so we never catch the one person who actually is. FP = 0 as well, since a false positive requires an "unhealthy" guess and we never make one. So precision = 0/0, which is undefined; implementations note this special case and default it to 0 (YT deletes links, so search for sklearn.metrics.precision_recall_fscore_support and see the notes section). However, in this specific scenario you could argue precision should be 100%, because we never falsely flag a healthy person as unhealthy (FP = 0). In practice a library or framework will report precision as 0 here, but as an ML practitioner you need to decide what an undefined precision means in the context of your problem. Recall = TP/(TP + FN): TP = 0 as above, and FN = 1 because we guess the one sick person as healthy, so recall = 0/(0 + 1) = 0.
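(The two experiments above can be reproduced with scikit-learn's precision_recall_fscore_support, the function the comment points to; this sketch assumes scikit-learn is installed and passes zero_division=0 to make the 0/0 default explicit.)

```python
from sklearn.metrics import precision_recall_fscore_support

y_true = [1] + [0] * 99        # 1 = unhealthy (positive), 0 = healthy (negative)

all_unhealthy = [1] * 100      # experiment 1: predict everyone as unhealthy
all_healthy   = [0] * 100      # experiment 2: predict everyone as healthy

for name, y_pred in [("all unhealthy", all_unhealthy), ("all healthy", all_healthy)]:
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, pos_label=1, average="binary", zero_division=0
    )
    print(f"{name}: precision={p:.2f} recall={r:.2f}")
# all unhealthy: precision=0.01 recall=1.00
# all healthy:   precision=0.00 (the undefined 0/0 case) recall=0.00
```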
@kinkysaru769 · 2 years ago
You have very clear articulation. Thank you
@hansoflaherty9864 · 10 months ago
Thank you
@mohammadkashif750 · 2 years ago
very well explained
@janithforex4075 · A year ago
Nice explanations. When you say negative labels are left out of the picture in the precision calculation, I don't quite get it, because the precision formula contains the term FP, which represents a quantity from the negative labels. Am I off track?
@ScarlettsLog · A year ago
Leaving them out of the picture refers only to the guesses the model made: precision looks only at the positive guesses. Yes, you need the negative labels to check whether a positive guess was an FP, but precision never looks at the model's own pool of negative guesses. Hope that makes sense, and sorry if the wording in the slides caused confusion.
@felipe_marra · 9 months ago
Thanks
@jairtorres289 · 3 years ago
Loved it!, thanks a lot!
@CursiveStarVlogs · A year ago
Good work 👍🧡
@stanislavhinov4225 · 3 years ago
Great video!
@timguo6858 · A year ago
Love you!
@GauchoMwenyewe · 2 years ago
Good stuff
@VOGTLANDOUTDOORS · A year ago
Enjoyed the video, keep up the great work! I do find myself with one question though: what about the "null" case, did it actually get considered? In one example you describe type B as the "positive" case among types A, B, C, and D, which makes sense, but you then describe the "negatives" as being only A, C, and D. What about the case of "not A, B, C, or D"? This is the null set, and it occurs quite regularly in computer vision object detection models. Imagine this scenario: an image of fruit on a table, with 3 red apples, 1 yellow apple, and 2 bananas. Say you're looking to detect apples, but your model only detects the 3 red apples and misses the yellow apple entirely. This is a case where there is an apple, and you didn't misclassify it as a banana, you missed detecting it completely. How would you describe this common scenario if you're only assuming that your model will classify everything in the image as either an apple or a banana, and didn't expect it to ignore the yellow apple altogether? It's been 2 years since you posted, so I'm not expecting a reply; I am hoping other viewers will ponder the explanations presented, as there's a bit more going on. Cheers, -Mark Vogt, Solution Architect/Data Scientist - AVANADE
@ScarlettsLog · A year ago
I think you hit on a very important point, which also highlights why the F1 score is rarely used for multi-class predictions. If you only consider the binary case, the issues you highlight go away: anything scoring at or above 0.5 counts as positive, and the rest as negative. This is a great reason to investigate metrics better suited for the multi-class case, even if accuracy and F1 give you "reasonable" results.
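(For readers curious about the multi-class case mentioned here: a small, purely illustrative sketch of scikit-learn's averaged F1 variants; the labels and predictions are made up.)

```python
from sklearn.metrics import f1_score

# Hypothetical 3-class example: classes A/B/C encoded as 0/1/2.
y_true = [0, 0, 1, 1, 2, 2, 2, 0]
y_pred = [0, 1, 1, 1, 2, 0, 2, 0]

# 'macro' averages the per-class F1 scores equally, so rare classes count fully;
# 'micro' pools TP/FP/FN over all classes (it equals accuracy for single-label data).
print(f1_score(y_true, y_pred, average="macro"))
print(f1_score(y_true, y_pred, average="micro"))
```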
@sfadfhgj · 3 months ago
I'm not a data scientist but... aren't we calculating the precision, recall, accuracy, etc. of the classifications that are *made* here? I think you could always create a confusion matrix for each object on a binary basis, "detected" and "not detected", so that you can measure the accuracy/precision of the model's *detection* capability rather than its classification of the objects. These feel like two different things to measure.
@elvenkim · 2 years ago
Awesome! I love the cheating cases brought in!
@gauravjha2148 · 2 years ago
Excellent explanation