What is confidence threshold #9679

Closed
284nnuS opened this issue Oct 3, 2022 · 12 comments
Labels: question (Further information is requested), Stale (scheduled for closing soon)

Comments


284nnuS commented Oct 3, 2022

Search before asking

Question

Hello, I'm a newbie, so may I ask you about this line:
parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold')
I want to understand this hyperparameter. Where can I find an explanation of it?

Additional

No response

284nnuS added the question label on Oct 3, 2022
@maxPrakken

The confidence indicates how certain the model is that a prediction matches a certain class. The threshold determines the minimum confidence required before the model labels something. For example, with a confidence threshold of 0.6, the model has to be at least 60% sure that the object you're trying to classify is that object before it will label it.

https://support.ultimate.ai/hc/en-us/articles/7941166026258-Confidence-Thresholds

Here's an article about it if you want to learn more.
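
As a quick illustration, here is a minimal sketch of that filtering step; the (label, score) detections below are made up for the example, not real YOLOv5 output:

    # Keep only detections whose confidence clears the threshold.
    detections = [("dog", 0.91), ("cat", 0.55), ("dog", 0.62), ("person", 0.30)]  # hypothetical

    conf_thres = 0.6  # the model must be at least 60% sure before a detection is kept
    kept = [(label, score) for label, score in detections if score >= conf_thres]
    print(kept)  # [('dog', 0.91), ('dog', 0.62)]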


284nnuS commented Oct 7, 2022


Is the confidence related to IoU? For example, as you say, conf 0.6 means the model is at least 60% sure. But what IoU does it have?
I hope you can reply to me.
Many thanks

@maxPrakken

"The confidence threshold is the minimum score at which the model considers a prediction to be a true prediction (otherwise it ignores the prediction entirely). The IoU threshold is the minimum overlap between ground-truth and prediction boxes for the prediction to be considered a true positive." These two values are used to calculate the mAP but are not directly related. IoU, just like confidence, is a value extracted from the results of your model.

I hope this helped you understand better. Below I'll also link a Stack Overflow post whose accepted answer explains the concept pretty well; it also relates to YOLOv5, so it should be applicable.

https://stackoverflow.com/questions/68527004/selecting-an-iou-and-confidence-threshold-for-evaluation-of-model-performance
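
To make the IoU side of this concrete, here is a small sketch of how the overlap between a predicted box and a ground-truth box can be computed; the boxes are hypothetical [x1, y1, x2, y2] corner coordinates, not output from any particular model:

    def box_iou(a, b):
        """Intersection over Union of two [x1, y1, x2, y2] boxes."""
        # Width/height of the intersection rectangle (zero if the boxes don't overlap)
        iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = iw * ih
        union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
        return inter / union if union > 0 else 0.0

    # Hypothetical prediction vs. ground-truth box
    print(box_iou([0, 0, 10, 10], [5, 5, 15, 15]))  # ~0.14, below a typical 0.5 threshold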


silversurfer11 commented Oct 11, 2022


@maxPrakken As per my understanding, 'conf-thres' and 'iou-thres' in YOLOv5 are both used in NMS to find the final predicted bounding box among the multiple candidate boxes for a particular object. 'conf-thres' is used in the confidence-thresholding step of NMS, where all candidate boxes whose confidence score is below conf-thres are suppressed. Of the boxes remaining after this first step, the one with the maximum confidence score is chosen as the final predicted bounding box for that object. In the next step, the IoU-thresholding step, any remaining boxes whose IoU with the box chosen in step 1 exceeds iou-thres are suppressed. This ideally leaves one bounding box per object in an image, and that becomes the predicted bounding box. That is the overall NMS procedure for obtaining predictions. The predicted bounding box is then matched with the ground-truth (label) bounding box to calculate IoU, precision, and other metrics. Please correct me if my understanding of the two hyperparameters is wrong.
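
For reference, here is a simplified single-class sketch of those two NMS steps in plain Python; it is only an illustration, not the batched multi-class implementation in YOLOv5's non_max_suppression(), and the boxes at the end are hypothetical:

    def box_iou(a, b):
        """IoU of two [x1, y1, x2, y2] boxes."""
        iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = iw * ih
        union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
        return inter / union if union > 0 else 0.0

    def nms(boxes, scores, conf_thres=0.25, iou_thres=0.45):
        """Greedy single-class NMS over [x1, y1, x2, y2] boxes and confidence scores."""
        # Step 1: confidence thresholding -- drop low-confidence candidates
        cands = sorted([(b, s) for b, s in zip(boxes, scores) if s >= conf_thres],
                       key=lambda bs: bs[1], reverse=True)
        keep = []
        while cands:
            best, best_score = cands.pop(0)  # highest-confidence candidate is kept
            keep.append((best, best_score))
            # Step 2: IoU thresholding -- suppress candidates overlapping the kept box
            cands = [(b, s) for b, s in cands if box_iou(best, b) < iou_thres]
        return keep

    # Two hypothetical boxes around the same object plus one unrelated box
    print(nms([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], [0.9, 0.8, 0.7]))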
However, I am not quite sure how the decision between true positive and false positive is made when comparing the predicted bounding box with the ground-truth (label) bounding box. Is the iou-thres used for NMS also used here to decide whether the prediction is a true positive? @glenn-jocher could you please confirm this?


github-actions bot commented Nov 11, 2022

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

github-actions bot added the Stale label on Nov 11, 2022
github-actions bot closed this as not planned on Nov 22, 2022
@yesid-acm


Hello @glenn-jocher, can you answer this: is the iou-thres used for NMS also used here to decide whether the prediction is a true positive? Could you please confirm this?

@glenn-jocher

@yesid-acm yes, you are absolutely correct. Both 'conf-thres' and 'iou-thres' are used in NMS to determine the final predicted bounding box from the multiple candidate boxes for a specific object. 'conf-thres' is used in the confidence-score thresholding step, while 'iou-thres' is used in the IoU thresholding step: 'conf-thres' filters out bounding boxes with confidence scores below the set value, and 'iou-thres' then suppresses any remaining boxes whose IoU with the selected bounding box exceeds the threshold.

Regarding the decision between true and false when comparing predicted and ground-truth bounding boxes, the 'iou-thres' is indeed used to determine whether the prediction is a true positive. This value helps decide whether the overlap between the predicted and ground-truth bounding boxes is significant enough to consider the prediction a true positive.
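
As an illustration of that matching step, here is a minimal sketch at a single IoU threshold; the IoU values are hypothetical, and the predictions are assumed to be sorted by descending confidence:

    def match_predictions(ious, iou_thres=0.5):
        """ious[i][j] is the IoU between prediction i and ground-truth box j."""
        matched_gt = set()
        results = []  # one True (TP) or False (FP) per prediction
        for pred_ious in ious:
            # Best still-unmatched ground-truth box for this prediction
            best_j, best_iou = -1, 0.0
            for j, ov in enumerate(pred_ious):
                if j not in matched_gt and ov > best_iou:
                    best_j, best_iou = j, ov
            if best_iou >= iou_thres:
                matched_gt.add(best_j)
                results.append(True)   # enough overlap with an unused GT box -> TP
            else:
                results.append(False)  # no sufficiently overlapping GT box left -> FP
        return results

    ious = [[0.80, 0.10],   # prediction 0 matches GT 0 -> TP
            [0.60, 0.05],   # prediction 1 also overlaps GT 0, but it is already taken -> FP
            [0.20, 0.55]]   # prediction 2 matches GT 1 -> TP
    print(match_predictions(ious))  # [True, False, True]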

Your understanding of these hyperparameters and their role in NMS is accurate. I hope this confirmation helps. Feel free to reach out if you have further questions!

@dinhthihuyen

@glenn-jocher, why is the conf-thres parameter in validation quite low (0.001 by default)?
As I understand from the above, if conf-thres equals 0.001 and iou equals 0.6 (the default), then in the NMS stage:

  • Step 1: Filter out and only keep bounding boxes with conf >= 0.001
  • Step 2: From the bounding boxes remaining after Step 1, only consider those with IoU >= 0.6

Is my understanding correct?
I have another question: when we say mAP@0.5, does that mean IoU equals 0.5? Then what does the 0.6 mean?
I am confused because when running validation with iou equal to 0.6, the results still include mAP@50 and mAP@50:95.
Thanks so much for your support!

@glenn-jocher

Yes, you are correct about the default values in NMS. During validation, the low default 'conf-thres' (0.001) means that even detections with exceedingly low confidence are included in the validation process. Then, during NMS, this set of detections is subject to the IoU threshold (0.6 by default), which suppresses duplicate boxes that overlap a higher-confidence box by more than that value.

Regarding mAP, it refers to mean average precision, where the IoU value is integrated into the calculation. The IoU value is shown after the '@' symbol, e.g., mAP@0.5. This signifies that an IoU of 0.5 is considered when evaluating average precision.

When both 'iou=0.6' and 'mAP@0.5' are present in the validation results, it indicates that evaluations are performed at multiple IoU thresholds, from 0.5 to 0.95 in steps of 0.05, hence mAP@0.5, mAP@0.55, mAP@0.6, and so on.
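
To make the relationship between those numbers concrete, here is a tiny sketch; the per-threshold AP values are invented purely for illustration:

    # mAP@0.5 is the AP at a single IoU threshold of 0.5, while mAP@0.5:0.95
    # averages AP over ten thresholds from 0.5 to 0.95 in steps of 0.05.
    iou_thresholds = [0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95]
    ap_per_threshold = [0.71, 0.69, 0.66, 0.62, 0.57, 0.50, 0.41, 0.30, 0.17, 0.05]  # made up

    map_50 = ap_per_threshold[0]                               # "mAP@0.5"
    map_50_95 = sum(ap_per_threshold) / len(ap_per_threshold)  # "mAP@0.5:0.95"
    print(map_50, round(map_50_95, 3))  # 0.71 0.468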

I hope this clarifies your inquiries, and please let me know if you have further questions!


dienhoa commented Jan 7, 2024

@glenn-jocher Maybe it's too basic, but I think we need some more clarification about confidence.

    xc = prediction[..., 4] > conf_thres  # candidates
### ....
        x[:, 5:] *= x[:, 4:5]  # conf = obj_conf * cls_conf

The objectness score corresponds to column 4 of the prediction (prediction[..., 4]).

So the conf-thres here filters out boxes that are unlikely to contain an object. There is another confidence score, the class confidence score, associated with columns 5 onward; this is the probability of the box belonging to a specific class, excluding the background.

  • We have another confidence score, which is the one shown in the final graphs (F1, PR curve, P, R, ...). This is the product of the objectness score and the class score (the x[:, 5:] *= x[:, 4:5]  # conf = obj_conf * cls_conf line shown above).

Maybe we need some refactoring of all these names; I think they may cause confusion.

And do you know how to choose the best objectness confidence threshold, for example to optimize the F1 score?

Thank you!

@glenn-jocher

@dienhoa you've made a great observation, and I appreciate your attention to detail. Indeed, the term 'confidence' can refer to different concepts within the YOLO architecture, and it's important to distinguish between them:

  1. Objectness Confidence: This score, found in the 4th column of the prediction tensor, reflects the model's confidence that a bounding box contains any object, regardless of class.

  2. Class Confidence: These scores, found in columns 5 and onwards, represent the model's confidence that the detected object belongs to each specific class.

  3. Combined Confidence: This is the product of the objectness confidence and the highest class confidence score for a given bounding box, which is used in the final detection output.
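
To tie these three together, here is a tiny sketch using one hypothetical prediction row with three classes, following the [x, y, w, h, obj, cls...] layout discussed above:

    import torch

    # One hypothetical prediction row: box (x, y, w, h), objectness, then 3 class scores
    pred = torch.tensor([0.50, 0.50, 0.20, 0.30, 0.90, 0.10, 0.70, 0.20])

    obj_conf = pred[4]              # 1. objectness confidence
    cls_conf = pred[5:]             # 2. per-class confidences
    combined = obj_conf * cls_conf  # 3. combined confidence (obj_conf * cls_conf)

    best_cls = int(combined.argmax())
    print(best_cls, round(float(combined[best_cls]), 2))  # class 1, combined score 0.63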

The --conf-thres in val.py is indeed the objectness confidence threshold, which filters out detections with an objectness score below the threshold. The combined confidence score is what's typically used for evaluating the model's performance and is plotted in the precision-recall (PR) curves and used to calculate metrics like F1 score.

Choosing the best objectness confidence threshold to optimize the F1 score can be a bit of trial and error. It's often done by evaluating the model at various thresholds and selecting the one that yields the highest F1 score. This process can be automated by iterating over a range of thresholds and calculating the F1 score for each.
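
As a rough example of that sweep, here is a small sketch; the prediction confidences, TP/FP flags, and ground-truth count are all hypothetical, and in practice you would take them from your own validation run:

    # Sweep confidence thresholds and keep the one that maximizes F1.
    preds = [(0.95, True), (0.90, True), (0.80, False), (0.75, True),
             (0.60, True), (0.55, False), (0.40, True), (0.20, False)]  # (conf, is_TP)
    num_gt = 6  # hypothetical number of ground-truth objects

    best_thres, best_f1 = None, 0.0
    for thres in [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]:
        kept = [is_tp for conf, is_tp in preds if conf >= thres]
        tp = sum(kept)                       # true positives above this threshold
        fp = len(kept) - tp                  # false positives above this threshold
        precision = tp / (tp + fp) if kept else 0.0
        recall = tp / num_gt
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        if f1 > best_f1:
            best_thres, best_f1 = thres, f1
    print(best_thres, round(best_f1, 3))  # 0.3 0.769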

I hope this helps clarify the different types of confidence scores in YOLOv5. If you have any more questions or need further assistance, feel free to ask!

@kshitizkhanal7

@glenn-jocher Can you please tell me the default value of the bounding-box overlap (IoU) threshold used to determine a true positive prediction in YOLOv8?
