What is confidence threshold? #9679
The confidence describes how certain the model is that a prediction matches a given class, and the confidence threshold determines the minimum confidence required before something is labeled. Say you have a confidence threshold of 0.6: the model has to be at least 60% sure that the object you're trying to classify is that object before it will label it. https://support.ultimate.ai/hc/en-us/articles/7941166026258-Confidence-Thresholds here's an article about it if you want to learn more.
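As a minimal sketch in plain Python (the detections and their scores are invented for illustration):

```python
# Keep only detections whose confidence clears the threshold.
detections = [
    ("dog", 0.91),
    ("dog", 0.55),
    ("cat", 0.73),
]
conf_thres = 0.6

kept = [(label, conf) for label, conf in detections if conf >= conf_thres]
print(kept)  # [('dog', 0.91), ('cat', 0.73)]
```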
Is the confidence related to IoU? For example, as you say, conf 0.6 means the model will be at least 60% sure. But what IoU does it have?
"confidence threshold is the minimum score that the model will consider the prediction to be a true prediction (otherwise it will ignore this prediction entirely). IoU threshold is the minimum overlap between ground truth and prediction boxes for the prediction to be considered a true positive." These two values are used to calculate the mAP but are not directly related. IoU just like confidence is a value that is extracted from the results of your model. I hope this helped you understand better. Below I'll also link a stack overflow post that explains the concept pretty well in the accepted answer, this answer also relates to YOLOv5 so it should be applicable. |
@maxPrakken As per my understanding, 'conf-thres' and 'iou-thres' in YOLOv5 are both used in NMS to find the final 'one' predicted bounding box among the multiple bounding boxes detected for a particular object.

'conf-thres' drives the confidence-score thresholding step of NMS: every bounding box detected for a particular object whose confidence score is below the conf-thres value is suppressed or ignored. Out of the boxes remaining after this first step, the one with the maximum confidence score is chosen as the final predicted bounding box for that object.

The next step is IoU thresholding: any remaining box whose IoU with the box chosen in step 1 exceeds the iou-thres value is suppressed and ignored. This ideally leaves just one bounding box per object in an image, which becomes the predicted bounding box. That is the overall NMS procedure for obtaining the predicted bounding box.

The predicted bounding box is then matched against the ground-truth (label) bounding box to calculate IoU, precision, and other metrics. Please correct me if I'm wrong in my understanding of the two hyperparameters.
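A minimal sketch of those two NMS steps, using torchvision's stock NMS operator and made-up boxes (this mirrors the description above, not YOLOv5's exact non_max_suppression code):

```python
import torch
import torchvision

# Hypothetical raw detections: (x1, y1, x2, y2) boxes and confidence scores.
boxes = torch.tensor([[10., 10., 50., 50.],
                      [12., 11., 49., 52.],
                      [11., 12., 51., 50.],
                      [200., 200., 240., 240.]])
scores = torch.tensor([0.90, 0.75, 0.40, 0.85])

conf_thres, iou_thres = 0.6, 0.45

# Step 1: confidence thresholding - drop boxes below conf_thres.
keep = scores > conf_thres
boxes, scores = boxes[keep], scores[keep]

# Step 2: IoU thresholding - greedy NMS keeps the highest-scoring box
# and suppresses remaining boxes overlapping it by more than iou_thres.
kept = torchvision.ops.nms(boxes, scores, iou_thres)
print(boxes[kept])   # the two surviving, non-overlapping boxes
print(scores[kept])  # tensor([0.9000, 0.8500])
```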
Hello @glenn-jocher, can you answer this: is the iou-thres used for NMS also used here to decide whether the prediction is a true positive or not? @glenn-jocher could you also please confirm this?
@yesid-acm yes, you are absolutely correct. Both 'conf-thres' and 'iou-thres' are used in NMS to determine the final predicted bounding box from multiple bounding boxes for a specific object. The 'conf-thres' is used for the confidence score thresholding step, while the 'iou-thres' is used in the IoU thresholding step. The 'conf-thres' filters out bounding boxes with confidence scores below the set value, and the 'iou-thres' removes any remaining boxes that don't meet the IoU overlap criteria with the selected bounding box.

Regarding the decision between true or false when comparing predicted and ground-truth bounding boxes, the 'iou-thres' is indeed used to determine if the prediction is a true positive or not. This value helps in deciding if the overlap between the predicted and ground-truth bounding boxes is significant enough to consider the prediction a true positive.

Your understanding of these hyperparameters and their role in NMS is accurate. I hope this confirmation helps. Feel free to reach out if you have further questions!
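To make the true-positive test concrete, here is a hedged sketch of greedy matching at a single IoU threshold (an illustration of the idea, not the exact metric code in YOLOv5's val.py; all tensors are invented):

```python
import torch
from torchvision.ops import box_iou

def count_tp(pred_boxes, pred_scores, gt_boxes, iou_thres=0.5):
    """Greedy matching: each ground-truth box can satisfy at most one prediction."""
    if len(gt_boxes) == 0:
        return 0
    order = pred_scores.argsort(descending=True)  # match high-confidence first
    matched = torch.zeros(len(gt_boxes), dtype=torch.bool)
    tp = 0
    for i in order:
        ious = box_iou(pred_boxes[i:i + 1], gt_boxes)[0]  # IoU vs every GT box
        ious[matched] = 0.0                               # GTs already claimed
        best = ious.argmax()
        if ious[best] >= iou_thres:
            matched[best] = True
            tp += 1  # enough overlap with an unclaimed GT -> true positive
    return tp  # unmatched predictions count as false positives

preds = torch.tensor([[0., 0., 10., 10.], [20., 20., 30., 30.]])
scores = torch.tensor([0.9, 0.8])
gts = torch.tensor([[1., 1., 10., 10.]])
print(count_tp(preds, scores, gts))  # 1 (the second prediction is a false positive)
```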
@glenn-jocher, why is the param conf-thres in validation quite low (0.001 by default)?
Yes, you are correct about the default values in NMS. During validation, a low default 'conf-thres' (0.001) means that even detections with exceedingly low confidence are included in the validation process. Then, during NMS, this set of detections is subject to the IoU threshold (0.6 by default), which eliminates low-overlapping bounding boxes.

Regarding mAP, it refers to mean average precision, where the IoU value is integrated into the calculation. The IoU value is depicted after the '@' symbol, e.g., mAP@0.5. This signifies that an IoU of 0.5 is considered when evaluating average precision. When both 'iou=0.6' and 'mAP@0.5:0.95' are present in the validation results, it indicates that evaluations are performed at multiple IoU thresholds, from 0.5 to 0.95 in steps of 0.05, hence mAP@0.5, mAP@0.55, mAP@0.6, and so on.

I hope this clarifies your inquiries, and please let me know if you have further questions!
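For illustration, a small numpy sketch of how mAP@0.5:0.95 averages over those ten thresholds (the AP values are invented; real evaluation integrates full precision-recall curves rather than using single numbers):

```python
import numpy as np

# mAP@0.5:0.95 averages AP over ten IoU thresholds: 0.50, 0.55, ..., 0.95.
iou_thresholds = np.arange(0.5, 1.0, 0.05)

# Hypothetical AP measured at each threshold (AP drops as the IoU bar rises).
ap = np.array([0.72, 0.70, 0.67, 0.63, 0.58, 0.51, 0.42, 0.31, 0.18, 0.05])

print(ap[0])      # mAP@0.5
print(ap.mean())  # mAP@0.5:0.95
```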
@glenn-jocher Maybe it's too basic, but I think we need some more clarification about

```python
xc = prediction[..., 4] > conf_thres  # candidates
# ...
x[:, 5:] *= x[:, 4:5]  # conf = obj_conf * cls_conf
```

The objectness score is associated with the 4th column of the prediction, so the conf-threshold here filters out boxes that are less likely to contain an object. We have another conf_thres check later, applied to the combined confidence (obj_conf * cls_conf).

Maybe all of these names need some refactoring; I think they can cause confusion. And do you know how to choose the best objectness confidence threshold, for example to optimize the F1 score? Thank you!
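As a hedged illustration of those two filters (made-up numbers, simplified from the real non_max_suppression logic):

```python
import torch

conf_thres = 0.25

# Two hypothetical prediction rows: x, y, w, h, obj_conf, then 3 class scores.
x = torch.tensor([[320., 240., 50., 80., 0.90, 0.70, 0.20, 0.10],
                  [100., 100., 30., 30., 0.30, 0.60, 0.03, 0.02]])

# First filter: objectness only (the `xc` candidates mask above).
x = x[x[:, 4] > conf_thres]       # both rows pass (0.90 and 0.30 > 0.25)

# conf = obj_conf * cls_conf, as in the quoted snippet.
x[:, 5:] *= x[:, 4:5]

# Second filter: the best combined confidence must also clear conf_thres.
conf, cls = x[:, 5:].max(1, keepdim=True)
keep = conf.view(-1) > conf_thres  # row 2 drops out: 0.30 * 0.60 = 0.18
print(conf[keep], cls[keep])       # tensor([[0.6300]]) tensor([[0]])
```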
@dienhoa you've made a great observation, and I appreciate your attention to detail. Indeed, the term 'confidence' can refer to different concepts within the YOLO architecture, and it's important to distinguish between them: the objectness confidence (column 4 of the prediction) measures how likely a box is to contain any object at all, the class confidences (columns 5 onward) measure which class that object belongs to, and the final reported confidence is their product, obj_conf * cls_conf.

Choosing the best objectness confidence threshold to optimize the F1 score can be a bit of trial and error. It's often done by evaluating the model at various thresholds and selecting the one that yields the highest F1 score. This process can be automated by iterating over a range of thresholds and calculating the F1 score for each; see the sketch below.

I hope this helps clarify the different types of confidence scores in YOLOv5. If you have any more questions or need further assistance, feel free to ask!
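A hedged sketch of that threshold sweep (the precision/recall values would come from your own validation runs; here they are synthetic placeholders):

```python
import numpy as np

def f1(p, r):
    return 2 * p * r / (p + r + 1e-16)

# Candidate confidence thresholds and hypothetical metrics at each one
# (precision typically rises and recall falls as the threshold increases).
thresholds = np.arange(0.05, 0.95, 0.05)
precision = np.linspace(0.40, 0.98, len(thresholds))
recall = np.linspace(0.95, 0.20, len(thresholds))

scores = f1(precision, recall)
best = thresholds[scores.argmax()]
print(f"best conf threshold ~ {best:.2f} (F1 = {scores.max():.3f})")
```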
@glenn-jocher Can you please tell me the default overlap (IoU) threshold used to decide whether a prediction counts as a true positive in YOLOv8?
Question
Hello, I'm a newbie, so can I ask you guys about:

```python
parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold')
```

I want to understand this hyperparameter. Where can I find something to help me understand it?