Call of model.render() modifies the predictions #11810
Comments
Actually it seems that if I don't call model.render() the predictions stay the same. But in this example you don't seem to call it, so I'm a bit confused.
@TimotheeWrightFicha the render() call is only meant to draw the detections on the image and should not change the predictions. In the example you provided, it appears that the same image is run through the model in both cases. If you are experiencing inconsistent predictions for a specific image, it might be worth investigating any potential variations in the input image (e.g., resizing, normalization, cropping, etc.) between the two calls. Additionally, ensure that the model is configured identically for both runs. Please let me know if you have any further questions or concerns. We are here to help!
Thank you for the answer. In the following code I'm testing whether the two images are different after each operation.
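(The original code block was not preserved; below is a minimal sketch of such a check, a hypothetical reconstruction assuming a torch.hub YOLOv5 model, a contiguous RGB numpy array, and an illustrative image path.)

```python
import cv2
import numpy as np
import torch

# Hypothetical reconstruction of the check described above: compare the
# array against a pristine copy after each step to find the mutation.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

processed_image1 = cv2.cvtColor(cv2.imread('my_image.jpg'), cv2.COLOR_BGR2RGB)  # illustrative path
processed_image2 = processed_image1.copy()  # untouched reference copy

model_inference = model(processed_image1)
print(np.array_equal(processed_image1, processed_image2))  # expected True: inference alone does not mutate

image = model_inference.render()[0]
print(np.array_equal(processed_image1, processed_image2))  # expected False if render() draws boxes in place
```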
OUTPUT:
It is clear that model_inference.render()[0] makes an in-place change to processed_image1, which is a problem. I've tried to look a bit into the YOLOv5 code but I don't really understand what's going on.
@TimotheeWrightFicha thank you for bringing this to our attention. We appreciate your effort in providing the code and the corresponding output to help us understand the issue. Based on your code and the observed output, it appears that render() does modify processed_image1 in place. Allow me to investigate this behavior further to provide a more accurate explanation. I will review the relevant code in YOLOv5 and consult with the team to understand whether this is intended behavior or a potential issue. I will get back to you as soon as possible with more information and a proposed solution. Thank you for your patience, and we apologize for any inconvenience this may have caused. Please let me know if you have any additional details or questions related to this issue. We appreciate your contribution to the project.
Your fast and precise support is always appreciated @glenn-jocher! If you need context to debug this, I can be available :)
Hello @TimotheeWrightFicha, Thank you for bringing this issue to our attention and providing the code and output to help us understand the problem. We apologize for any inconvenience this may have caused. Based on the code you provided, it seems that render() annotates the original image array in place rather than working on a copy. We appreciate your willingness to provide further context or assistance in debugging this issue. Your contribution is valuable in helping us improve the YOLOv5 project. We will thoroughly investigate this behavior and provide a proper solution or clarification as soon as possible. We apologize for any delays and appreciate your patience. Please don't hesitate to reach out if you have any further questions or concerns. We are here to help! Thank you for your continued support. -Glenn Jocher
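(In the meantime, assuming the in-place annotation is the cause, a possible workaround — a sketch, not an official fix — is to give the model its own copy of the array:)

```python
# Hypothetical workaround: run inference on a copy so render()
# annotates the copy instead of the original array.
results = model(processed_image1.copy())
image_with_boxes = results.render()[0]
# processed_image1 is left unmodified and can be reused safely.
```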
@glenn-jocher I'd like to add a thought. If I get the predictions, they are expressed in the coordinates of the original image, which is good for my 720x1280 image, but the predictions will not match the 640x640 image that the model produces internally during preprocessing. Is that an issue or is it expected?
@TimotheeWrightFicha the predictions obtained from the model are given in the coordinate space of the original input image. This behavior is expected because YOLOv5 scales the detections back to the original image dimensions after inference. To obtain predictions that match the 640x640 resized image, you can rescale the predictions to the dimensions of the resized image using appropriate scaling factors. Let me know if you have any further questions or concerns. We're here to help! -Glenn Jocher
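(For illustration, a sketch of such a rescaling, assuming a plain stretch from the original 720x1280 frame to 640x640; the helper name boxes_to_resized is hypothetical. Note that if the 640x640 image came from YOLOv5's letterbox preprocessing, a single gain plus padding offsets, along the lines of utils.general.scale_boxes, would be needed instead.)

```python
import numpy as np

def boxes_to_resized(boxes_xyxy, orig_hw, target_hw=(640, 640)):
    """Hypothetical helper: map xyxy boxes from original-image coordinates
    to a plainly resized image (independent x/y stretch, no padding)."""
    oh, ow = orig_hw
    th, tw = target_hw
    out = np.asarray(boxes_xyxy, dtype=np.float32).copy()
    out[:, [0, 2]] *= tw / ow  # scale x1, x2
    out[:, [1, 3]] *= th / oh  # scale y1, y2
    return out

# e.g. map detections from a 720x1280 frame onto a 640x640 resize:
# boxes_640 = boxes_to_resized(results.xyxy[0][:, :4].cpu().numpy(), (720, 1280))
```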
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help. For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
Question
Hello @glenn-jocher,
I have a small question regarding the model.render() function.
Let's first define this class:
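(The original class definition did not survive formatting; what follows is a hypothetical sketch consistent with the rest of the thread. The class name Inference and its predict method are illustrative, assuming a torch.hub YOLOv5 model.)

```python
import cv2
import torch

class Inference:
    """Illustrative wrapper; the author's actual class was not preserved."""

    def __init__(self, weights='yolov5s'):
        self.model = torch.hub.load('ultralytics/yolov5', weights)

    def predict(self, image, render=False):
        model_inference = self.model(image)        # run inference
        preds = model_inference.pandas().xyxy[0]   # boxes in original-image coordinates
        if render:
            image = model_inference.render()[0]    # draw boxes on the stored array
        return preds, image
```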
And call it:
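(Likewise a hypothetical reconstruction of the call, reusing the sketch above; the image path is illustrative.)

```python
inference = Inference()
processed_image1 = cv2.cvtColor(cv2.imread('my_image.jpg'), cv2.COLOR_BGR2RGB)

preds_plain, _ = inference.predict(processed_image1, render=False)
preds_render, image = inference.predict(processed_image1, render=True)

print(preds_plain)   # predictions without render()
print(preds_render)  # predictions when render() is also called
```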
Output:
As you can see, the prediction points are a bit different, but moreover, one time we have two objects and the other time only one object.
Can you explain to me why adding
image = model_inference.render()[0]
changes the predictions? It seems that most of the time the outputs are the same, but this happens consistently for one image.
Thank you!
Additional
I sometimes want to call
model_inference.render()[0]
to have the bounding boxes drawn on the image for debugging purposes.
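(Under the same in-place assumption as above, a sketch of a debug render that leaves the original image untouched:)

```python
# Hypothetical debug pattern: render onto a copy, then save for inspection.
debug_results = model(processed_image1.copy())
debug_image = debug_results.render()[0]
cv2.imwrite('debug_boxes.jpg', cv2.cvtColor(debug_image, cv2.COLOR_RGB2BGR))
```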