Adding tracker after detection #1331
See https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading

import torch
from PIL import Image

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True).fuse().eval()  # yolov5s.pt
model = model.autoshape()  # for autoshaping of PIL/cv2/np inputs and NMS

# Images
img = Image.open('data/images/zidane.jpg')  # PIL image

# Inference
prediction = model(img)
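Editor's note: a minimal sketch (not part of the original reply) of how the Detections object returned above could be unpacked into the xmin/ymin/xmax/ymax values a tracker would consume, assuming the autoshaped model and PIL image from the snippet above:

# Unpack detections from the autoshaped model above
prediction.print()  # print a summary of detections to stdout
boxes = prediction.xyxy[0]  # tensor of shape (n, 6): xmin, ymin, xmax, ymax, confidence, class
for *xyxy, conf, cls in boxes.tolist():
    xmin, ymin, xmax, ymax = xyxy
    # hand (xmin, ymin, xmax, ymax, conf) to the tracker of your choice here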
@glenn-jocher Hi... I am trying to run inference my own way, using a TorchScript file trained on my custom dataset. How do I get xmin, xmax, ymin, ymax from this point? I have been following this (https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading#issuecomment-695146444) approach for inference. Please help me out. Thanks in advance.
@shashi7679 sorry for the confusion. PyTorch Hub YOLOv5 models treat torch tensor inputs as pass-throughs, so no pre- or post-processing is performed on them. If you want a results object, please pass numpy arrays, PIL images, cv2 images, file paths, etc.
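Editor's note: a short sketch (an illustration, not part of the original reply) of the behaviour described above, assuming a standard hub-loaded yolov5s model (autoshaped by default in recent releases):

import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# Image-like inputs (file path, PIL, numpy, cv2) are letterboxed and NMS is applied, returning a Detections object
results = model('data/images/zidane.jpg')
print(results.xyxy[0])  # per-image tensor: xmin, ymin, xmax, ymax, confidence, class

# A torch.Tensor input is assumed to be an already pre-processed batch and is passed straight to the network,
# so the raw network output comes back and no Detections object is built
raw = model(torch.zeros(1, 3, 640, 640))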
@glenn-jocher But when I pass an array/PIL/cv2 image, I get this error: RuntimeError: forward() Expected a value of type 'Tensor' for argument 'x' but instead found type 'JpegImageFile'.
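Editor's note: a hedged sketch based on the error message above, not something confirmed in this thread. A model loaded with torch.jit.load() only accepts torch.Tensor inputs, so images have to be converted to a normalized CHW tensor manually, and the raw output still needs NMS. The file names and the plain 640x640 resize are illustrative assumptions only:

import numpy as np
import torch
from PIL import Image

ts_model = torch.jit.load('best.torchscript.pt')  # hypothetical path to the exported TorchScript file
img = Image.open('data/images/zidane.jpg').convert('RGB').resize((640, 640))  # no letterboxing, illustration only
x = torch.from_numpy(np.asarray(img)).permute(2, 0, 1).float().unsqueeze(0) / 255.0  # 1x3x640x640, values in 0-1
out = ts_model(x)  # raw predictions (exact structure depends on export settings); NMS is still required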
@shashi7679 👋 hi, thanks for letting us know about this possible problem with YOLOv5 🚀. We've created a few short guidelines below to help users provide what we need in order to start investigating a possible problem.

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:
✅ Minimal – Use as little code as possible that still produces the same problem
✅ Complete – Provide all parts someone else needs to reproduce your problem in the question itself
✅ Reproducible – Test the code you're about to provide to make sure it reproduces the problem
For Ultralytics to provide assistance your code should also be:
✅ Current – Verify that your code is up-to-date with the current GitHub master, and if necessary git pull or git clone a new copy to ensure your problem has not already been resolved by previous commits
✅ Unmodified – Your problem must be reproducible without any modifications to the official YOLOv5 codebase; Ultralytics does not provide support for custom code
If you believe your problem meets all the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template with a minimum reproducible example to help us better understand and diagnose your problem. Thank you! 😃
❔Question
First of all, thanks for this repository; it has been extremely useful. I have reached the detection accuracy and performance I need for my specific case. Now my aim is to embed a tracking algorithm right after detection. How can I get the bounding box coordinates? I am planning to pass those coordinates to the tracker whenever a detection occurs.