Simplified Inference #1153
Conversation
This PR allows using YOLOv5 independently of this repo, with automatic handling of input formats. You can now pass in PIL objects directly, numpy array images, cv2 images, or torch inputs, and pass a batch either as a list or as a batched torch tensor. Options are:

```python
def forward(self, x, size=640, augment=False, profile=False):
    # supports inference from various sources. For height=720, width=1280, RGB images, example inputs are:
    #   opencv:   x = cv2.imread('image.jpg')[:, :, ::-1]  # HWC, BGR to RGB, x(720,1280,3)
    #   PIL:      x = Image.open('image.jpg')  # HWC, x(720,1280,3)
    #   numpy:    x = np.zeros((720, 1280, 3))  # HWC
    #   torch:    x = torch.zeros(16, 3, 720, 1280)  # BCHW
    #   multiple: x = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...]  # list of images
```

Input:

```python
import cv2
import numpy as np
from PIL import Image, ImageDraw
from models.experimental import attempt_load

# Model
# model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model = attempt_load('yolov5s.pt').autoshape()  # add autoshape wrapper

# Images
img1 = Image.open('inference/images/zidane.jpg')  # PIL
img2 = cv2.imread('inference/images/zidane.jpg')[:, :, ::-1]  # opencv (BGR to RGB)
img3 = np.zeros((640, 1280, 3))  # numpy
imgs = [img1, img2, img3]

# Inference
prediction = model(imgs, size=640)  # includes NMS

# Plot
for i, img in enumerate(imgs):
    print('\nImage %g/%g: %s ' % (i + 1, len(imgs), np.asarray(img).shape), end='')  # asarray: PIL images have no .shape
    img = Image.fromarray(img.astype(np.uint8)) if isinstance(img, np.ndarray) else img  # from np
    if prediction[i] is not None:
        for *box, conf, cls in prediction[i]:  # [xy1, xy2], confidence, class
            print('%s %.2f, ' % (model.names[int(cls)], conf), end='')  # label
            ImageDraw.Draw(img).rectangle(box, width=3)  # plot
    img.save('results%g.jpg' % i)  # save
```

Output:
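The `[:, :, ::-1]` slice used in the examples above is the entire BGR-to-RGB conversion: it simply reverses the last (channel) axis. A minimal numpy check:

```python
import numpy as np

# A tiny 1x2 "BGR" image with distinct per-channel values.
bgr = np.array([[[10, 20, 30], [40, 50, 60]]])  # shape (1, 2, 3), channels B, G, R

rgb = bgr[:, :, ::-1]  # reverse the channel axis: B,G,R -> R,G,B

print(rgb[0, 0].tolist())  # [30, 20, 10]
```

Note this produces a view with negative strides, which is why some pipelines follow it with `np.ascontiguousarray` before handing the array to torch.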
Torch Hub Example (this repo NOT required):

```python
import cv2
import numpy as np
import torch
from PIL import Image, ImageDraw

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True, force_reload=True).fuse().eval()
model = model.autoshape()  # add autoshape wrapper (IMPORTANT)

# Images
torch.hub.download_url_to_file('https://raw.githubusercontent.com/ultralytics/yolov5/master/inference/images/zidane.jpg', 'zidane.jpg')
img1 = Image.open('zidane.jpg')  # PIL
img2 = cv2.imread('zidane.jpg')[:, :, ::-1]  # opencv (BGR to RGB)
img3 = np.zeros((640, 1280, 3))  # numpy
imgs = [img1, img2, img3]  # batched inference

# Inference
prediction = model(imgs, size=640)  # includes NMS

# Plot
for i, img in enumerate(imgs):
    print('\nImage %g/%g: %s ' % (i + 1, len(imgs), np.asarray(img).shape), end='')  # asarray: PIL images have no .shape
    img = Image.fromarray(img.astype(np.uint8)) if isinstance(img, np.ndarray) else img  # from np
    if prediction[i] is not None:
        for *box, conf, cls in prediction[i]:  # [xy1, xy2], confidence, class
            print('class %g %.2f, ' % (cls, conf), end='')  # label
            ImageDraw.Draw(img).rectangle(box, width=3)  # plot
    img.save('results%g.jpg' % i)  # save
```
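Conceptually, the autoshape wrapper converts the heterogeneous inputs above into one float BCHW batch. The following is only an illustrative sketch of that idea, not the actual YOLOv5 implementation (which also handles PIL/torch inputs and aspect-preserving letterboxing with stride constraints):

```python
import numpy as np

def to_bchw(imgs, size=640):
    """Sketch: normalize a list of HWC uint8 numpy images into a single
    float32 BCHW batch, each image scaled to fit a square `size` canvas
    and zero-padded. Dependency-free nearest-neighbour resize."""
    batch = []
    for im in imgs:
        h, w = im.shape[:2]
        scale = size / max(h, w)  # fit the longer side to `size`
        nh, nw = int(round(h * scale)), int(round(w * scale))
        # nearest-neighbour resize via integer index arrays
        ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
        xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
        resized = im[ys][:, xs]
        canvas = np.zeros((size, size, 3), dtype=im.dtype)  # pad to square
        canvas[:nh, :nw] = resized
        batch.append(canvas.transpose(2, 0, 1))  # HWC -> CHW
    return np.stack(batch).astype(np.float32) / 255.0  # BCHW, values in 0-1

x = to_bchw([np.zeros((720, 1280, 3), dtype=np.uint8)], size=640)
print(x.shape)  # (1, 3, 640, 640)
```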
When I run this, I get the following error:
@dagap I'll inline the latest here in case anyone arrives here. Official page: https://pytorch.org/hub/ultralytics_yolov5/

Load From PyTorch Hub (NEW FORMAT)

To load YOLOv5 from PyTorch Hub for inference with PIL, OpenCV, Numpy or PyTorch inputs:

```python
import cv2
import torch
from PIL import Image

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True).fuse().autoshape()  # for PIL/cv2/np inputs and NMS

# Images
for f in ['zidane.jpg', 'bus.jpg']:  # download 2 images
    print(f'Downloading {f}...')
    torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/' + f, f)
img1 = Image.open('zidane.jpg')  # PIL image
img2 = cv2.imread('bus.jpg')[:, :, ::-1]  # OpenCV image (BGR to RGB)
imgs = [img1, img2]  # batched list of images

# Inference
results = model(imgs, size=640)  # includes NMS

# Results
results.print()  # print results to screen
results.show()  # display results
results.save()  # save as results1.jpg, results2.jpg... etc.

# Data
print('\n', results.xyxy[0])  # print img1 predictions
#          x1 (pixels)   y1 (pixels)   x2 (pixels)   y2 (pixels)   confidence   class
# tensor([[7.47613e+02, 4.01168e+01, 1.14978e+03, 7.12016e+02, 8.71210e-01, 0.00000e+00],
#         [1.17464e+02, 1.96875e+02, 1.00145e+03, 7.11802e+02, 8.08795e-01, 0.00000e+00],
#         [4.23969e+02, 4.30401e+02, 5.16833e+02, 7.20000e+02, 7.77376e-01, 2.70000e+01],
#         [9.81310e+02, 3.10712e+02, 1.03111e+03, 4.19273e+02, 2.86850e-01, 2.70000e+01]])
```
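Each row of `results.xyxy[0]` is `[x1, y1, x2, y2, confidence, class]`. A small sketch of turning such rows into readable labels; the class-name mapping here is an assumption for illustration (COCO index 0 = person, 27 = tie), not read from the model:

```python
# Format xyxy detection rows into readable strings.
names = {0: 'person', 27: 'tie'}  # assumed COCO class names for illustration

rows = [  # values taken from the tensor printed above (first and last rows)
    [747.613, 40.1168, 1149.78, 712.016, 0.87121, 0],
    [981.310, 310.712, 1031.11, 419.273, 0.28685, 27],
]

labels = []
for x1, y1, x2, y2, conf, cls in rows:
    if conf < 0.3:  # drop low-confidence detections
        continue
    labels.append(f"{names[int(cls)]} {conf:.2f} at ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f})")

print(labels)  # ['person 0.87 at (748,40)-(1150,712)']
```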
@glenn-jocher Hi, I have trained a YOLOv5 model using yolov5s.pt and saved the weights as best.pt.
@pk-1196 see PyTorch Hub tutorial for inference directions: YOLOv5 Tutorials
Hi there! When I run this code, I get an error:
Replacement for #1045
🛠️ PR Summary
Made with ❤️ by Ultralytics Actions
🌟 Summary
This PR introduces adjustments to confidence thresholds and the NMS (Non-Maximum Suppression) process, alongside code clean-up and modularization.
📊 Key Changes
- Adjusted confidence thresholds and the NMS process in detect.py.
- Removed the NMS import and an unnecessary model evaluation call in hubconf.py.
- Added an autoShape class in common.py for a robust model wrapper to support various input forms, and removed redundant imports.
- Integrated autoShape functionality within yolo.py and allowed the addition/removal of NMS from YOLO models.
- Minor updates in sotabench.py and datasets.py.
🎯 Purpose & Impact
- autoShape enables the model to handle different input types more seamlessly, enhancing user-friendliness and the potential for integration into different pipelines without requiring pre-formatting by the user.
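Handling different input types without pre-formatting amounts to dispatching on the input's type. A minimal sketch of that idea (not the actual autoShape code, which also accepts torch tensors and does resizing):

```python
import numpy as np

def as_numpy_hwc(x):
    """Sketch of autoShape-style input dispatch: accept a PIL image,
    a numpy HWC array, or a list of either, and return a list of
    numpy HWC arrays ready for batching."""
    if isinstance(x, (list, tuple)):  # batch supplied as a list
        return [as_numpy_hwc(im)[0] for im in x]
    if isinstance(x, np.ndarray):  # numpy HWC image
        return [x]
    if hasattr(x, 'convert'):  # duck-typed PIL.Image
        return [np.asarray(x.convert('RGB'))]
    raise TypeError(f'unsupported input type: {type(x).__name__}')

imgs = as_numpy_hwc([np.zeros((720, 1280, 3), dtype=np.uint8)])
print(len(imgs), imgs[0].shape)  # 1 (720, 1280, 3)
```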