Load YOLOv5 from PyTorch Hub ⭐ #36

Open
glenn-jocher opened this issue Jun 11, 2020 · 317 comments · Fixed by #1153

Labels: documentation (Improvements or additions to documentation) · enhancement (New feature or request)

Comments

@glenn-jocher
Member

glenn-jocher commented Jun 11, 2020

📚 This guide explains how to load YOLOv5 🚀 from PyTorch Hub https://pytorch.org/hub/ultralytics_yolov5. See YOLOv5 Docs for additional details. UPDATED 26 March 2023.

Before You Start

Install requirements.txt in a Python>=3.7.0 environment, including PyTorch>=1.7. Models and datasets download automatically from the latest YOLOv5 release.

pip install -r https://raw.githubusercontent.com/ultralytics/yolov5/master/requirements.txt

💡 ProTip: Cloning https://github.com/ultralytics/yolov5 is not required 😃

Load YOLOv5 with PyTorch Hub

Simple Example

This example loads a pretrained YOLOv5s model from PyTorch Hub as model and passes an image for inference. 'yolov5s' is the lightest and fastest YOLOv5 model. For details on all available models please see the README.

import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Image
im = 'https://ultralytics.com/images/zidane.jpg'

# Inference
results = model(im)

results.pandas().xyxy[0]
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 1  433.50  433.50   517.5  714.5    0.687988     27     tie
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie

Detailed Example

This example shows batched inference with PIL and OpenCV image sources. results can be printed to the console, saved to runs/hub, shown on screen in supported environments, and returned as tensors or pandas DataFrames.

import cv2
import torch
from PIL import Image

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Images
for f in 'zidane.jpg', 'bus.jpg':
    torch.hub.download_url_to_file('https://ultralytics.com/images/' + f, f)  # download 2 images
im1 = Image.open('zidane.jpg')  # PIL image
im2 = cv2.imread('bus.jpg')[..., ::-1]  # OpenCV image (BGR to RGB)

# Inference
results = model([im1, im2], size=640) # batch of images

# Results
results.print()  
results.save()  # or .show()

results.xyxy[0]  # im1 predictions (tensor)
results.pandas().xyxy[0]  # im1 predictions (pandas)
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 1  433.50  433.50   517.5  714.5    0.687988     27     tie
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie

For all inference options see YOLOv5 AutoShape() forward method:

yolov5/models/common.py

Lines 243 to 252 in 30e4c4f

def forward(self, imgs, size=640, augment=False, profile=False):
    # Inference from various sources. For height=640, width=1280, RGB images example inputs are:
    #   filename:  imgs = 'data/images/zidane.jpg'
    #   URI:            = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/zidane.jpg'
    #   OpenCV:         = cv2.imread('image.jpg')[:,:,::-1]  # HWC BGR to RGB x(640,1280,3)
    #   PIL:            = Image.open('image.jpg')  # HWC x(640,1280,3)
    #   numpy:          = np.zeros((640,1280,3))  # HWC
    #   torch:          = torch.zeros(16,3,320,640)  # BCHW (scaled to size=640, 0-1 values)
    #   multiple:       = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...]  # list of images

Inference Settings

YOLOv5 models contain various inference attributes, such as the confidence threshold and IoU threshold, which can be set by:

model.conf = 0.25  # NMS confidence threshold
model.iou = 0.45  # NMS IoU threshold
model.agnostic = False  # NMS class-agnostic
model.multi_label = False  # NMS multiple labels per box
model.classes = None  # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs
model.max_det = 1000  # maximum number of detections per image
model.amp = False  # Automatic Mixed Precision (AMP) inference

results = model(im, size=320)  # custom inference size
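
As a usage illustration, here is a minimal sketch (assuming the model and im variables from the examples above) that restricts detections to COCO person boxes and raises the confidence threshold before running inference. The attribute settings persist across subsequent calls until changed:

model.conf = 0.40    # keep only detections with confidence >= 0.40
model.classes = [0]  # COCO class 0 = 'person'; ignore all other classes
model.max_det = 50   # cap the number of returned detections

results = model(im, size=640)
print(results.pandas().xyxy[0])  # only 'person' rows should remain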

Device

Models can be transferred to any device after creation:

model.cpu()  # CPU
model.cuda()  # GPU
model.to(device)  # i.e. device=torch.device(0)

Models can also be created directly on any device:

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', device='cpu')  # load on CPU

💡 ProTip: Input images are automatically transferred to the correct model device before inference.

Silence Outputs

Models can be loaded silently with _verbose=False:

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', _verbose=False)  # load silently

Input Channels

To load a pretrained YOLOv5s model with 4 input channels rather than the default 3:

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', channels=4)

In this case the model will be composed of pretrained weights except for the very first input layer, which is no longer the same shape as the pretrained input layer. The input layer will remain initialized by random weights.

Number of Classes

To load a pretrained YOLOv5s model with 10 output classes rather than the default 80:

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', classes=10)

In this case the model will be composed of pretrained weights except for the output layers, which are no longer the same shape as the pretrained output layers. The output layers will remain initialized by random weights.

Force Reload

If you run into problems with the above steps, setting force_reload=True may help by discarding the existing cache and forcing a fresh download of the latest YOLOv5 version from PyTorch Hub.

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)  # force reload

Screenshot Inference

To run inference on your desktop screen:

import torch
from PIL import ImageGrab

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Image
im = ImageGrab.grab()  # take a screenshot

# Inference
results = model(im)
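
To monitor the screen continuously rather than grabbing a single frame, the same call can be wrapped in a loop. This is only a rough sketch, assuming ImageGrab is available on your platform (Windows/macOS) and that polling about once per second is acceptable:

import time

import torch
from PIL import ImageGrab

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

while True:
    im = ImageGrab.grab()  # full-screen screenshot as a PIL image
    results = model(im)
    df = results.pandas().xyxy[0]  # detections for this frame
    print(f'{len(df)} objects detected')
    time.sleep(1)  # poll roughly once per second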

Multi-GPU Inference

YOLOv5 models can be loaded to multiple GPUs in parallel with threaded inference:

import torch
import threading

def run(model, im):
    results = model(im)
    results.save()

# Models
model0 = torch.hub.load('ultralytics/yolov5', 'yolov5s', device=0)
model1 = torch.hub.load('ultralytics/yolov5', 'yolov5s', device=1)

# Inference
threading.Thread(target=run, args=[model0, 'https://ultralytics.com/images/zidane.jpg'], daemon=True).start()
threading.Thread(target=run, args=[model1, 'https://ultralytics.com/images/bus.jpg'], daemon=True).start()
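
Note that the threads above are started as daemons and never joined, so a short script may exit before inference completes. A sketch of a variant that keeps the thread handles and waits for both models to finish (reusing run, model0 and model1 from above):

threads = [
    threading.Thread(target=run, args=[model0, 'https://ultralytics.com/images/zidane.jpg']),
    threading.Thread(target=run, args=[model1, 'https://ultralytics.com/images/bus.jpg']),
]
for t in threads:
    t.start()
for t in threads:
    t.join()  # block until both GPUs have finished and results are saved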

Training

To load a YOLOv5 model for training rather than inference, set autoshape=False. To load a model with randomly initialized weights (to train from scratch) use pretrained=False. You must provide your own training script in this case. Alternatively see our YOLOv5 Train Custom Data Tutorial for model training.

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)  # load pretrained
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False)  # load scratch
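
For reference, a minimal sketch of what a raw (non-AutoShape) model expects: with autoshape=False the model takes a normalized BCHW tensor rather than paths or PIL images, and no NMS is applied to the output. The actual loss computation and training loop live in train.py and are not reproduced here:

import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)  # raw model, no pre/post-processing
im = torch.zeros(1, 3, 640, 640)  # BCHW float tensor with values in [0, 1]

model.eval()
with torch.no_grad():
    pred = model(im)  # raw predictions, no NMS applied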

Base64 Results

For use with API services. See #2291 and Flask REST API example for details.

import base64
from io import BytesIO

from PIL import Image

results = model(im)  # inference

results.ims  # list of original images (as np arrays) passed to model for inference
results.render()  # updates results.ims with boxes and labels
for im in results.ims:
    buffered = BytesIO()
    im_base64 = Image.fromarray(im)
    im_base64.save(buffered, format="JPEG")
    print(base64.b64encode(buffered.getvalue()).decode('utf-8'))  # base64 encoded image with results

Cropped Results

Results can be returned and saved as detection crops:

results = model(im)  # inference
crops = results.crop(save=True)  # cropped detections dictionary
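
In recent YOLOv5 versions each element of the returned list is a dictionary holding the box coordinates, confidence, class, label string and the cropped image array. A minimal sketch of iterating over it (key names assumed from the current Detections.crop() implementation):

results = model(im)  # inference
crops = results.crop(save=False)  # list of dicts, nothing written to disk

for crop in crops:
    print(crop['label'])  # e.g. 'person 0.87' (class name and confidence)
    patch = crop['im']    # cropped region as a numpy array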

Pandas Results

Results can be returned as Pandas DataFrames:

results = model(im)  # inference
results.pandas().xyxy[0]  # Pandas DataFrame
Pandas Output:
print(results.pandas().xyxy[0])
#      xmin    ymin    xmax   ymax  confidence  class    name
# 0  749.50   43.50  1148.0  704.5    0.874023      0  person
# 1  433.50  433.50   517.5  714.5    0.687988     27     tie
# 2  114.75  195.75  1095.0  708.0    0.624512      0  person
# 3  986.00  304.00  1028.0  420.0    0.286865     27     tie

Sorted Results

Results can be sorted by column, e.g. to sort license plate digit detections left-to-right (x-axis):

results = model(im)  # inference
results.pandas().xyxy[0].sort_values('xmin')  # sorted left-right
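
Continuing the license plate example, once the rows are sorted by xmin the class names can be joined into a single string. This sketch assumes a custom model whose class names are the individual plate characters:

df = results.pandas().xyxy[0].sort_values('xmin')  # left-to-right
plate = ''.join(df['name'].tolist())  # e.g. 'AB123CD' if each class is one character
print(plate)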

JSON Results

Results can be returned in JSON format once converted to .pandas() dataframes using the .to_json() method. The JSON format can be modified using the orient argument. See pandas .to_json() documentation for details.

results = model(ims)  # inference
results.pandas().xyxy[0].to_json(orient="records")  # JSON img1 predictions
JSON Output:
[
{"xmin":749.5,"ymin":43.5,"xmax":1148.0,"ymax":704.5,"confidence":0.8740234375,"class":0,"name":"person"},
{"xmin":433.5,"ymin":433.5,"xmax":517.5,"ymax":714.5,"confidence":0.6879882812,"class":27,"name":"tie"},
{"xmin":115.25,"ymin":195.75,"xmax":1096.0,"ymax":708.0,"confidence":0.6254882812,"class":0,"name":"person"},
{"xmin":986.0,"ymin":304.0,"xmax":1028.0,"ymax":420.0,"confidence":0.2873535156,"class":27,"name":"tie"}
]
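
If you need native Python objects rather than a JSON string, the output can be parsed straight back with the standard library; this small illustration is not YOLOv5-specific:

import json

records = json.loads(results.pandas().xyxy[0].to_json(orient="records"))
for r in records:
    print(r['name'], round(r['confidence'], 2), [r['xmin'], r['ymin'], r['xmax'], r['ymax']])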

Custom Models

This example loads a custom 20-class VOC-trained YOLOv5s model 'best.pt' with PyTorch Hub.

model = torch.hub.load('ultralytics/yolov5', 'custom', path='path/to/best.pt')  # local model
model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', source='local')  # local repo

TensorRT, ONNX and OpenVINO Models

PyTorch Hub supports inference on most YOLOv5 export formats, including custom trained models. See TFLite, ONNX, CoreML, TensorRT Export tutorial for details on exporting models.

💡 ProTip: TensorRT may be up to 2-5X faster than PyTorch on GPU benchmarks
💡 ProTip: ONNX and OpenVINO may be up to 2-3X faster than PyTorch on CPU benchmarks

model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5s.pt')  # PyTorch
                                                            'yolov5s.torchscript')  # TorchScript
                                                            'yolov5s.onnx')  # ONNX
                                                            'yolov5s_openvino_model/')  # OpenVINO
                                                            'yolov5s.engine')  # TensorRT
                                                            'yolov5s.mlmodel')  # CoreML (macOS-only)
                                                            'yolov5s.tflite')  # TFLite
                                                            'yolov5s_paddle_model/')  # PaddlePaddle
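
The exported weight files referenced above must be created first with export.py from the YOLOv5 repository. A minimal sketch of exporting to ONNX and then loading the result through PyTorch Hub (file names are illustrative, and onnx/onnxruntime must be installed):

# 1. Export the weights (run from a clone of the YOLOv5 repo):
#      python export.py --weights yolov5s.pt --include onnx
# 2. Load the exported file exactly like a custom PyTorch model:
import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5s.onnx')
results = model('https://ultralytics.com/images/zidane.jpg')
results.print()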

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@synked16

synked16 commented Aug 5, 2020

@glenn-jocher
So can I fit a model with it?

@glenn-jocher glenn-jocher unpinned this issue Aug 19, 2020
@MohamedAliRashad

Can someone use the training script with this configuration?

@rlalpha

rlalpha commented Sep 18, 2020

Can I ask about the meaning of the output?
How can I reconstruct box prediction results from the output?
Thanks

@glenn-jocher
Member Author

@rlalpha I've updated pytorch hub functionality now in c4cb785 to automatically append an NMS module to the model when pretrained=True is requested. Anyone using YOLOv5 pretrained pytorch hub models must remove this last layer prior to training now:
model.model = model.model[:-1]

Anyone using YOLOv5 pretrained pytorch hub models directly for inference can now replicate the following code to use YOLOv5 without cloning the ultralytics/yolov5 repository. In this example you see the pytorch hub model detect 2 people (class 0) and 1 tie (class 27) in zidane.jpg. Note there is no repo cloned in the workspace. Also note that ideally all inputs to the model should be letterboxed to the nearest 32 multiple. The second best option is to stretch the image up to the next largest 32-multiple as I've done here with PIL resize.
[Screenshot: PyTorch Hub model detecting 2 persons and 1 tie in zidane.jpg]
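
For readers following this older snippet: current AutoShape models handle resizing internally, but the idea described above can be sketched manually with PIL, stretching each side up to the next multiple of 32 (purely illustrative):

import math
from PIL import Image

im = Image.open('zidane.jpg')
w, h = (math.ceil(x / 32) * 32 for x in im.size)  # round each side up to a multiple of 32
im = im.resize((w, h))  # stretched, stride-32-friendly input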

@rlalpha

rlalpha commented Sep 20, 2020

I got how to do it now. Thank you for the rapid reply.

@glenn-jocher glenn-jocher linked a pull request Oct 18, 2020 that will close this issue
@glenn-jocher
Member Author

glenn-jocher commented Oct 18, 2020

@rlalpha @justAyaan @MohamedAliRashad this PyTorch Hub tutorial is now updated to reflect the simplified inference improvements in PR #1153. It's very simple now to load any YOLOv5 model from PyTorch Hub and use it directly for inference on PIL, OpenCV, NumPy or PyTorch inputs, including for batched inference. Reshaping and NMS are handled automatically. An example script is shown in the tutorial above.

@pfeatherstone

@glenn-jocher calling model = torch.hub.load('ultralytics/yolov5', 'yolov5l', pretrained=True) throws error:

Using cache found in /home/pf/.cache/torch/hub/ultralytics_yolov5_master
Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/pf/.cache/torch/hub/ultralytics_yolov5_master/models/yolo.py", line 15, in <module>
    from models.common import Conv, Bottleneck, SPP, DWConv, Focus, BottleneckCSP, Concat, NMS, autoShape
  File "/home/pf/.cache/torch/hub/ultralytics_yolov5_master/models/common.py", line 8, in <module>
    from utils.datasets import letterbox
ModuleNotFoundError: No module named 'utils.datasets'; 'utils' is not a package

Process finished with exit code 1

@glenn-jocher
Member Author

@pfeatherstone thanks for the feedback! Can you try with force_reload=True? Without it the cached repo is used, which may be out of date.

import torch
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True, force_reload=True)

@pfeatherstone

Still doesn't work. I get the following errors:

Downloading: "https://github.com/ultralytics/yolov5/archive/master.zip" to /home/pf/.cache/torch/hub/master.zip
Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/pf/.cache/torch/hub/ultralytics_yolov5_master/models/yolo.py", line 15, in <module>
    from models.common import Conv, Bottleneck, SPP, DWConv, Focus, BottleneckCSP, Concat, NMS, autoShape
  File "/home/pf/.cache/torch/hub/ultralytics_yolov5_master/models/common.py", line 8, in <module>
    from utils.datasets import letterbox
ModuleNotFoundError: No module named 'utils.datasets'; 'utils' is not a package
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/local/pycharm-2020.2/plugins/python/helpers/pydev/pydevd.py", line 1785, in stoptrace
    debugger.exiting()
  File "/usr/local/pycharm-2020.2/plugins/python/helpers/pydev/pydevd.py", line 1471, in exiting
    sys.stdout.flush()
ValueError: I/O operation on closed file.

Process finished with exit code 1

@glenn-jocher
Member Author

glenn-jocher commented Oct 20, 2020

@pfeatherstone I've raised a new bug report in #1181 for your observation. This typically indicates a pip package called utils is installed in your environment; you should pip uninstall utils.

@suleymanVR

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', _verbose=False)
Can we use it like this for yolov8: model = torch.hub.load('ultralytics/ultralytics', 'yolov8n', path='best.pt'))

@Prakhar2295

Can someone please look into this YOLOv5 issue? It shows an 'ultralytics' module not found error on Windows even though the module is already installed, while the same code works fine in Colab.
Thanks

@Prakhar2295

[Error screenshots attached]

@421psh

421psh commented May 17, 2023

Same issue as @Prakhar2295. Hope for a fix soon.

@Prakhar2295

The above issue is still open

@Prakhar2295

[Screenshots attached]

@Prakhar2295

Prakhar2295 commented May 17, 2023

@421psh

Same issue as @Prakhar2295. Hope for a fix soon.

Please check, I have resolved this issue.
Let me know if you have any doubts.

@TheAnswer96

How did you solve the problem?
I have the same issue; in particular, once I forced a download of the latest YOLOv5 version, everything became a mess. :(

@tallhafaruqi

I want to detect windows and walls using object detection. Can anyone help me with this?

@StephenZhao1

I want to detect windows and walls using object detection. Can anyone help me with this?

Please post some demo images.

@tallhafaruqi

can you share your email please?

@StephenZhao1

can you share your email please?

[email protected], too busy to answer quickly

@vinodbaste

vinodbaste commented Nov 16, 2023

Traceback (most recent call last):
  File "yolov5\hubconf.py", line 49, in _create
    model = DetectMultiBackend(path, device=device, fuse=autoshape)  # detection model
  File "D:\projects\yolov5\models\common.py", line 345, in __init__
    model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse)
  File "D:\projects\yolov5\models\experimental.py", line 79, in attempt_load
    ckpt = torch.load(attempt_download(w), map_location='cpu')  # load
  File "C:\Users\AppData\Local\miniconda3\envs\env_3_9\lib\site-packages\torch\serialization.py", line 789, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\AppData\Local\miniconda3\envs\env_3_9\lib\site-packages\torch\serialization.py", line 1131, in _load
    result = unpickler.load()
  File "C:\Users\AppData\Local\miniconda3\envs\env_3_9\lib\pathlib.py", line 1084, in __new__
    raise NotImplementedError("cannot instantiate %r on your system"
NotImplementedError: cannot instantiate 'PosixPath' on your system

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "yolov5\hubconf.py", line 60, in _create
    model = attempt_load(path, device=device, fuse=False)  # arbitrary model
  File "D:\projects\yolov5\models\experimental.py", line 79, in attempt_load
    ckpt = torch.load(attempt_download(w), map_location='cpu')  # load
  File "C:\Users\AppData\Local\miniconda3\envs\env_3_9\lib\site-packages\torch\serialization.py", line 789, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\AppData\Local\miniconda3\envs\env_3_9\lib\site-packages\torch\serialization.py", line 1131, in _load
    result = unpickler.load()
  File "C:\Users\AppData\Local\miniconda3\envs\env_3_9\lib\pathlib.py", line 1084, in __new__
    raise NotImplementedError("cannot instantiate %r on your system"
NotImplementedError: cannot instantiate 'PosixPath' on your system

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\projects\main.py", line 149, in <module>
    yolo_classifier = torch.hub.load(
  File "C:\Users\vAppData\Local\miniconda3\envs\env_3_9\lib\site-packages\torch\hub.py", line 542, in load
    model = _load_local(repo_or_dir, model, *args, **kwargs)
  File "C:\Users\AppData\Local\miniconda3\envs\env_3_9\lib\site-packages\torch\hub.py", line 572, in _load_local
    model = entry(*args, **kwargs)
  File "yolov5\hubconf.py", line 83, in custom
    return _create(path, autoshape=autoshape, verbose=_verbose, device=device)
  File "yolov5\hubconf.py", line 78, in _create
    raise Exception(s) from e
Exception: cannot instantiate 'PosixPath' on your system. Cache may be out of date, try force_reload=True or see https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading for help.

Here is the fix:

import pathlib
temp = pathlib.PosixPath
pathlib.PosixPath = pathlib.WindowsPath
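
If you use this workaround, note that it patches pathlib globally for the whole process. A slightly safer sketch restores the original class once the checkpoint is loaded (same assumption: a Linux-trained checkpoint being loaded on Windows; the weights path is illustrative):

import pathlib

import torch

temp = pathlib.PosixPath
pathlib.PosixPath = pathlib.WindowsPath  # let Linux-pickled paths unpickle on Windows
try:
    model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')
finally:
    pathlib.PosixPath = temp  # undo the patch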

@usmanyaqoob49

usmanyaqoob49 commented Jan 6, 2024

Yes, that's the correct solution; add this in the file where you are loading the model.

@usmanyaqoob49

import pathlib
temp = pathlib.PosixPath
pathlib.PosixPath = pathlib.WindowsPath

The above solution is not working @vinodbaste

@veera12356

@vinodbaste the recommended solution is not working.

Can anyone help me?

@pinnintipraneethkumar

@glenn-jocher,
Hi, I exported the OpenVINO model with export.py, which created a folder named "best_openvino_model" containing the model metadata. But when I change the model folder name to "openvino_model", torch.hub.load throws this error:

Exception: [Errno 13] Permission denied: 'D:\update_infer\yolov5\best_openvino_model_16\openvino_model'. Cache may be out of date, try force_reload=True or see https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading for help.

Note: the openvino_model folder contains all the same metadata as the best_openvino_model folder.

@tejasri19

tejasri19 commented Aug 15, 2024

results = model("path to image")
boxes = []
scores = []
for box in results[0].boxes:
    cords = box.xyxy[0].tolist()
    x1, y1, x2, y2 = [round(x) for x in cords]
    score = box.conf[0].item()  # Assuming the confidence score is available in box.conf
    cls = results[0].names[box.cls[0].item()]
    boxes.append([x1, y1, x2, y2, score, cls])
    scores.append(score)

print("Boxes:", boxes)
print("Scores:", scores
```)

This is working.
