integrations/tensorrt/ #8545
Replies: 19 comments 52 replies
-
Can you share any guides on installing TensorRT on Ubuntu? I saw one of the comments suggesting to install from here:- The only issue on Ubuntu is that I can't install PyCUDA on 20.04 and later versions, so would it run fine without it?
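For what it's worth, PyCUDA shouldn't be required for the Ultralytics export path, which goes through the TensorRT Python bindings and PyTorch rather than PyCUDA. A minimal sanity check, assuming TensorRT was installed from the pip wheel (`pip install tensorrt`, Linux only):

```python
# Verify the TensorRT Python bindings import cleanly; no PyCUDA involved.
import tensorrt as trt

print(trt.__version__)
```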
-
Hi, I have a problem when I try to convert a PyTorch-format model to TensorRT format: Could anyone tell me how to fix this?
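For reference, the documented export call looks like this (a minimal sketch; `yolov8n.pt` stands in for any checkpoint):

```python
from ultralytics import YOLO

# Load a PyTorch checkpoint and export it to a TensorRT engine.
model = YOLO("yolov8n.pt")
model.export(format="engine", device=0)  # writes yolov8n.engine alongside the .pt
```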
-
Hi! Exporting a custom model trained with YOLOv8 to TensorRT:
yolo export model="../../weights/M04-best_V2.pt" format=engine half=True device=0
TensorRT: export failure ❌ 6.3s: 'tensorrt_bindings.tensorrt.IBuilderConfig' object has no attribute 'max_workspace_size'
Any idea? info:
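That attribute was removed in TensorRT 10: `IBuilderConfig.max_workspace_size` no longer exists and was replaced by the memory-pool API, so export code written for TensorRT 8.x fails against the newer bindings (upgrading `ultralytics` typically resolves this). A sketch of the underlying API change:

```python
import tensorrt as trt

builder = trt.Builder(trt.Logger(trt.Logger.WARNING))
config = builder.create_builder_config()

# TensorRT 8.x style (removed in TensorRT 10):
#   config.max_workspace_size = 2 << 30
# TensorRT 10 replacement:
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 2 << 30)  # 2 GiB
```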
-
Good day, I'm working on a Jetson Orin. Everything is set up, but I have an issue. Any help with this?
-
Hi, I found an error that I hadn't seen before. It happens when I try to export a YOLOv10 model (for example, the pretrained model auto-downloaded with the "yolov10n.pt" param) to TensorRT INT8. I used the code example on this page, so I don't know how it could be resolved. The error message:
The code executed is:

```python
from ultralytics import YOLO

model = YOLO("yolov10n.pt")
out = model.export(format="engine", imgsz=640, dynamic=True, verbose=False,
                   batch=8, workspace=2, int8=True, data="data.yaml")
```
-
What kind of quantization is used in the example? Is it just implicit quantization, or a more controlled PTQ?
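For context, the INT8 example on the page is post-training quantization: TensorRT calibrates using images supplied via the `data` argument rather than relying on implicit quantization. A sketch of the documented call (the dataset name is illustrative):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# int8=True enables TensorRT PTQ; `data` points at the images used for calibration.
model.export(format="engine", int8=True, data="coco8.yaml")
```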
-
Hello! I have converted my trained .pt model to .engine, but when I try inference with my own code, it says it cannot get class names from `names = model.model.names`.
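When an Ultralytics-exported engine is loaded back through `YOLO`, the class names come from metadata embedded at export time and are exposed as `model.names`, not `model.model.names` as with a `.pt` checkpoint. A minimal sketch (file names are the poster's):

```python
from ultralytics import YOLO

model = YOLO("best.engine")   # engine exported by Ultralytics
results = model("image.jpg")
print(model.names)            # class names read from the embedded metadata
```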
-
Hi,
-
Is there any solution to reuse the .cache file when deploying in many places? (when using TensorRT INT8)
-
Hi. Is there a difference between creating an engine file with `yolo export model=mymodel.onnx format=engine` versus `trtexec --onnx=mymodel.onnx --saveEngine=mymodel.engine`? Are they interchangeable? Also, I'd like to know which TensorRT version (e.g., 8.6.2) the `yolo export` command uses. Thanks
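On the version question: `yolo export` builds the engine with whatever `tensorrt` package is installed in the current environment, so it can be checked directly (a sketch):

```python
import tensorrt as trt

# This is the version yolo export will build the engine against.
print(trt.__version__)
```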
-
So I have set up TensorRT (10.4) on my AWS instance, and it works well for all the pretrained YOLO models that I have tried, but the moment I do transfer learning and try to use any of the other models, it fails and gives me this error:
-
Hello,
-
I have followed all your instructions, but when exporting to TensorRT I still get an error: ONNX: export success ✅ 1.4s, saved as 'yolov8n.onnx' (12.2 MB). I am using a Jetson AGX Orin with JetPack 6.1 (r36.4), CUDA 12.6, and TensorRT 10.5. I have already installed python3-libnvinfer and python3-libnvinfer-dev. I have some doubts whether Ultralytics supports up-to-date Jetson packages.
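A quick sanity check that the Python bindings the export will pick up actually match the JetPack install (a sketch; the versions are the poster's):

```python
import tensorrt as trt
import torch

print("TensorRT:", trt.__version__)  # expect 10.5.x on this JetPack
print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```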
-
Hello, I have a problem when I convert my 'best.pt' file into 'best.engine'; when running the code I get the following error. Can you tell me why this happens, or what the solution is?
Speed: 5.5ms preprocess, 6.9ms inference, 137.0ms postprocess per image at shape (1, 3, 640, 640)
-
Hello Team, I'm facing this error; can you please let me know what the exact problem is here? I'm seeing:
WARNING PyTorch: starting from 'best_det.pt' with input shape (16, 3, 640, 640) BCHW and output shape(s) (16, 5, 8400) (83.6 MB)
ONNX: starting export with onnx 1.16.2 opset 17...
-
When exporting a
-
Reading engine from file /content/engine/yolo11x_fp16.engine

```python
import numpy as np  # needed for the type hints below

def load_and_inference_fastreid(fastreid_batch_images, engine, fastreid_inputs: np.ndarray,
                                fastreid_outputs: np.ndarray, bindings, stream):
    ...

def _do_inference_base(inputs, outputs, stream, execute_async_func):
    ...

def do_inference(context, engine, bindings, inputs, outputs, stream):
    ...

# Define target classes
target_classes = ['car', 'bus', 'truck', 'motorcycle']

def main():
    ...
```
-
We are using the .engine model with live camera feeds, providing a batch of images with one frame from each camera. However, when one or more cameras fail, the .engine refuses to process the batch due to the discrepancy in batch sizes. Is it possible to process dynamic batches to handle such failures?
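An engine built with a fixed batch size will reject smaller batches; exporting with dynamic shapes avoids that. A sketch of the documented options (the checkpoint name is illustrative):

```python
from ultralytics import YOLO

model = YOLO("best.pt")
# dynamic=True builds an engine with dynamic input shapes; `batch` sets the
# maximum batch size, and smaller batches can then be run at inference time.
model.export(format="engine", dynamic=True, batch=8)
```

An application-level alternative is to pad a short batch (e.g. repeat the last good frame) so the input shape stays constant.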
-
Hey there, I have an error when converting a YOLOv8m model to TensorRT with INT8 precision. I cannot find where the error is happening; could you please check it?
[11/30/2024-14:35:17] [TRT] [V] --------------- Timing Runner: /model.7/conv/Conv (CudnnConvolution)
-
integrations/tensorrt/
Discover the power and flexibility of exporting Ultralytics YOLOv8 models to TensorRT format for enhanced performance and efficiency on NVIDIA GPUs.
https://docs.ultralytics.com/integrations/tensorrt/