modes/train/ #8075
Replies: 185 comments 502 replies
-
How do I print IoU and F-score with the training results?
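Not an official answer, but a sketch of how one might derive these numbers: `model.val()` in Ultralytics returns a metrics object (e.g. `metrics.box.mp` / `metrics.box.mr` for mean precision and recall), and F-score and box IoU can then be computed directly. The values below are illustrative, not from a real run.

```python
def f_score(precision, recall, beta=1.0):
    """General F-beta score; beta=1 gives the harmonic mean of P and R (F1)."""
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

def box_iou(box1, box2):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    xa, ya = max(box1[0], box2[0]), max(box1[1], box2[1])
    xb, yb = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0, xb - xa) * max(0, yb - ya)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

# Example values, as they might come from metrics.box.mp / metrics.box.mr:
p, r = 0.8, 0.6
print(f"F1 = {f_score(p, r):.4f}")
print(f"IoU = {box_iou((0, 0, 2, 2), (1, 1, 3, 3)):.4f}")
```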
-
How are we able to save sample labels and predictions on the validation set during training? I remember this being easy in YOLOv5, but I have not been able to figure it out with YOLOv8.
-
If I am not mistaken, the logs shown during training also contain box (P, R, mAP@0.5 and mAP@0.5:0.95) and mask (P, R, mAP@0.5 and mAP@0.5:0.95) metrics for the validation set at each epoch. Why, then, am I getting worse metrics when running model.val() with best.pt? From the training and validation curves it is clear the model is overfitting on the segmentation task, but that is a separate issue. Can you please help me out with this?
-
So, does imgsz work differently when training than when predicting? For train: if it's an… Is this right?
-
Hi all, I have a segmentation model trained on custom data with a single class, but recent training runs show a trend toward overfitting. I tried adding more data to the training set, which reduced box_loss and cls_loss on val, but dfl_loss is increasing. Are there any suggestions for tuning the model? Thanks a lot.
-
I have a question about training the segmentation model. My dataset contains objects that occlude each other, such that the top object splits the segmentation mask of the bottom object into two independent parts. As far as I can see, the coordinates of each point are listed sequentially in the label file. If I append the points of the two mask parts one after the other under the same object's coordinates, will that solve the problem?
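Not authoritative, but worth noting: simply concatenating the two point lists into one polygon makes the mask renderer draw a connecting edge between the parts, so the result may not be exactly what you expect; if memory serves, Ultralytics' own COCO converter stitches multi-part segments together at their nearest points before writing one label line. A minimal sketch of writing a single label line from multiple parts (the helper and coordinates are hypothetical):

```python
def seg_label_line(cls_id, parts):
    """Flatten one or more normalized polygons into a single YOLO-seg
    label line for the SAME object instance (hypothetical helper)."""
    coords = [f"{v:.6f}" for poly in parts for point in poly for v in point]
    return " ".join([str(cls_id)] + coords)

part_a = [(0.10, 0.10), (0.20, 0.10), (0.20, 0.20)]  # upper mask fragment
part_b = [(0.30, 0.30), (0.40, 0.30), (0.40, 0.40)]  # lower mask fragment
line = seg_label_line(0, [part_a, part_b])
print(line)  # one class id followed by 12 normalized coordinates
```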
-
Hello there!
-
Hello, I am working on a project for Android devices. The GPU and CPU of the device I have are weak. Will it speed things up if I set imgsz to 320 for training? Or what are your recommendations? What happens if imgsz is 640 for training and 320 for prediction? Or what changes if imgsz is 320 for both training and prediction? Sorry for my English. Note: I converted the model to TFLite. Thanks, you are amazing.
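As a rough rule of thumb (a back-of-the-envelope estimate, not a benchmark): convolution cost scales with pixel count, so halving imgsz cuts compute roughly 4x. Training at 640 and predicting at 320 usually costs accuracy, because the network then sees objects at half the scale it was trained on; training and predicting at the same size is the safer default.

```python
def relative_cost(imgsz, imgsz_ref=640):
    """Rough compute ratio: conv FLOPs scale with pixel count
    (an estimate, not a measured speedup on any specific device)."""
    return (imgsz / imgsz_ref) ** 2

print(relative_cost(320))  # 0.25 -> roughly 4x less compute than imgsz=640
```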
-
I've come to rely on YOLOv8 in my daily work; it's remarkably user-friendly. Thank you to the Ultralytics team for your excellent work on these models! I'm currently tackling a project focused on detecting minor defects on automobile engine parts. As the defects will be smaller objects in a given frame, could you offer guidance on training arguments or techniques that might improve performance for this type of data? I'm also interested in exploring attention mechanisms to enhance model performance, but I'd appreciate help understanding how to implement them. Special appreciation to the Ultralytics team.
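As a starting point only (these are assumptions to experiment with, not official recommendations), the training arguments most often adjusted for small objects are a larger input size plus scale-varying augmentation. All four keys below are real Ultralytics train arguments; the chosen values are hypothetical:

```python
# Hypothetical starting hyperparameters for small-defect detection.
small_object_args = {
    "imgsz": 1280,      # larger input keeps tiny defects above a few pixels
    "mosaic": 1.0,      # mosaic augmentation varies object scale and context
    "scale": 0.5,       # random scale jitter during training
    "copy_paste": 0.1,  # paste instances onto other images (segmentation only)
}
print(sorted(small_object_args))
# These would be passed as model.train(data=..., **small_object_args)
# once the ultralytics package is installed.
```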
-
Running the provided example led me to this Stack Overflow question: https://stackoverflow.com/q/75111196/815507. There are solutions on Stack Overflow; I wonder if you could help and update the guide to provide the best resolution?
-
We need to disable blur augmentation. I have filed an issue; Glenn suggested using blur=0, but that is not a valid argument. #8824
-
How can I train YOLOv8 with my custom dataset?
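The usual workflow (sketched here with hypothetical paths and class names) is: put images and labels in the YOLO directory layout, describe them in a small YAML file, then call model.train(). The training call is commented out because it requires the ultralytics package and a real dataset.

```python
# Hypothetical dataset description; adjust paths and classes to your data.
data_yaml = """\
path: datasets/my_dataset   # dataset root
train: images/train         # training images (labels go in labels/train)
val: images/val             # validation images
names:
  0: cat
  1: dog
"""
print(data_yaml)
# Save the text above as my_dataset.yaml, then (needs ultralytics installed):
# from ultralytics import YOLO
# model = YOLO("yolov8n.pt")          # start from pretrained weights
# model.train(data="my_dataset.yaml", epochs=100, imgsz=640)
```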
-
Hey, I was trying to train a custom object detection model using a pretrained YOLOv8 model.
0% 0/250 [00:00<?, ?it/s]
-
Hi! I'm working on a project where I plan to use YOLOv8 as the backbone for object detection, but I need a more hands-on approach during the training phase. How do I train the model manually: looping through epochs, performing forward propagation, calculating loss functions, backpropagating, and updating weights? At the moment model.train() seems to handle all of this automatically in the background. The end goal is knowledge distillation, but to start I need access to these pieces. I haven't been able to find any examples of YOLOv8 being used in this way; some code and tips would be helpful.
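For the loop structure itself, here is a toy, framework-free skeleton of the steps model.train() performs internally; the 1-parameter linear "model" is purely illustrative. With the real library you would instead iterate dataloader batches, run the YOLO module forward, compute its criterion, call loss.backward(), and step the optimizer; Ultralytics' trainer classes are the natural place to hook in for distillation.

```python
def train_manually(data, epochs=100, lr=0.1):
    """Toy manual training loop: the same skeleton a real trainer follows."""
    w = 0.0                            # model weight(s)
    for epoch in range(epochs):        # epoch loop
        for x, y in data:              # dataloader loop
            pred = w * x               # forward pass
            loss = (pred - y) ** 2     # loss function
            grad = 2 * (pred - y) * x  # backward pass (d loss / d w)
            w -= lr * grad             # optimizer step
    return w

# Fits y = 3x from two samples; converges to w == 3.0.
w = train_manually([(1.0, 3.0), (2.0, 6.0)], epochs=50, lr=0.05)
print(round(w, 3))  # -> 3.0
```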
-
I'm trying to understand the concept of training. I would like to extend the default classes with helmet, gloves, etc.
Thanks in advance.
-
Please tell me how to set imgsz when using the YOLO11 model for training. Can it only be an integer (meaning the final input is square)? After I modify this parameter, will the network's input pipeline apply the corresponding transformations to the data? For example, with imgsz=1280, does that mean my input data will be resized to 1280×1280 and fed into the network for training? Thanks!
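For what it's worth, a sketch of the resize arithmetic as I understand it (treat this as an assumption to verify, not authoritative): an integer imgsz scales the longest image side to imgsz while preserving aspect ratio, then pads (letterboxes) the short side, so imgsz=1280 means the network sees 1280×1280 letterboxed inputs during training rather than a raw stretch to a square.

```python
def letterbox_shape(h, w, imgsz=640):
    """Resized (h, w) after scaling the longest side to imgsz, plus the
    padding needed to reach a full imgsz x imgsz square (sketch only)."""
    r = imgsz / max(h, w)                  # uniform scale factor
    new_h, new_w = round(h * r), round(w * r)
    pad_h, pad_w = imgsz - new_h, imgsz - new_w
    return (new_h, new_w), (pad_h, pad_w)

# A 720x1280 frame at imgsz=1280: content stays 720x1280, 560 rows of padding.
print(letterbox_shape(720, 1280, imgsz=1280))  # -> ((720, 1280), (560, 0))
```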
-
How can I do fine-tuning with existing .onnx weights?
-
When training, how does the model know whether the training data is BGR or RGB?
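To the best of my understanding (worth verifying against the source), Ultralytics loads images with OpenCV in BGR order and flips the channel axis to RGB inside its own pipeline, so the network consistently trains on RGB and user code rarely needs to intervene. The snippet below just illustrates the channel swap itself on plain tuples:

```python
def bgr_to_rgb(pixel_row):
    """Swap channel order for a row of (B, G, R) pixel tuples; the same
    idea as the channel-axis flip a detection pipeline applies internally."""
    return [tuple(reversed(px)) for px in pixel_row]

# A pure-blue and a pure-red pixel in BGR become RGB:
print(bgr_to_rgb([(255, 0, 0), (0, 0, 255)]))  # -> [(0, 0, 255), (255, 0, 0)]
```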
-
I referred to your documentation, which says that running one epoch on the COCO dataset with an A100 GPU took 20 minutes and 36 seconds. With the same COCO dataset, will an RTX 4090 GPU complete one epoch in 20 minutes 36 seconds or less, ideally under 20 minutes? I ask because you said the RTX 4090 is faster than the A100 for most tasks!
-
Hi, I'm working on a research project at Northeastern University. The loss became 0 after the first epoch, with no detections at all.
The weird thing is that I even tried to replicate a portion that was working and improving before, and that training also returns 0 loss after the first epoch. I also tried cloning a fresh YOLO11 repo/directory, and the loss is still 0 right after the first epoch. Please share some insights!
-
Hi,
-
Hello Team,
This case concerns an astronomical dataset; I am training a custom YOLOv11 model for it. The image size is 2k×4k. The situation is that I have very limited images to train on, but each one contains a huge number of objects.
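One common approach for huge frames with many small objects is to train on overlapping tiles rather than downscaling the whole 2k×4k image (SAHI-style slicing), which also multiplies the effective number of training samples. A sketch of the crop grid, with tile size and overlap as hypothetical starting knobs:

```python
def tile_grid(img_h, img_w, tile=1024, overlap=0.2):
    """Top-left corners of overlapping crops covering a large image
    (tile size and overlap are hypothetical starting points)."""
    step = int(tile * (1 - overlap))
    xs = list(range(0, max(img_w - tile, 0) + 1, step))
    ys = list(range(0, max(img_h - tile, 0) + 1, step))
    if xs[-1] != img_w - tile:   # make sure the right edge is covered
        xs.append(img_w - tile)
    if ys[-1] != img_h - tile:   # ...and the bottom edge
        ys.append(img_h - tile)
    return [(x, y) for y in ys for x in xs]

print(len(tile_grid(2048, 4096)))  # 15 overlapping crops for a 2k x 4k frame
```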
-
Hi, I'm trying to remove the Albumentations transforms: Blur(p=0.01, blur_limit=(3, 7)), MedianBlur(p=0.01, blur_limit=(3, 7)), ToGray(p=0.01, num_output_channels=3, method='weighted_average'), CLAHE(p=0.01, clip_limit=(1.0, 4.0), tile_grid_size=(8, 8)). My config has nc: 2 (number of classes) and an augmentation settings section, but it is not possible to remove them.
-
Thanks for your amazing work. Could you please tell me the difference between two dataset formats, YOLOv9 and YOLOv11, which I exported from Roboflow? I exported the dataset in both formats and put the training data of both formats into my_train_data. I am trying to train the yolo11x-seg.pt model for two classes, sofa and table, but I get:
WARNING Model does not support 'augment=True', reverting to single-scale prediction.
and the model also does not train further: it trains for only 6 epochs (I used patience=5). I trained 3 models with different parameters and faced the same problem with all of them, as you can see below.
model = YOLO(r"yolo11x-seg.pt")
1 -> New https://pypi.org/project/ultralytics/8.3.38 available. Update with 'pip install -U ultralytics'
0 -1 1 2784 ultralytics.nn.modules.conv.Conv [3, 96, 3, 2]
Transferred 1077/1077 items from pretrained weights
EarlyStopping: Training stopped early as no improvement observed in last 5 epochs. Best results observed at epoch 1, best model saved as best.pt. 6 epochs completed in 0.608 hours. Validating E:\Image_Inpainting\ImageInpaintAPI\runs\segment\train6122\weights\best.pt...
2 -> New https://pypi.org/project/ultralytics/8.3.38 available. Update with 'pip install -U ultralytics'
0 -1 1 2784 ultralytics.nn.modules.conv.Conv [3, 96, 3, 2]
Transferred 1071/1077 items from pretrained weights
EarlyStopping: Training stopped early as no improvement observed in last 5 epochs. Best results observed at epoch 1, best model saved as best.pt. 6 epochs completed in 3.411 hours. Validating E:\Image_Inpainting\ImageInpaintAPI\runs\segment\train61\weights\best.pt...
3 -> New https://pypi.org/project/ultralytics/8.3.38 available. Update with 'pip install -U ultralytics'
0 -1 1 2784 ultralytics.nn.modules.conv.Conv [3, 96, 3, 2]
Transferred 1077/1077 items from pretrained weights
EarlyStopping: Training stopped early as no improvement observed in last 5 epochs. Best results observed at epoch 1, best model saved as best.pt. 6 epochs completed in 0.594 hours. Validating E:\Image_Inpainting\ImageInpaintAPI\runs\segment\train612\weights\best.pt...
I think this explanation is enough for this problem.
-
Hi, I'm currently exploring the YOLO model. I'm really enjoying how well it performs, but I want to fine-tune it further, and I have a question about the scheduler. If cos_lr is disabled, is the learning rate constant, or is it still scheduled? Also, if I specify lr0 and lrf, what scheduler or optimizer will be used when cos_lr is set to false?
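From reading the trainer source (so treat the exact formulas as a sketch to verify rather than gospel): with cos_lr=False the learning rate is not constant; it decays linearly from lr0 down to lr0*lrf over the run, and cos_lr=True swaps in a cosine curve between the same two endpoints. The optimizer itself (SGD or AdamW, chosen via the optimizer argument) is unaffected by cos_lr.

```python
import math

def linear_lf(epoch, epochs=100, lrf=0.01):
    """cos_lr=False (default): multiplier decays linearly from 1 to lrf."""
    return (1 - epoch / epochs) * (1.0 - lrf) + lrf

def cosine_lf(epoch, epochs=100, lrf=0.01):
    """cos_lr=True: multiplier follows a half-cosine from 1 down to lrf."""
    return ((1 - math.cos(epoch * math.pi / epochs)) / 2) * (lrf - 1) + 1

lr0 = 0.01  # actual LR each epoch = lr0 * multiplier
print(lr0 * linear_lf(0), lr0 * linear_lf(100))  # start and end LR
```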
-
Hi @glenn-jocher, in the YOLOv8 bottleneck we have a conv followed by another conv layer; now I need to add SimAM (Simplified Attention Module), like this
-
Hi, I'm trying to reduce overfitting by increasing the L2 parameter (weight_decay=0.001) while training YOLO-cls nano for binary classification, but the console prints: "parameter groups 39 weight(decay=0.0), 40 weight(decay=0.001), 40 bias(decay=0.0)". Is L2 regularization being correctly applied to all weights? If not, how do I apply it correctly? (Check the output message below): YOLO11n-cls summary: 151 layers, 1,533,378 parameters, 1,533,378 gradients
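That log line is expected behavior rather than a bug: like most detectors, Ultralytics deliberately puts biases and 1-D (normalization) parameters into decay=0.0 groups, since decaying them tends to hurt training, and applies your weight_decay only to the conv/linear weight matrices. A simplified sketch of that grouping logic, with made-up parameter names:

```python
def split_param_groups(named_params):
    """Split (name, ndim) pairs the way detector optimizers typically do:
    weight decay only on >=2-D weights, never on biases or norm params."""
    decay, no_decay = [], []
    for name, ndim in named_params:
        if ndim <= 1 or name.endswith(".bias"):
            no_decay.append(name)   # biases and BN weights: decay=0.0
        else:
            decay.append(name)      # conv/linear weights: decay=weight_decay
    return decay, no_decay

decay, no_decay = split_param_groups([
    ("conv.weight", 4), ("conv.bias", 1), ("bn.weight", 1), ("bn.bias", 1),
])
print(decay, no_decay)  # -> ['conv.weight'] ['conv.bias', 'bn.weight', 'bn.bias']
```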
-
What is the best ratio of small-object size to imgsz? Does increasing imgsz also reduce inference speed? What is mask_ratio used for?
-
How do I include suitable context when making labels? For example, if I want to detect "cat on table", should I include the whole cat and table in the label, or just the cat and half of the table?
-
Step-by-step guide to train YOLOv8 models with Ultralytics YOLO including examples of single-GPU and multi-GPU training
https://docs.ultralytics.com/modes/train/