
Cropping training images along with their respective annotations #13336

Closed
andualemw1 opened this issue Sep 27, 2024 · 6 comments
Labels
question Further information is requested

Comments

@andualemw1

Search before asking

Question

As we know, YOLO supports both square and rectangular images. However, for speed and dataset-size considerations, I want to crop my images from 1280x1280 to 640x640. YOLO annotations/labels are normalized to each image's width and height, so cropping changes the coordinate frame they refer to. How can I make the dataset trainable after cropping without re-annotating the images from scratch?

Thanks in advance!

Additional

No response

andualemw1 added the question (Further information is requested) label on Sep 27, 2024
@UltralyticsAssistant
Member

UltralyticsAssistant commented Sep 27, 2024

👋 Hello @andualemw1, thank you for your interest in YOLOv5 🚀! An Ultralytics engineer will assist you soon.

To get started with cropping images while retaining annotations, you might find our ⭐️ Tutorials helpful. You can explore guides for tasks such as Custom Data Training where managing image sizes and annotations is discussed.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

For custom training questions, provide as much information as possible, including dataset image examples and training logs. Verify you are following our Tips for Best Training Results.

Requirements

Ensure you have Python>=3.8.0 with all requirements.txt installed, including PyTorch>=1.8. To get started, run:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 can be run in verified environments, including free-GPU notebooks (Colab, Kaggle), Google Cloud Deep Learning VM, Amazon Deep Learning AMI, and the Docker image.

Status

[YOLOv5 CI status badge]

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export, and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 🚀

Explore our latest object detection model, YOLOv8 🚀! Designed for speed and accuracy, perfect for a wide range of tasks. Discover more in our YOLOv8 Docs and get started with:

pip install ultralytics

Feel free to provide further details to help us address your question! 🔍

@pderrenger
Member

@andualemw1 to crop your images from 1280x1280 to 640x640 while maintaining correct YOLO annotations, you'll need to adjust the labels to match the new image dimensions. This means converting each normalized bounding box to pixel coordinates, intersecting it with the crop window, discarding boxes that fall outside the window (or are mostly clipped), and re-normalizing the survivors to the 640x640 crop. You can automate this with a short script; image processing libraries like OpenCV or PIL handle the cropping itself.
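As a concrete starting point, here is a minimal sketch of that recalculation using PIL. The function name crop_with_labels, the crop offsets, and the min_vis visibility threshold are illustrative assumptions, not part of YOLOv5 itself:

from PIL import Image  # pip install pillow

def crop_with_labels(img_path, label_path, x0, y0, size=640, min_vis=0.3):
    """Crop a square window at pixel offset (x0, y0) and remap YOLO labels to it."""
    img = Image.open(img_path)
    W, H = img.size
    cropped = img.crop((x0, y0, x0 + size, y0 + size))

    new_labels = []
    with open(label_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 5:
                continue  # skip blank or malformed lines
            cls = parts[0]
            # YOLO labels are normalized (xc, yc, w, h); convert to pixel corners
            xc, yc, w, h = (float(v) for v in parts[1:])
            xc, yc, w, h = xc * W, yc * H, w * W, h * H
            x1, y1 = xc - w / 2, yc - h / 2
            x2, y2 = xc + w / 2, yc + h / 2
            # Intersect the box with the crop window
            ix1, iy1 = max(x1, x0), max(y1, y0)
            ix2, iy2 = min(x2, x0 + size), min(y2, y0 + size)
            if ix2 <= ix1 or iy2 <= iy1:
                continue  # box lies entirely outside the crop
            if (ix2 - ix1) * (iy2 - iy1) < min_vis * w * h:
                continue  # box is mostly cut off; drop it
            # Re-normalize to the crop's coordinate system
            nxc = ((ix1 + ix2) / 2 - x0) / size
            nyc = ((iy1 + iy2) / 2 - y0) / size
            new_labels.append(f"{cls} {nxc:.6f} {nyc:.6f} {(ix2 - ix1) / size:.6f} {(iy2 - iy1) / size:.6f}")

    return cropped, new_labels

Save the returned image and write the returned lines to the crop's .txt label file. Dropping boxes that are mostly clipped (here, less than 30% visible) keeps fragmentary targets out of training.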

@andualemw1
Author

Thank you so much for your support; this has been my quest for quite a while. I will proceed accordingly.

@pderrenger
Member

You're welcome! If you have any more questions as you proceed, feel free to ask.

@andualemw1
Author

Hello, sorry for coming back again. I am always confused about how to set up YOLOv5 benchmarking. Is it correct to use a confidence threshold of 0.25 on the test dataset, both for a custom model and for the standard YOLO model?

python val.py --data <data.yaml> --weights <model.pt> --task test --conf-thres 0.25 --iou-thres 0.5

Thank you in advance!

@pderrenger
Member

Yes, your approach to benchmarking with val.py and a confidence threshold of 0.25 is correct. The --conf-thres 0.25 sets the minimum confidence for detections, which is standard for YOLOv5 evaluation. Ensure you're using the same command and thresholds consistently across both custom and standard models for fair comparisons. You can refer to the YOLOv5 documentation for additional details. Let me know if you have further questions!
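For example, running the identical command for each model keeps the comparison fair (the custom weights filename below is a placeholder):

python val.py --data <data.yaml> --weights custom_model.pt --task test --conf-thres 0.25 --iou-thres 0.5
python val.py --data <data.yaml> --weights yolov5s.pt --task test --conf-thres 0.25 --iou-thres 0.5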
