
Is it possible to show segmentation and object detection at the same time? #10372

Closed
stphtan94117 opened this issue Dec 2, 2022 · 11 comments
Labels
question (Further information is requested) · Stale (scheduled for closing soon)

Comments

@stphtan94117

Search before asking

Question

Some objects should use segmentation, and some objects should use bbox detection.

For example, the scene is a road.
I want to use segmentation to detect potholes, but detect the road markings with object detection (bbox).
Is it possible to show the results of both at the same time?

I want to achieve both segmentation and object detection in one task.
I don't want to train two separate models.

Many thanks.

Additional

No response

@stphtan94117 stphtan94117 added the question label Dec 2, 2022
@github-actions
Contributor

github-actions bot commented Dec 2, 2022

👋 Hello @stphtan94117, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email [email protected].

Requirements

Python>=3.7.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

@stphtan94117 segmentation models are also detection models, but not vice versa. i.e.:

[Screenshot attached, 2022-12-03]

@stphtan94117
Author

stphtan94117 commented Dec 4, 2022

@glenn-jocher
I mean that in the picture, some objects are displayed with segmentation while other objects only show a bbox.
Take your bus picture as an example: the bus object is displayed with a green polygon,
while the three person objects only show bboxes without polygons (no segmentation).

I want to save labeling time, because segmentation labeling takes me a lot of time.
So I want to be lazy and use bboxes for some objects, and segmentation only for the key objects.

If both types can be displayed, what happens when using --save-txt?
Because a bbox is xywh, while a segment is x1 y1 x2 y2 ...

@glenn-jocher
Member

@stphtan94117 sure, you can customize segment/predict.py to suit your needs here:
https://github.com/ultralytics/yolov5/blob/master/segment/predict.py
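
If it helps, here is a minimal sketch (not an official patch) of the kind of change the drawing section of segment/predict.py could take: masks are drawn only for a chosen set of classes, while every detection still gets a bounding box. The helper name draw_mixed and the seg_classes set are placeholders, and the variables it expects (annotator, det, masks, im_gpu, names) are assumed to match what the script already has in scope at that point; they may differ between YOLOv5 versions.

# Hedged sketch, not an official patch: draw masks only for selected classes,
# and a bounding box for every detection. seg_classes is a placeholder set of
# class indices that were labeled with polygons.
from utils.plots import colors  # YOLOv5 repo utility

def draw_mixed(annotator, det, masks, im_gpu, names, seg_classes=(0,)):
    if not len(det):
        return
    cls_col = det[:, 5].int()
    seg_idx = [i for i, c in enumerate(cls_col.tolist()) if c in seg_classes]

    # masks only for the classes that have polygon labels
    if seg_idx:
        annotator.masks(
            masks[seg_idx],
            colors=[colors(int(cls_col[i]), True) for i in seg_idx],
            im_gpu=im_gpu,
        )

    # a bounding box (and label) for every detection, segmented or not
    for *xyxy, conf, cls in reversed(det[:, :6]):
        c = int(cls)
        annotator.box_label(xyxy, f"{names[c]} {conf:.2f}", color=colors(c, True))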

@stphtan94117
Author

@glenn-jocher
Can you give me a hint on how to add detection bboxes in predict.py?

@github-actions
Contributor

github-actions bot commented Jan 5, 2023

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.

Access additional YOLOv5 🚀 resources:

Access additional Ultralytics ⚡ resources:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

@github-actions github-actions bot added the Stale label Jan 5, 2023
@github-actions github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Jan 15, 2023
@glenn-jocher
Member

@stphtan94117 To display both segmentation and bounding boxes in the same image, you can modify the predict.py file by adding logic to draw bounding boxes for the detected objects in addition to the existing segmentation visualization logic. For saving labels, by default --save-txt saves bounding boxes in YOLO format (x_center, y_center, width, height) and segments as normalized polygon points (x1 y1 x2 y2 ...). You can customize the output format if needed.
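
For the --save-txt side, a rough sketch of how mixed labels could be written is below: boxes as normalized cls xc yc w h, segments as normalized cls x1 y1 x2 y2 ... The helper name, the seg_classes set, and the assumption that segments holds one polygon per detection are placeholders, not the script's current behavior.

# Hedged sketch: write mixed labels, one line per object. Boxes use normalized
# "cls xc yc w h"; segments use normalized "cls x1 y1 x2 y2 ...". xyxy2xywh is a
# YOLOv5 utility; everything else here is a placeholder assumption.
import numpy as np
import torch
from utils.general import xyxy2xywh

def save_mixed_txt(txt_path, det, segments, img_hw, seg_classes=(0,)):
    h, w = img_hw
    gn = torch.tensor([w, h, w, h])  # normalization gain for boxes
    with open(txt_path, "a") as f:
        for i, (*xyxy, conf, cls) in enumerate(det[:, :6]):
            c = int(cls)
            if c in seg_classes and segments is not None:
                # polygon points, normalized to 0-1: x1 y1 x2 y2 ...
                seg = (np.asarray(segments[i]) / np.array([w, h])).reshape(-1)
                line = (c, *seg.tolist())
            else:
                # box, normalized xywh
                xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1)
                line = (c, *xywh.tolist())
            f.write(("%g " * len(line)).rstrip() % line + "\n")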

@lililibin2022

Hi Glenn Jocher, it's an honour to connect with you!
I have a question about mold detection in corn. In scenarios where I aim to segment the mold-affected and unaffected parts of the corn, is it also feasible to simultaneously detect the entire corn (including both moldy and non-moldy regions)?

@pderrenger
Member

Hello! It's great to see your interest in using YOLOv5 for mold detection in corn. Yes, you can achieve simultaneous detection and segmentation by using a segmentation model like YOLOv8, which supports both object detection and segmentation tasks. This approach would allow you to detect the entire corn and segment the mold-affected areas within it. For more detailed guidance, you might want to explore the YOLOv5 or YOLOv8 documentation and experiment with pre-trained models or custom datasets. If you have further questions, feel free to ask!
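
For reference, a minimal sketch of what that looks like with the ultralytics package (the model file and image path below are placeholders): a single segmentation model returns detection boxes and instance masks together in one pass.

# Hedged sketch assuming the ultralytics package is installed:
# a segmentation checkpoint returns boxes and masks together.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")   # pretrained segmentation weights
results = model("corn.jpg")      # placeholder image path

r = results[0]
print(r.boxes.xyxy)              # detection boxes for every object
print(r.boxes.cls)               # class index per box
if r.masks is not None:
    print(len(r.masks.xy))       # one polygon per segmented instance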

@lililibin2022

Hi! Thank you so much for your reply! Yes, I am currently building the dataset and testing this. I'm just wondering: the mold levels in corn vary significantly, and a healthy kernel looks very different from one completely affected by mold. How might the model differentiate between them?

@pderrenger
Member

Hi! You're welcome! The model can differentiate between healthy and mold-affected kernels if your dataset contains diverse, well-labeled examples for each class (e.g., healthy, partially affected, completely affected). To improve performance, ensure balanced class representation and use data augmentation during training to account for variability. Let me know if you need further assistance!
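
As a rough illustration only, training on such a dataset might look like the sketch below; the dataset YAML name, the three class names it implies, and the augmentation values are placeholders showing where class definitions and augmentation are configured, not tuned recommendations.

# Hedged sketch: training a segmentation model on a hypothetical corn-mold dataset
# whose YAML defines e.g. healthy / partially_affected / completely_affected classes.
# All values are placeholders, not recommendations.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")
model.train(
    data="corn_mold_seg.yaml",   # hypothetical dataset config listing the classes
    epochs=100,
    imgsz=640,
    hsv_h=0.015, hsv_s=0.7, hsv_v=0.4,  # color jitter for mold color variability
    fliplr=0.5,                          # horizontal flips
    degrees=10.0,                        # mild rotations
)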
