models/yolov8/ #10285
73 comments · 160 replies
-
I can't find the score of yolov8x6.pt...
-
I used YOLOv8 to successfully detect objects in the nuScenes camera image dataset for autonomous driving. However, I am finding it difficult to extract or retrieve the bounding boxes, classes/labels and confidence scores from the processed images. I will need this information (bounding box coordinates, confidence scores, labels). I tried the [xmin, ymin, xmax, ymax] format and logic that relies on the 'xyxy' attribute, but to no avail. @pderrenger I really need your help as I need to move on to the next task. Thanks
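For reference, a minimal sketch of reading boxes, classes and confidences from the Results object returned by a standard predict call; the "yolov8n.pt" weights and "frame.jpg" path below are placeholders for your own model and nuScenes image:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")      # placeholder: use your trained weights
results = model("frame.jpg")    # placeholder: one nuScenes camera image

for r in results:
    xyxy = r.boxes.xyxy.cpu().numpy()            # (N, 4) [xmin, ymin, xmax, ymax]
    conf = r.boxes.conf.cpu().numpy()            # (N,) confidence scores
    cls = r.boxes.cls.cpu().numpy().astype(int)  # (N,) class indices
    for (x1, y1, x2, y2), c, k in zip(xyxy, conf, cls):
        print(f"{model.names[k]} {c:.2f} [{x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}]")
```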
-
Hi!
-
Does YOLOv8 have any inherent object tracking across images?
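For context, YOLOv8 does ship with built-in multi-object tracking through model.track; a minimal sketch, with "video.mp4" standing in for any video or image sequence:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# persist=True keeps track IDs consistent across successive frames
for r in model.track(source="video.mp4", persist=True, stream=True):
    if r.boxes.id is not None:
        print(r.boxes.id.int().tolist())  # per-object track IDs for this frame
```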
-
Having an issue with class loading and labels.
-
from ultralytics import YOLO
model = YOLO("best.pt")
Hi, I have to export the model to TFLite format, but the error I'm getting is given below.
TensorFlow SavedModel: export failure ❌ 19.1s: generic_type: cannot initialize type "StatusCode": an object with that name is already defined
ImportError Traceback (most recent call last)
11 frames
ImportError: generic_type: cannot initialize type "StatusCode": an object with that name is already defined.
Please give suggestions on this.
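For reference, the TFLite export call itself is short; a minimal sketch assuming "best.pt" is the trained checkpoint (the ImportError above typically points at the surrounding TensorFlow installation rather than this call, but that is an assumption, not something confirmed in this thread):

```python
from ultralytics import YOLO

model = YOLO("best.pt")
# Exports the model to TFLite (goes through a TensorFlow SavedModel first)
model.export(format="tflite")
```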
-
Hi!
-
Hi! I am currently involved in developing a vehicle detection system, with a particular focus on determining whether vehicles are parked or in motion through pixel speed estimation. I have been experimenting with the speed_estimator function, in terms of both kilometers per hour and pixels per frame, but so far I have not achieved satisfactory results. Could any of you suggest advanced methodologies or configuration adjustments that could improve the accuracy of the detection? Any recommendations on libraries, algorithms, or alternative approaches would also be greatly appreciated. I thank you in advance for any guidance or advice you can provide. Best regards.
-
Thank you very much!
def plot_box_and_track(self, track_id, box, cls, track):
    """Plots track and bounding box."""
    speed = self.dist_data.get(track_id, 0)
    status_label = "Stopped" if speed < 1 else f"Moving at {speed:.2f} px/frame"  # confidence range
    bbox_color = (0, 255, 0) if speed < 1 else (0, 0, 255)
    # Draw bounding box
    cv2.rectangle(self.im0, (int(box[0]), int(box[1])), (int(box[2]), int(box[3])), bbox_color, 2)
    cv2.putText(self.im0, status_label, (int(box[0]), int(box[1]) - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.9, bbox_color, 2)
    cv2.polylines(self.im0, [self.trk_pts], isClosed=False, color=bbox_color, thickness=1)
    cv2.circle(self.im0, (int(track[-1][0]), int(track[-1][1])), 5, bbox_color, -1)
    print(f"Plotted bounding box at ({box[0]}, {box[1]}, {box[2]}, {box[3]}) with label '{status_label}'")

def calculate_speed(self, trk_id, track):
    """Calculates the speed of an object in pixels per frame."""
    if len(track) < 2:
        self.dist_data[trk_id] = 0
        return
    previous_point = track[-2]
    current_point = track[-1]
    distance = np.sqrt((current_point[0] - previous_point[0]) ** 2 + (current_point[1] - previous_point[1]) ** 2)
    self.dist_data[trk_id] = distance
This was the modification I made and it works quite well. Now I'm having problems when vehicles start moving away from the camera: the bounding boxes get smaller and the logic starts deciding that the vehicle is not moving. I'm working on it.
Thank you very much for taking the time to respond, I am really fascinated with everything you are doing, incredible. I like it a lot, thank you.
On Mon, Jun 3, 2024 at 4:22 PM, Glenn Jocher ***@***.***) wrote:
… Hello,
Thank you for reaching out with your query on vehicle detection and speed estimation using the speed_estimator function. To enhance the accuracy of your system, you might consider a few advanced methodologies and adjustments:
1. *Model Fine-tuning:* If you haven't already, fine-tuning your YOLOv8 model on a dataset specifically annotated with vehicle speeds and states (parked or in motion) could significantly improve detection accuracy.
2. *Optical Flow Techniques:* For estimating pixel speed, optical flow methods can be very effective. Libraries like OpenCV offer functions like calcOpticalFlowFarneback, which might provide more precise speed estimations.
3. *Data Augmentation:* Incorporating variations in vehicle speeds and lighting conditions during training can help the model generalize better over different real-world scenarios.
4. *Temporal Models:* Consider using LSTM networks or 3D ConvNets that can leverage temporal information across frames to better estimate speeds and detect motion.
5. *Ensemble Methods:* Combining predictions from multiple models or different configurations of the same model can sometimes yield better results.
For libraries, aside from OpenCV, you might look into PyTorch and TensorFlow for implementing and training any deep learning models. Both frameworks support the advanced techniques mentioned above and are compatible with YOLOv8.
I hope these suggestions help you enhance your vehicle detection system. If you have further questions or need more detailed assistance, feel free to ask.
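For what it's worth, a minimal sketch of the calcOpticalFlowFarneback idea mentioned above, with "video.mp4" as a placeholder path; measuring per-vehicle speed would still require sampling the flow inside each tracked bounding box:

```python
import cv2

cap = cv2.VideoCapture("video.mp4")  # placeholder path
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow: one (dx, dy) vector per pixel
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    print(f"mean pixel speed: {magnitude.mean():.2f} px/frame")
    prev_gray = gray

cap.release()
```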
-
Thank you so much! Honestly I have a million questions; if you tell me I can ask, I could give you an enormous list of questions, just kidding!
I am truly pleasantly surprised by the community, the attention, the documentation, all very interesting, and honestly very grateful. You have a budding enthusiast for everything in the computer vision world, thanks to you!
Right now, as I mentioned, I am working mainly on collision detection (I still have things to improve, but it is going quite well), on vehicles exceeding the speed limit, and on crosswalk detection, which will help me detect infractions. I would also like to start working on vehicles running a red light.
Well, as I was saying, many questions, but those are the main areas I am working on and want to start working on! Any help, tips, any function that could be useful, or anything I can use as guidance, I will be very attentive and more than grateful.
I am also open to any collaboration, or anything I can contribute; given my experience it won't be much, but the enthusiasm is enormous!
Regards!
Roberto Schaefer
On Tue, Jun 4, 2024 at 9:58 AM, Glenn Jocher ***@***.***) wrote:
Hi! Thanks for sharing your modifications and for your kind words. 😊
Regarding the issue you mention with vehicles moving away from the camera, one possible solution could be to adjust the scale of the bounding boxes based on estimated depth or perspective. This could help keep the bounding box size consistent as the vehicles move.
Another option would be to implement a more robust tracking filter that can adapt to rapid changes in the size and position of objects, such as a Kalman filter, which is common in tracking applications.
I hope these suggestions are useful. Keep experimenting and don't hesitate to ask if you need more help! 🚀
—
Glenn Jocher
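As a rough illustration of the Kalman filter suggestion above, a minimal constant-velocity filter for one tracked box centre using OpenCV (state [x, y, vx, vy]; the smooth_center helper name is just for this sketch):

```python
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)  # 4 state variables, 2 measured (x, y)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2

def smooth_center(cx, cy):
    """Feed one detection centre, return the Kalman-smoothed position."""
    kf.predict()
    est = kf.correct(np.array([[cx], [cy]], np.float32))
    return float(est[0, 0]), float(est[1, 0])
```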
-
Very interesting, what you say!
If you have anything you can share regarding red-light detection, that would be great, and I have been studying Kalman filters to improve detection accuracy! Any specific documentation would be very useful to me! I'm looking to keep learning.
On Wed, Jun 5, 2024 at 12:49 AM, Glenn Jocher (***@***.***) wrote:
… @roscha10 <https://github.com/roscha10> Hi Roberto!
Thank you very much for your kind words and for sharing your enthusiasm for computer vision. It's great to hear about your projects on collision detection and other traffic applications. 🚗💡
For your current and future projects, I would recommend exploring YOLOv8's tracking and detection capabilities, which can be very useful for detecting infractions such as running a red light. In addition, using filters such as Kalman, as mentioned by Glenn, can improve accuracy when detecting moving objects.
If you have specific questions or need advice on particular functions, don't hesitate to ask. The community is here to help you. Also, any contribution or idea you want to share is welcome; enthusiasm is as important as experience.
Best regards and good luck with your projects!
-
Hello, I use the yolov8n, yolov8s and yolov8m models to detect people in thermal images, but when I train all three models, yolov8n gets Precision=0.8645568 while yolov8s gets only 0.8404626 and yolov8m only 0.7639084.
-
Hello! I'm using YOLOv8 and I just can't use it from the CLI. I have installed ultralytics as below,
but every time I want to use the yolo command, it appears like this: 'yolo' is not recognized as an internal or external command, operable program or batch file
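If the yolo entry point is not on PATH (common on Windows when the Python Scripts directory is not on PATH), one workaround is to drive the same functionality from the Python interpreter that ultralytics was installed into; a minimal sketch with placeholder model and source values:

```python
# Roughly equivalent to: yolo predict model=yolov8n.pt source="bus.jpg"
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model.predict(source="bus.jpg")  # "bus.jpg" is a placeholder
```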
-
Hi everyone, I am using the yolov8l-seg.pt model to segment my breast cancer MRI 2D images. There are only 2 classes (0 for no cancer, 1 for cancer). The data.yaml file looks like this
The images are 256x256 RGB and have not been normalized.
Thanks
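For reference, a two-class segmentation data.yaml generally has the shape sketched below; the paths and class names here are illustrative placeholders, not the poster's actual file:

```yaml
path: /path/to/dataset   # dataset root (placeholder)
train: images/train
val: images/val
names:
  0: no_cancer
  1: cancer
```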
-
Hi!
-
Hi YOLO Team, I've trained a YOLO model on a custom dataset of 12k images, achieving around 60% accuracy. I recently obtained another custom dataset of 4k images that contains the same classes as the original dataset. I'm considering the best approach to improve the model's performance with this new data. Would you recommend fine-tuning the existing model on the new dataset? Thanks,
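One common option is simply to continue training from the existing checkpoint on the new data; a minimal sketch with placeholder file names and epoch count:

```python
from ultralytics import YOLO

model = YOLO("best.pt")  # checkpoint from the 12k-image run (placeholder name)
model.train(data="new_data.yaml", epochs=50, imgsz=640)  # fine-tune on the 4k set
```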
-
There are parameters in YOLO which you can change to achieve that. Set freeze=0 (this will leave all layers unfrozen), and if you are also interested in not initializing your model with pretrained weights, set pretrained=False. I hope this helps.
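A minimal sketch of the arguments mentioned above, assuming a recent ultralytics release where freeze and pretrained are accepted by train() (the data file name is a placeholder):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.yaml")  # build the architecture from its YAML, no checkpoint
model.train(data="data.yaml", epochs=100, pretrained=False, freeze=0)
```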
-
Here it is
-
You will be able to train from scratch with the things I have mentioned. Setting freeze=0 and pretrained=False ensures that you train from scratch. This means the model will not start from pretrained weights, so you get to train it from scratch. I hope this helps.
-
How is it possible that, by lowering the IoU threshold, the model's Recall can increase?
With this output, lowering IoU from 0.6 to 0.2,
I observe an increase in Recall. How is that possible?
-
Hi, I'm using YOLOv8n to train on defects in FDM 3D printing (for educational purposes). I wonder how the training process works (or whether I can check the training script from Ultralytics). The thing is that I want to compare performance over a limited number of epochs with and without augmentations. I don't clearly understand the use of Albumentations at the start of the training task. Does this augmentation apply to all images during training? If it does, how do I turn it off completely to compare performance?
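For comparison runs, most train-time augmentations are governed by training hyperparameters, so one approach is to zero them out; a minimal sketch with placeholder data and epoch values. The Albumentations transforms printed at startup are applied automatically when the albumentations package is installed, so removing that package is one way to drop them as well (an assumption worth checking against your ultralytics version):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.train(
    data="data.yaml", epochs=50, imgsz=640,
    hsv_h=0.0, hsv_s=0.0, hsv_v=0.0,                                # colour jitter off
    degrees=0.0, translate=0.0, scale=0.0, shear=0.0, perspective=0.0,
    flipud=0.0, fliplr=0.0,                                         # flips off
    mosaic=0.0, mixup=0.0, copy_paste=0.0,                          # mixing augmentations off
)
```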
-
Dear Ultralytics team, I am writing my thesis and there is a part about pose estimation models in which I want to introduce the YOLOv8-Pose family. I wonder if you still have the image that illustrates the YOLOv8-Pose family, like this image: https://pyimagesearch.com/wp-content/uploads/2023/05/yolov8-model-comparison.png
-
Hi, I wanted to know if I can perform object detection and segmentation (of a specific class) and make inferences in the same video.
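One way to do this is to run a detection model and a segmentation model over the same frames and overlay both sets of results; a rough sketch with placeholder weights, video path and class index:

```python
import cv2
from ultralytics import YOLO

det = YOLO("yolov8n.pt")      # detection weights (placeholder)
seg = YOLO("yolov8n-seg.pt")  # segmentation weights (placeholder)

cap = cv2.VideoCapture("video.mp4")  # placeholder video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    det_res = det(frame)[0]               # detect everything
    seg_res = seg(frame, classes=[0])[0]  # segment only class 0
    combined = seg_res.plot(img=det_res.plot())  # draw both on one frame
    cv2.imshow("detection + segmentation", combined)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```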
-
Hi, if I train a model, it can recognize a variety of targets, just like the trained model you provide, which can recognize 80 classes. Now I only want to recognize a few targets in a photo instead of all of them, so how do I write the code?
-
If the model performs well and you are happy with it, you can filter by class:
or by multiple classes you want:
or like this:
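For illustration, the classes argument restricts predictions to the given class indices (0 = person, 2 = car in the COCO-pretrained weights); the image path is a placeholder:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model.predict("photo.jpg", classes=[2])     # a single class
results = model.predict("photo.jpg", classes=[0, 2])  # several classes at once
```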
-
How does YOLO object tracking identify the same object across different frames?
-
Hi, I can see the GFLOPs of the YOLOv8 series and the YOLO11 series. Are those the GFLOPs during training? If so, what are the GFLOPs of these models during inference?
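For reference, the reported figure can be reproduced with model.info(), which prints the model summary including GFLOPs (computed for a single forward pass, to the best of my knowledge); a minimal sketch:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.info()  # prints layer count, parameter count and GFLOPs
```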
-
models/yolov8/
Explore the thrilling features of YOLOv8, the latest version of our real-time object detector! Learn how advanced architectures, pre-trained models and optimal balance between accuracy & speed make YOLOv8 the perfect choice for your object detection tasks.
https://docs.ultralytics.com/models/yolov8/