I am working on a project using YOLO for violence detection. I have trained the model using violence data with skeleton points overlaid using YOLO-pose, as shown in the attached picture. My expectation was that the model would detect both the violence and the skeleton points when testing on new videos.
However, after training the model on data that includes the skeleton points, the test video showed neither the skeleton points nor the bounding boxes for the violence. Why is this happening?
I also thought that using data with skeleton points would improve the accuracy of violence detection. Is this correct?
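For context, my understanding is that skeleton overlays like the ones in my training images are normally produced as a post-processing step from a pose model's predicted keypoints, rather than being something the detector learns to reproduce as pixels. A minimal sketch of that idea (the keypoint indices and function names below are illustrative, not from any specific library):

```python
import numpy as np

# Illustrative subset of COCO-style keypoint connections (arms + shoulders).
# Real pose models use a longer list of such pairs.
SKELETON_PAIRS = [(5, 7), (7, 9), (6, 8), (8, 10), (5, 6)]

def skeleton_segments(keypoints: np.ndarray) -> list:
    """Turn an (N, 2) array of predicted keypoints into the line segments
    that an overlay routine (e.g. OpenCV's line()) would draw on a frame."""
    return [(tuple(keypoints[a]), tuple(keypoints[b])) for a, b in SKELETON_PAIRS]

# Dummy keypoints standing in for one person's pose-model output.
kpts = np.zeros((11, 2), dtype=float)
kpts[5], kpts[6] = [10, 20], [30, 20]   # shoulders
kpts[7], kpts[8] = [5, 40], [35, 40]    # elbows
kpts[9], kpts[10] = [0, 60], [40, 60]   # wrists

segments = skeleton_segments(kpts)
print(len(segments))  # 5 -- one segment per skeleton pair
```

If this is right, the skeletons only appear at test time when a pose model is actually run on the new video and its keypoints are drawn, not because the detector was trained on images with skeletons baked in.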