Replies: 3 comments
-
I'm not exactly sure why a separate objectness loss is used; maybe you will find some insights in the very first YOLO paper.
-
You can do this both ways. Handling them separately reduces class imbalance on datasets with many classes, such as COCO.
-
I have the same question. Also, in the YOLOv3 paper the objectness label is 1 (an object is assigned to this anchor) or 0 (no object is assigned to this anchor). In the code, however, the label is the IoU between the anchor's box and the ground-truth bounding box, rather than simply 0 or 1. I am not sure which is better.
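To make the two labeling schemes concrete, here is a minimal sketch in plain Python (the box format `(x1, y1, x2, y2)` and the helper names are my own, not from the YOLO code):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Paper-style hard target: 1 if an object is assigned to this anchor, else 0.
def hard_obj_target(matched):
    return 1.0 if matched else 0.0

# Code-style soft target: the IoU between the matched box and the ground truth.
def soft_obj_target(pred_box, gt_box):
    return iou(pred_box, gt_box)
```

With the soft target, a well-localized prediction gets an objectness label near 1, while a poorly localized one is pushed toward a lower value, which is the usual motivation for using IoU instead of a hard 0/1 label.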
-
I started investigating the YOLO loss recently. The reason for having the objectness loss is not very clear to me. I guess it is inherited from region-proposal methods. In YOLO, however, it seems that the functionality of the objectness loss could be covered entirely by the class loss and the box loss.
I could only find one conceptual discussion of the objectness loss (https://towardsdatascience.com/yolo-v3-explained-ff5b850390f). Does anyone know of any literature or experiments on whether the objectness loss can be avoided?
Given the importance of the objectness loss, another question arises: if we change the objectness weight (self.hyp['obj']) during training, should we still simply multiply the objectness score by the class score at detection time? If so, what happens if we set self.hyp['obj'] = 0? The objectness score would then receive no supervision during training, so simply multiplying the objectness and class scores at detection would clearly be wrong ...
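To illustrate that last point, a minimal sketch in plain Python of the usual detection-time combination (the logit values below are made up for illustration; the real code works on tensors):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def detection_confidences(obj_logit, cls_logits):
    """Final per-class confidence = sigmoid(obj_logit) * sigmoid(cls_logit)."""
    obj = sigmoid(obj_logit)
    return [obj * sigmoid(c) for c in cls_logits]

# With a trained objectness head, a background cell has a very negative
# obj_logit, which suppresses all class confidences for that cell:
background = detection_confidences(-6.0, [3.0, -2.0])

# If hyp['obj'] = 0, the objectness head receives no gradient during
# training, so its output is arbitrary; multiplying the class scores by
# it would scale them by noise, as noted above.
```

So with self.hyp['obj'] = 0 one would presumably have to drop the objectness factor at inference and rank detections by the class score alone.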