Saving mota output #180

Closed · wants to merge 72 commits into main from nikkna/eval_track_dev

Changes from 45 commits

Commits (72)
bb148b6
debugging
nikk-nikaznan Apr 11, 2024
2ed0673
cleaned up
nikk-nikaznan Apr 11, 2024
1592236
added plot raw
nikk-nikaznan Apr 16, 2024
bdf477f
some changes to the plot
nikk-nikaznan Apr 18, 2024
fb87dec
moved the plot to visualisation
nikk-nikaznan Jun 4, 2024
409ef4a
fixed the test
nikk-nikaznan Jun 4, 2024
5e910b4
Merge branch 'main' into nikkna/eval_track_dev
nikk-nikaznan Jun 4, 2024
51c7459
removed some commented line
nikk-nikaznan Jun 4, 2024
4ec7825
removed some commented line
nikk-nikaznan Jun 4, 2024
2e25910
add one test
nikk-nikaznan Jun 12, 2024
0d36020
cleaned up
nikk-nikaznan Jun 12, 2024
29da996
adding some test for inference
nikk-nikaznan Jun 14, 2024
d6291d1
cleaned up for the test
nikk-nikaznan Jun 14, 2024
c8b033f
testing 3.9 locally
nikk-nikaznan Jun 14, 2024
f54fdf0
testing
nikk-nikaznan Jun 14, 2024
1a140cf
testing
nikk-nikaznan Jun 14, 2024
444c915
remove matplotlib in sort
nikk-nikaznan Jun 14, 2024
90d7376
remove matplotlib in sort
nikk-nikaznan Jun 14, 2024
a90f9c4
cleaned up sort
nikk-nikaznan Jun 14, 2024
44d8062
add load trained model test
nikk-nikaznan Jun 14, 2024
95b06de
adding some more test
nikk-nikaznan Jun 14, 2024
5595135
test to write to csv
nikk-nikaznan Jun 14, 2024
829f7a2
Merge branch 'main' into nikkna/eval_track_dev
nikk-nikaznan Jun 21, 2024
c3dce1f
Merge branch 'main' into nikkna/eval_track_dev
nikk-nikaznan Jun 21, 2024
10f9512
Merge branch 'main' into nikkna/eval_track_dev
nikk-nikaznan Jun 28, 2024
dec2a03
rebase
nikk-nikaznan Jun 28, 2024
64d583c
Merge branch 'nikkna/eval_track_dev' of github.com:SainsburyWellcomeC…
nikk-nikaznan Jun 28, 2024
0aa5040
need to fix some tests
nikk-nikaznan Jun 28, 2024
7d80256
cleaned up test
nikk-nikaznan Jul 1, 2024
ca779ad
fixed all test_tracking_evaluation
nikk-nikaznan Jul 1, 2024
1bf8735
some test
nikk-nikaznan Jul 2, 2024
a960fa0
Merge branch 'main' into nikkna/eval_track_dev
nikk-nikaznan Jul 4, 2024
aaf0c48
cleaned up
nikk-nikaznan Jul 4, 2024
a1cf6d3
Merge branch 'main' into nikkna/eval_track_dev
nikk-nikaznan Jul 4, 2024
95d3d47
fixed test
nikk-nikaznan Jul 4, 2024
e74ff93
Merge branch 'main' into nikkna/eval_track_dev
nikk-nikaznan Jul 8, 2024
92cddc9
Merge branch 'main' of github.com:SainsburyWellcomeCentre/crabs-explo…
nikk-nikaznan Jul 9, 2024
167f79b
cleaned up
nikk-nikaznan Jul 9, 2024
3dad25e
fixed some conflict
nikk-nikaznan Jul 10, 2024
11ca37f
fixed test
nikk-nikaznan Jul 10, 2024
305a599
adding some line so can run plot from terminal
nikk-nikaznan Jul 11, 2024
028de49
Merge branch 'main' into nikkna/eval_track_dev
nikk-nikaznan Jul 12, 2024
1e3ef67
adding total gt check
nikk-nikaznan Jul 12, 2024
a8f3942
Merge branch 'nikkna/eval_track_dev' of github.com:SainsburyWellcomeC…
nikk-nikaznan Jul 12, 2024
68a8c2b
edit example usage
nikk-nikaznan Jul 12, 2024
d85f1ce
Merge branch 'main' into nikkna/eval_track_dev
sfmig Nov 5, 2024
9b4954f
Apply suggestions from code review
sfmig Nov 5, 2024
857da57
Make precommits happy
sfmig Nov 5, 2024
8564861
Add reference to other guides
sfmig Nov 5, 2024
57fc90e
Set output directory for evaluated frames in constructor of evaluatio…
sfmig Nov 5, 2024
a08a44b
Simplify io and tracking utils
sfmig Nov 5, 2024
c40a6b8
Add trained model data, input video data, and output directory to con…
sfmig Nov 5, 2024
8c61ba5
Fix tracking.io tests
sfmig Nov 5, 2024
694d900
Simplify tracking utils test
sfmig Nov 5, 2024
e56d00c
Remove saving slurm logs as artifacts (separate PR)
sfmig Nov 7, 2024
460156b
Return dict from core detection and tracking
sfmig Nov 7, 2024
fecb27f
Return dict from core detection and tracking
sfmig Nov 7, 2024
c118d29
Separate video frame extraction loop
sfmig Nov 7, 2024
b53ba3d
Factor out prep of detector and tracker
sfmig Nov 7, 2024
2fc14e1
Fix writing to csv file
sfmig Nov 7, 2024
d3f6c4c
Adapt evaluation to bounding boxes dict
sfmig Nov 7, 2024
29b6feb
Fix test_write_tracked_detections_to_csv
sfmig Nov 7, 2024
6d39074
Fix constructor test
sfmig Nov 7, 2024
a570f69
Fix evaluate tests
sfmig Nov 7, 2024
41b3a7b
Handle video creation and release in each separate loop
sfmig Nov 7, 2024
30fa38c
Fix save video for bboxes dict. Make video and frame saving more atomic.
sfmig Nov 7, 2024
c3ff6af
Fix MOTA to recover old value for sample clip
sfmig Nov 7, 2024
7ec26ab
Clarify index
sfmig Nov 7, 2024
29e7399
Rename mota per frame function
sfmig Nov 7, 2024
5890772
Vectorise format_bbox_predictions_for_sort
sfmig Nov 7, 2024
bf59540
Remove batch dimension from predictions when returning
sfmig Nov 7, 2024
ff376d7
Merge branch 'main' into nikkna/eval_track_dev
sfmig Nov 14, 2024
51 changes: 47 additions & 4 deletions crabs/tracker/evaluate_tracker.py
@@ -1,10 +1,14 @@
import csv
import logging
from pathlib import Path
from typing import Any, Dict, Optional, Tuple

import numpy as np

from crabs.tracker.utils.tracking import extract_bounding_box_info
from crabs.tracker.utils.tracking import (
extract_bounding_box_info,
save_tracking_mota_metrics,
)


class TrackerEvaluate:
@@ -13,6 +17,7 @@ def __init__(
gt_dir: str,
predicted_boxes_id: list[np.ndarray],
iou_threshold: float,
tracking_output_dir: Path,
):
"""
Initialize the TrackerEvaluate class with ground truth directory, tracked list, and IoU threshold.
@@ -32,6 +37,7 @@
self.gt_dir = gt_dir
self.predicted_boxes_id = predicted_boxes_id
self.iou_threshold = iou_threshold
self.tracking_output_dir = tracking_output_dir

def get_predicted_data(self) -> Dict[int, Dict[str, Any]]:
"""
@@ -226,7 +232,7 @@ def evaluate_mota(
pred_data: Dict[str, np.ndarray],
iou_threshold: float,
gt_to_tracked_id_previous_frame: Optional[Dict[int, int]],
) -> Tuple[float, Dict[int, int]]:
) -> Tuple[float, int, int, int, int, int, Dict[int, int]]:
"""
Evaluate MOTA (Multiple Object Tracking Accuracy).

@@ -254,6 +260,7 @@
"""
total_gt = len(gt_data["bbox"])
false_positive = 0
true_positive = 0
indices_of_matched_gt_boxes = set()
gt_to_tracked_id_current_frame = {}

@@ -278,6 +285,7 @@
index_gt_not_match = j

if index_gt_best_match is not None:
true_positive += 1
# Successfully found a matching ground truth box for the tracked box.
indices_of_matched_gt_boxes.add(index_gt_best_match)
# Map ground truth ID to tracked ID
@@ -299,7 +307,15 @@
mota = (
1 - (missed_detections + false_positive + num_switches) / total_gt
)
return mota, gt_to_tracked_id_current_frame
return (
mota,

Collaborator:

Returning a long tuple is sometimes considered a code smell.

Maybe we can pass mota and its components as a dict to reduce this?

true_positive,
missed_detections,
false_positive,
num_switches,
total_gt,
gt_to_tracked_id_current_frame,
)
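
Following the reviewer's suggestion above, a minimal sketch of bundling MOTA and its components into a dict (a hypothetical helper, not part of this PR; keys are chosen to mirror the results accumulator in evaluate_tracking):

from typing import Any, Dict

def build_mota_dict(
    mota: float,
    true_positive: int,
    missed_detections: int,
    false_positive: int,
    num_switches: int,
    total_gt: int,
) -> Dict[str, Any]:
    # Keys match the per-frame "results" dict so the caller can append
    # each value under its key instead of unpacking a long tuple.
    return {
        "Total Ground Truth": total_gt,
        "True Positives": true_positive,
        "Missed Detections": missed_detections,
        "False Positives": false_positive,
        "Number of Switches": num_switches,
        "Mota": mota,
    }

evaluate_mota could then return (mota_dict, gt_to_tracked_id_current_frame) rather than the seven-element tuple.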

def evaluate_tracking(
self,
@@ -323,19 +339,46 @@
"""
mota_values = []
prev_frame_id_map: Optional[dict] = None
results: dict[str, Any] = {
"Frame Number": [],
"Total Ground Truth": [],
"True Positives": [],
"Missed Detections": [],
"False Positives": [],
"Number of Switches": [],
"Mota": [],
sfmig marked this conversation as resolved.
}

for frame_number in sorted(ground_truth_dict.keys()):
gt_data_frame = ground_truth_dict[frame_number]

if frame_number < len(predicted_dict):
pred_data_frame = predicted_dict[frame_number]
mota, prev_frame_id_map = self.evaluate_mota(

(
mota,
true_positives,
missed_detections,
false_positives,
num_switches,
total_gt,
prev_frame_id_map,
) = self.evaluate_mota(
gt_data_frame,
pred_data_frame,
self.iou_threshold,
prev_frame_id_map,
)
mota_values.append(mota)
results["Frame Number"].append(frame_number)

Collaborator:

If we make evaluate_mota return a dict for the MOTA and its components (called for example mota_dict), then we can make results have the same keys. That way we can make this bit smaller:

for key in results.keys():
	results[key].append(mota_dict[key])
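
Under that assumption, a sketch of the consuming side (hypothetical helper; note that "Frame Number" is not a MOTA component, so it is appended separately):

from typing import Any, Dict

def append_frame_metrics(
    results: Dict[str, list],
    frame_number: int,
    mota_dict: Dict[str, Any],
) -> None:
    # "Frame Number" comes from the evaluation loop, not from evaluate_mota.
    results["Frame Number"].append(frame_number)
    for key, value in mota_dict.items():
        results[key].append(value)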

results["Total Ground Truth"].append(total_gt)
results["True Positives"].append(true_positives)
results["Missed Detections"].append(missed_detections)
results["False Positives"].append(false_positives)
results["Number of Switches"].append(num_switches)
results["Mota"].append(mota)

save_tracking_mota_metrics(self.tracking_output_dir, results)

return mota_values

1 change: 1 addition & 0 deletions crabs/tracker/track_video.py
@@ -213,6 +213,7 @@ def run_tracking(self):
self.args.gt_path,
self.tracked_bbox_id,
self.config["iou_threshold"],
self.tracking_output_dir,
)
evaluation.run_evaluation()

149 changes: 149 additions & 0 deletions crabs/tracker/utils/io.py
@@ -1,9 +1,11 @@
import argparse
import csv
import os
from datetime import datetime
from pathlib import Path

import cv2
import matplotlib.pyplot as plt
import numpy as np

from crabs.detector.utils.visualization import draw_bbox
@@ -154,6 +156,7 @@ def save_required_output(
frame_copy = frame.copy()
for bbox in tracked_boxes:
xmin, ymin, xmax, ymax, id = bbox
print(f"Calling draw_bbox with {bbox}")
sfmig marked this conversation as resolved.
draw_bbox(
frame_copy,
(xmin, ymin),
@@ -178,3 +181,149 @@
"""
if video_output:
video_output.release()


def read_metrics_from_csv(filename):
"""
Read the tracking output metrics from a CSV file.
To be called by plot_output_histogram.

Parameters
----------
filename : str
Name of the CSV file to read.

Returns
-------
tuple:
Tuple containing lists of true positives, missed detections,
false positives, number of switches, total ground truth, and
MOTA values for each frame.
"""
true_positives_list = []
missed_detections_list = []
false_positives_list = []
num_switches_list = []
total_ground_truth_list = []
mota_value_list = []

with open(filename, mode="r") as file:
reader = csv.DictReader(file)
for row in reader:
true_positives_list.append(int(row["True Positives"]))
missed_detections_list.append(int(row["Missed Detections"]))
false_positives_list.append(int(row["False Positives"]))
num_switches_list.append(int(row["Number of Switches"]))
total_ground_truth_list.append(int(row["Total Ground Truth"]))
mota_value_list.append(float(row["Mota"]))

return (
true_positives_list,

Collaborator:

maybe this tuple can be a dict instead? It's a bit less of a code smell


Collaborator:

I think if we read the csv as a pandas dataframe instead we can extract the columns more efficiently (that is, without explicit looping).

There is also a dataframe .to_dict() method, so we may be able to get the output dictionary in one go this way.

missed_detections_list,
false_positives_list,
num_switches_list,
total_ground_truth_list,
mota_value_list,
)
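
A minimal sketch of the pandas-based reader suggested above (hypothetical function name; assumes the column headers written by save_tracking_mota_metrics):

import pandas as pd

def read_metrics_from_csv_as_dict(filename: str) -> dict:
    # Each CSV column ("True Positives", "Mota", ...) becomes a key mapping
    # to the list of that metric's per-frame values.
    return pd.read_csv(filename).to_dict(orient="list")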


def plot_output_histogram(filename):
"""
Plot per-frame tracking metrics as percentages of total ground truth.

Example usage:
> python crabs/tracker/utils/io.py <video_name>/tracking_metrics_output.csv

Parameters
----------
filename : str
Path to the CSV file of per-frame tracking metrics (true positives,
missed detections, false positives, identity switches, total ground
truth, and MOTA), as written by save_tracking_mota_metrics.
"""
(
true_positives_list,
missed_detections_list,
false_positives_list,
num_switches_list,
total_ground_truth_list,
mota_value_list,
) = read_metrics_from_csv(filename)
filepath = Path(filename)
plot_name = filepath.name

num_frames = len(true_positives_list)
frames = range(1, num_frames + 1)
sfmig marked this conversation as resolved.

plt.figure(figsize=(10, 6))

overall_mota = sum(mota_value_list) / len(mota_value_list)

# Calculate percentages
true_positives_percentage = [
tp / gt * 100 if gt > 0 else 0
for tp, gt in zip(true_positives_list, total_ground_truth_list)
]
missed_detections_percentage = [
md / gt * 100 if gt > 0 else 0
for md, gt in zip(missed_detections_list, total_ground_truth_list)
]
false_positives_percentage = [
fp / gt * 100 if gt > 0 else 0
for fp, gt in zip(false_positives_list, total_ground_truth_list)
]
num_switches_percentage = [
ns / gt * 100 if gt > 0 else 0
for ns, gt in zip(num_switches_list, total_ground_truth_list)
]

# Plot metrics
plt.plot(
frames,
true_positives_percentage,
label=f"True Positives ({sum(true_positives_list)})",
color="g",
)
plt.plot(
frames,
missed_detections_percentage,
label=f"Missed Detections ({sum(missed_detections_list)})",
color="r",
)
plt.plot(
frames,
false_positives_percentage,
label=f"False Positives ({sum(false_positives_list)})",
color="b",
)
plt.plot(
frames,
num_switches_percentage,
label=f"Number of Switches ({sum(num_switches_list)})",
color="y",
)

plt.xlabel("Frame Number")
plt.ylabel("Percentage of Total Ground Truth (%)")
plt.title(f"{plot_name}_mota:{overall_mota:.2f}")

plt.legend()
plt.savefig(f"{plot_name}.pdf")
plt.show()


if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Plot output histogram.")
parser.add_argument(
"filename",
type=str,
help="Path to the CSV file containing the metrics",
)
args = parser.parse_args()
plot_output_histogram(args.filename)
10 changes: 10 additions & 0 deletions crabs/tracker/utils/tracking.py
@@ -5,6 +5,7 @@

import cv2
import numpy as np
import pandas as pd


def extract_bounding_box_info(row: list[str]) -> Dict[str, Any]:
@@ -152,3 +153,12 @@ def prep_sort(prediction: dict, score_threshold: float) -> np.ndarray:
pred_sort.append(bbox)

return np.asarray(pred_sort)


def save_tracking_mota_metrics(
tracking_output_dir: Path,
track_results: dict[str, Any],
) -> None:
track_df = pd.DataFrame(track_results)
output_filename = f"{tracking_output_dir}/tracking_metrics_output.csv"
sfmig marked this conversation as resolved.
track_df.to_csv(output_filename, index=False)
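
As a side note, a sketch of the same writer using pathlib joining instead of an f-string (equivalent behaviour, assuming tracking_output_dir is an existing directory):

from pathlib import Path
from typing import Any

import pandas as pd

def save_tracking_mota_metrics_sketch(
    tracking_output_dir: Path,
    track_results: dict[str, Any],
) -> None:
    # Path's "/" operator builds the output path portably.
    output_filename = tracking_output_dir / "tracking_metrics_output.csv"
    pd.DataFrame(track_results).to_csv(output_filename, index=False)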