
Log trained model used and other essentials during inference #256

Open
sfmig opened this issue Nov 21, 2024 · 0 comments
sfmig (Collaborator) commented Nov 21, 2024

Usually (in training and evaluation) we do this using MLflow.

However, integrating MLflow into the inference step may be more complex than in the other cases. I think it may require defining a dataloader from a video (see #238).

In the meantime, it may be a good idea to implement a simple way of keeping track of this basic information:

  • detector model used,
  • tracking config used,
    • in the bash script for an array job on the HPC we save the tracking config alongside the output data (implemented in Bash script for running inference on clips #251), but we don't do this when we run the command locally. A "local solution" that also extends to the bash script would be ideal.
  • ground truth used if available,
  • selected outputs (whether videos are saved, whether frames are saved),
  • anything else?
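Until MLflow is wired into inference, one lightweight option is to dump the items above as a JSON sidecar file next to the inference outputs; the same function could then be called both from the local command and from the HPC bash script. A minimal sketch — the function name, field names, and output filename are all hypothetical, not part of the current codebase:

```python
import json
from datetime import datetime, timezone
from pathlib import Path


def save_inference_metadata(
    output_dir,
    detector_model,
    tracking_config,
    ground_truth=None,
    save_videos=False,
    save_frames=False,
):
    """Write a small JSON sidecar recording the essentials of an inference run.

    Returns the path to the metadata file, written inside ``output_dir``.
    """
    output_dir = Path(output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)

    metadata = {
        # UTC timestamp of the run, so outputs can be matched to a session
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # checkpoint of the trained detector used for this run
        "detector_model": str(detector_model),
        # tracking config, stored inline rather than as a separate copied file
        "tracking_config": tracking_config,
        # ground truth annotations, if available
        "ground_truth": str(ground_truth) if ground_truth is not None else None,
        # which outputs were selected
        "outputs": {"videos_saved": save_videos, "frames_saved": save_frames},
    }

    metadata_file = output_dir / "inference_metadata.json"
    metadata_file.write_text(json.dumps(metadata, indent=2))
    return metadata_file
```

Storing the tracking config inline (rather than copying the YAML file, as the HPC script does) keeps everything in one place, but either convention would work as long as both entry points use it.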