Evaluate script: review of command-line arguments #172
Conversation
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##             main     #172      +/-   ##
==========================================
- Coverage   37.73%   37.11%    -0.63%
==========================================
  Files          20       20
  Lines        1349     1412       +63
==========================================
+ Hits          509      524       +15
- Misses        840      888       +48
```

☔ View full report in Codecov by Sentry.
Looking good to me. Just wondering if we should move some of the functions like the `get_..` ones to `evaluate_utils`, to make `evaluate_model` a bit cleaner.
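A rough sketch of what that refactor could look like; the function name `get_config_from_ckpt` and the key names below are assumptions for illustration, not the PR's actual code:

```python
# evaluate_utils.py -- illustrative sketch; names are assumed, not from the PR
import torch


def get_config_from_ckpt(ckpt_path: str) -> dict:
    """Return the training config stored in a Lightning checkpoint."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    # Lightning stores save_hyperparameters() content under "hyper_parameters"
    return ckpt.get("hyper_parameters", {}).get("config", {})
```

`evaluate_model` would then import these helpers rather than defining them inline.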
…/crabs-exploration into smg/evaluate-cli

* move eval params to config (WIP)
* follow train CLI: add debugger options, add experiment name, use score_threshold from config
* fix prettier
* edit CLI defaults and get dataset params from ckpt if not defined (WIP)
* fix ninja comma
* Add sections to config
* Rename to evaluate utils
* Match current train script and add slurm logs as artifacts
* Fix evaluate_utils
* Use config from ckpt if not passed. Use dataset, annot files and seed from ckpt if not passed.
* Clarify CLI help (hopefully)
* Add score threshold for visualisation as CLI argument
* Small fix to config yaml
* Clean up
* Fix save frames and add output_dir
* Fix tests
* Move get_ functions to evaluate utils
* Replace assert by try-except
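The last commit above swaps `assert` for `try`-`except`; a minimal sketch of that pattern (the function name and error message are assumptions, not the PR's code):

```python
# Before (sketch): a failed check surfaced as a bare AssertionError
#   assert config_file.exists(), "config file not found"
# After (sketch): raise a descriptive, catchable error instead
from pathlib import Path


def load_config_text(config_file: Path) -> str:
    try:
        return config_file.read_text()
    except FileNotFoundError as err:
        raise FileNotFoundError(
            f"Config file not found at {config_file}. "
            "Pass a valid path via the CLI or rely on the checkpoint's config."
        ) from err
```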
This PR changes the CLI arguments of the evaluate script to better match the training script.
The most common use would be to run:
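For example (the entry-point name `evaluate-detector` and the `--checkpoint_path` flag are assumptions; the actual names may differ):

```bash
# Hypothetical invocation -- entry-point and flag names are assumptions
evaluate-detector --checkpoint_path path/to/trained_model.ckpt
```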
This will load the trained model and use its training config to run the evaluation. Note that it is important to use the config from training, so that the same dataset splits are built and the correct test split is used.
However, if a dataset or a config file is passed via the CLI, those values override the ones read from the checkpoint.
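A minimal sketch of that precedence rule, assuming the config is a YAML file and the checkpoint already holds a parsed config dict (argument names are illustrative):

```python
# Sketch of "CLI overrides checkpoint" config resolution; names are illustrative
import yaml


def resolve_config(cli_config_path: str | None, ckpt_config: dict) -> dict:
    """Prefer a config file given on the command line; otherwise
    fall back to the config stored in the trained model's checkpoint."""
    if cli_config_path is None:
        return ckpt_config
    with open(cli_config_path) as f:
        return yaml.safe_load(f)
```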
Other additions
I will do the tests in a separate PR 😬 ---> #199