Commit
Evaluate script: review of command-line arguments (#172)
* move eval params to config (WIP)
* follow train CLI: add debugger options, add experiment name, use score_threshold from config
* fix prettier
* edit CLI defaults and get dataset params from ckpt if not defined (WIP)
* fix ninja comma
* Add sections to config
* Rename to evaluate utils
* Match current train script and add slurm logs as artifacts
* Fix evaluate_utils
* Use config from ckpt if not passed. Use dataset, annot files and seed from ckpt if not passed.
* Clarify CLI help (hopefully)
* Add score threshold for visualisation as CLI argument
* Small fix to config yaml
* Clean up
* Fix save frames and add output_dir
* Fix tests
* Move get_ functions to evaluate utils
* Replace assert by try-except
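Several items above describe the evaluate CLI falling back to values stored in the training checkpoint (config, dataset, annotation files, seed) when the user does not pass them, plus a score-threshold flag for visualisation. The sketch below illustrates that pattern under stated assumptions: the argument names, checkpoint keys ("config", "dataset_dirs", "annotation_files", "seed_n") and config layout are hypothetical, not the project's actual API.

```python
# Minimal sketch (assumed names/keys) of a CLI whose defaults come from the checkpoint.
import argparse

import torch
import yaml


def parse_args():
    parser = argparse.ArgumentParser(description="Evaluate a trained model")
    parser.add_argument("--checkpoint_path", required=True, help="Path to the trained checkpoint")
    parser.add_argument("--config_file", default=None, help="Evaluation config; defaults to the config saved in the checkpoint")
    parser.add_argument("--dataset_dirs", nargs="+", default=None, help="Dataset directories; default: taken from the checkpoint")
    parser.add_argument("--annotation_files", nargs="+", default=None, help="Annotation files; default: taken from the checkpoint")
    parser.add_argument("--seed_n", type=int, default=None, help="Random seed; default: taken from the checkpoint")
    parser.add_argument("--score_threshold", type=float, default=None, help="Score threshold for visualisation; default: value in the config")
    return parser.parse_args()


def get_config(args, checkpoint):
    # CLI config file wins; otherwise reuse the config stored at training time.
    if args.config_file is not None:
        with open(args.config_file) as f:
            return yaml.safe_load(f)
    return checkpoint["config"]


def get_dataset_params(args, checkpoint):
    # Fall back to the dataset, annotation files and seed used for training.
    return {
        "dataset_dirs": args.dataset_dirs or checkpoint["dataset_dirs"],
        "annotation_files": args.annotation_files or checkpoint["annotation_files"],
        "seed_n": args.seed_n if args.seed_n is not None else checkpoint["seed_n"],
    }


if __name__ == "__main__":
    args = parse_args()
    ckpt = torch.load(args.checkpoint_path, map_location="cpu")
    config = get_config(args, ckpt)
    dataset_params = get_dataset_params(args, ckpt)
    score_threshold = (
        args.score_threshold
        if args.score_threshold is not None
        else config["evaluation"]["score_threshold"]
    )
    print(config, dataset_params, score_threshold)
```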
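The last item, "Replace assert by try-except", suggests swapping hard assertions for explicit error handling. A hedged illustration of that change is below; the function and key names are placeholders, not the repository's actual code.

```python
# Hypothetical before/after for "replace assert by try-except": instead of
# asserting the checkpoint contains a stored config, catch the missing key
# and raise a clearer, user-facing error.
def config_from_checkpoint(checkpoint: dict) -> dict:
    try:
        return checkpoint["config"]
    except KeyError as err:
        raise ValueError(
            "No config found in the checkpoint; pass --config_file explicitly."
        ) from err
```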