
Evaluate script: review of command-line arguments #172

Merged: 22 commits from smg/evaluate-cli into main on Jun 27, 2024

Conversation

@sfmig (Collaborator) commented on Apr 18, 2024

This PR changes the CLI arguments of the evaluate script to better match the training script.

The most common use would be to run:

evaluate-detector --trained_model_path <path-to-ckpt>

This will load the trained model and use its config to run the evaluation. Note that it is important to use the config from training, so that we build the same dataset and select the correct test split.

However, if a dataset or a config file is specified via the CLI, those values override the ones read from the checkpoint.
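
As a rough sketch of what that override logic can look like (hedged: the `--config_file`/`--dataset_dirs` flags and the `dataset_dirs` config key are illustrative assumptions, not necessarily the actual crabs CLI):

```python
# Sketch only: checkpoint config as baseline, CLI values win when given.
import argparse

import yaml  # PyYAML


def resolve_config(cli_args: argparse.Namespace, ckpt_config: dict) -> dict:
    """Start from the checkpoint's config; override with CLI values if set."""
    config = dict(ckpt_config)
    if cli_args.config_file is not None:
        # A config file passed on the CLI replaces the checkpoint's config.
        with open(cli_args.config_file) as f:
            config = yaml.safe_load(f)
    if cli_args.dataset_dirs:
        # Hypothetical key: evaluate on the datasets given on the CLI instead.
        config["dataset_dirs"] = cli_args.dataset_dirs
    return config


parser = argparse.ArgumentParser()
parser.add_argument("--trained_model_path", required=True)
parser.add_argument("--config_file", default=None)  # assumed flag name
parser.add_argument("--dataset_dirs", nargs="*", default=[])  # assumed flag name
```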

Other additions

  • organises the config .yaml parameters into sections (feedback welcome; I am not sure which grouping is most intuitive),
  • adds evaluation logs to MLflow,
  • adds a small fix to save frames from the test set,
  • adds an option to save frames to a selected output directory (if none is given, a timestamped directory is used; see the sketch after this list),
  • fixes the test for saving frames and adapts it to check the timestamped directory.
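
A minimal sketch of that timestamped-directory fallback (the function name and directory prefix are hypothetical, not taken from the codebase):

```python
from datetime import datetime
from pathlib import Path
from typing import Optional


def get_frames_output_dir(user_dir: Optional[str]) -> Path:
    """Return the user-supplied directory if given, else a timestamped one."""
    if user_dir is not None:
        out_dir = Path(user_dir)
    else:
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        out_dir = Path(f"evaluation_output_{timestamp}")  # assumed prefix
    out_dir.mkdir(parents=True, exist_ok=True)
    return out_dir
```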

I will do the tests in a separate PR 😬 ---> #199

@sfmig force-pushed the smg/evaluate-cli branch from 42b7d87 to a659439 on April 18, 2024 15:59
@sfmig force-pushed the smg/evaluate-cli branch from a659439 to 2e262a7 on April 18, 2024 16:12
@codecov-commenter commented on Apr 18, 2024

Codecov Report

Attention: Patch coverage is 41.80328% with 71 lines in your changes missing coverage. Please review.

Project coverage is 37.11%. Comparing base (5a47ee0) to head (8bb9d9c).

Files                                        Patch %   Lines
crabs/detection_tracking/evaluate_utils.py    51.16%   42 Missing ⚠️
crabs/detection_tracking/evaluate_model.py     0.00%   28 Missing ⚠️
crabs/detection_tracking/visualization.py     85.71%    1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #172      +/-   ##
==========================================
- Coverage   37.73%   37.11%   -0.63%     
==========================================
  Files          20       20              
  Lines        1349     1412      +63     
==========================================
+ Hits          509      524      +15     
- Misses        840      888      +48     

☔ View full report in Codecov by Sentry.

@sfmig changed the title from "Evaluate CLI rev" to "Evaluate script: review of command-line arguments" on Apr 19, 2024
@nikk-nikaznan mentioned this pull request on Jun 26, 2024
@sfmig force-pushed the smg/evaluate-cli branch from 739fe44 to 9ae8730 on June 26, 2024 17:21
@sfmig marked this pull request as ready for review on June 26, 2024 17:27
@sfmig mentioned this pull request on Jun 26, 2024
@sfmig requested a review from nikk-nikaznan on June 26, 2024 17:33
@nikk-nikaznan (Collaborator) left a comment:

Looking good to me. Just wondering if we should move some of the functions, like the get_ ones, to evaluate_utils to make evaluate_model a bit cleaner.

3 resolved review threads on crabs/detection_tracking/evaluate_model.py
@sfmig merged commit 2bf7705 into main on Jun 27, 2024
6 checks passed
@sfmig deleted the smg/evaluate-cli branch on June 27, 2024 10:08
sfmig added a commit that referenced this pull request Jul 8, 2024
* move eval params to config (WIP)

* follow train CLI: add debugger options, add experiment name, use score_threshold from config

* fix prettier

* edit CLI defaults and get dataset params from ckpt if not defined (WIP)

* fix ninja comma

* Add sections to config

* Rename to evaluate utils

* Match current train script and add slurm logs as artifacts

* Fix evaluate_utils

* Use config from ckpt if not passed. Use dataset, annot files and seed from ckpt if not passed.

* Clarify CLI help (hopefully)

* Add score threshold for visualisation as CLI argument

* Small fix to config yaml

* Clean up

* Fix save frames and add output_dir

* Fix tests

* Move get_ functions to evaluate utils

* Replace assert by try-except
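
As an illustration of the last commit in the list above ("Replace assert by try-except"), a hedged sketch of that pattern with hypothetical names, not taken from the codebase:

```python
from pathlib import Path


def checked_checkpoint_path(path_str: str) -> Path:
    """Validate a checkpoint path without relying on `assert`."""
    ckpt_path = Path(path_str)
    # Before: `assert ckpt_path.exists()`, which is silently skipped when
    # Python runs with -O; after: an explicit check that is always executed.
    try:
        ckpt_path.resolve(strict=True)  # raises FileNotFoundError if missing
    except FileNotFoundError as err:
        raise FileNotFoundError(f"Checkpoint not found: {ckpt_path}") from err
    return ckpt_path
```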