Change Getting started documentation to point users to faster models #730
Do we want users to use models that are less accurate? That is the implicit trade-off of showcasing faster, more approximate models as the first thing people see (since a lot of people will then just use that).
I'm not sure this really represents what a real user would do when assessing a package, or, more generally, a very credible package review, so I am not super keen to make decisions that optimise for it. All that being said, I really don't feel that strongly. I think the minimum we should do is clearly point people to the fact that there are different model formulations they could use, with different properties.
All good points. I think more generally, though, new users will use tools they saw others use and may also avoid ones that had a bad or unfair review. So, if people see elsewhere that our models are slow, they may not even try to use them. Moreover, I may be wrong in saying this, but users often may not take the time to try out different packages before making a choice.
To be honest, my view is that we need better multi-model evaluations across groups, rather than optimising for the current status quo.
Alright. I'll close this issue.
Do we want to reopen this and, instead of changing the default, improve the signposting to faster model configurations?
This issue can be solved after merging the benchmarking vignette in #695.
I was curious about how the paper "Real-time estimation of the epidemic reproduction number: Scoping review of the applications and challenges" measured run times of the various R packages and came across this section in the supplementary material where more details are given (page 3):
This has got me thinking about whether we should change our docs to use the quicker models that sacrifice some accuracy, since that is what users will interact with first (copy and paste to try out), but with a clear caveat. We can then signpost to the slower but more accurate models for real-world use cases (a rough sketch of what such a quick-start snippet could look like is below).
I'm also making a note here to raise an issue in EpiEstim to re-assess the speed score in this table, using other faster and relatively accurate models, with the evidence in #695.
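As an illustration of the signposting idea, here is a rough sketch of what a faster, more approximate configuration could look like in the Getting started docs. It assumes an EpiNow2-style interface; the specific function names, arguments, and example data objects are assumptions and would need checking against the package's current API rather than being a confirmed snippet.

```r
# A minimal sketch of the proposed "quick start" configuration, assuming an
# EpiNow2-style interface. Names and arguments are assumptions, not confirmed.
library(EpiNow2)

fast_estimates <- epinow(
  data = example_confirmed,                      # bundled example data (assumed name)
  generation_time = generation_time_opts(example_generation_time),
  rt = rt_opts(rw = 7),                          # weekly random walk for Rt instead of the default
  gp = NULL,                                     # drop the Gaussian process: faster, less flexible
  stan = stan_opts(samples = 1000, chains = 2)   # fewer samples and chains than the defaults
)
```

The accompanying caveat in the docs would then point readers back to the default (slower, Gaussian-process based) formulation for real-world analyses, which is the signposting discussed above.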