To reduce running costs and deployment complexity, these bots could be migrated to a set of serverless functions.
As long as the serverless platform can invoke Python functions at sub-minute latency, it would likely be much cheaper than the current k8s setup, though this should be confirmed with a cost estimate against the planned function provider.
Simply multiply the per-call cost (scaled by the amount of resources each bot requires) by the number of calls per month (most bots update every few minutes or once per minute).
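The per-bot estimate above can be sketched as a quick back-of-the-envelope calculation. The pricing constants below are illustrative assumptions (loosely modelled on typical per-GB-second serverless billing), not the planned provider's actual rates:

```python
def monthly_invocations(interval_minutes: float) -> float:
    """Invocations per month for a bot that runs every `interval_minutes` minutes."""
    minutes_per_month = 60 * 24 * 30  # ~30-day month
    return minutes_per_month / interval_minutes

def monthly_cost(interval_minutes: float,
                 avg_duration_s: float,
                 memory_gb: float,
                 price_per_gb_s: float = 0.0000166667,       # assumed rate, replace with provider's
                 price_per_million_calls: float = 0.20) -> float:  # assumed rate
    """Estimated monthly cost for one bot: compute (GB-seconds) plus
    per-invocation charges. Swap in the real provider pricing before
    drawing conclusions."""
    calls = monthly_invocations(interval_minutes)
    compute = calls * avg_duration_s * memory_gb * price_per_gb_s
    requests = calls / 1_000_000 * price_per_million_calls
    return compute + requests

# Example: a bot updating once per minute, 2 s runtime, 256 MB memory
print(f"${monthly_cost(1, 2.0, 0.25):.2f}/month")
```

Summing this over all current and projected bots gives a figure to compare directly against the monthly k8s bill.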
The key question is whether modern serverless hosting providers can deliver sub-minute responses at lower cost than the current setup, for all current bots and any projected new ones. Python typically has slow cold-start times, so this likely means either keeping function instances "warm", or potentially redeveloping the bots in TypeScript for consistency with the rest of the codebase (with the help of ChatGPT?).
Scoping this as an investigation ticket to keep it achievable; if this were to be implemented, there would need to be subsequent implementation tickets.