The main reasoning behind hosting models locally is cost and privacy. Dwata can use models like Llama 3 through existing providers while workflows are being refined; once a workflow is settled, users can switch to locally hosted models.
We want to enable running chats and workflows with locally hosted language models, either directly on the machine that runs Dwata or on a server that Dwata can manage over SSH. In either case, we want to use Ollama, llama.cpp, or similar software and integrate with its API. Server management and software installation over SSH will be handled separately; Docker is probably a good starting point, and we can install directly on a Debian host if needed.
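As a rough illustration of the integration point (not Dwata's actual code), here is a minimal sketch that sends a prompt to a locally running Ollama instance over its HTTP API. It assumes Ollama is listening on its default port 11434 and that a model named `llama3` has already been pulled; both are assumptions about the local setup, and a llama.cpp server would expose a different endpoint.

```python
# Minimal sketch: one prompt/response round trip against a local Ollama instance.
# Assumes Ollama is running on localhost:11434 and the "llama3" model is pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"


def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a single prompt to a locally hosted model via Ollama's HTTP API."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one JSON object instead of a token stream
    }).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.loads(response.read().decode("utf-8"))
    return body["response"]


if __name__ == "__main__":
    print(ask_local_model("Summarise the goals of this workflow in one sentence."))
```

Because the API is just HTTP, the same call works whether the model runs on the Dwata machine or on a remote server; only the host in the URL changes, which keeps the SSH/Docker provisioning concern separate from the chat integration.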