[feat] Add Multiple LLM Provider Support (OpenAI/Gemini/Self-Hosted) #2
Comments
Hey @khelli07, would this be a valid addition to the project? If so, I’d love to contribute and work on it!
Hey, yeah, sure, if you are interested in it. Actually, it's been a while since I touched this project, but you can see the documentation here. I remember there are still some bugs and limitations in it, so you can add the feature you proposed or improve other things. I would suggest doing the thing you proposed first, though. I think it's quite simple, since Ollama has the same API as OpenAI, iirc.
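To illustrate the point about Ollama's OpenAI compatibility: Ollama exposes an OpenAI-compatible `/v1` endpoint, so the same chat-completions request body can be sent to either backend, with only the base URL and model name changing. This is a minimal sketch (URLs and model names are illustrative defaults, not from this project); it builds the request but does not send it.

```python
def chat_request(base_url: str, model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions request (not sent here)."""
    return {
        "url": f"{base_url}/chat/completions",
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# Same request shape for both providers; only the URL and model differ.
openai_req = chat_request("https://api.openai.com/v1", "gpt-4o-mini", "hi")
ollama_req = chat_request("http://localhost:11434/v1", "llama3", "hi")

assert openai_req["json"]["messages"] == ollama_req["json"]["messages"]
```

Because the payload shape is shared, a provider switch can reduce to swapping the base URL in configuration.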
Thank you for sharing the documentation! I’ll go through it and start working on it.
Hey @khelli07, I’ve made changes to add support for multiple LLM providers, but I’m facing an issue when trying to authorize the scheduler. When I type /schedule authorize, nothing happens. Any idea what might be wrong? Also, should I create a new issue for this, or can I include it when I push the PR?
Description
The app currently only supports Ollama for AI features. Allow users to choose between:
- OpenAI
- Gemini
- Self-hosted models (e.g. Ollama)
Proposed Solution
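One possible shape for this is a small provider abstraction with a registry, so the active backend can be chosen via configuration. This is a hypothetical sketch; the class and method names (`LLMProvider`, `complete`, `get_provider`) are illustrative and not taken from the project.

```python
class LLMProvider:
    """Common interface every provider implements (hypothetical)."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError

class OllamaProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # A real implementation would POST to the local Ollama server.
        return f"[ollama] {prompt}"

class OpenAIProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # A real implementation would call the OpenAI API.
        return f"[openai] {prompt}"

# Registry so users can pick a provider by name, e.g. provider=openai in config.
PROVIDERS = {
    "ollama": OllamaProvider,
    "openai": OpenAIProvider,
}

def get_provider(name: str) -> LLMProvider:
    return PROVIDERS[name]()
```

New backends (e.g. Gemini) would then only need to implement the same interface and register themselves, leaving the rest of the app untouched.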
I want to implement this! 🙋
Let me know if this aligns with the roadmap! I’d love to collaborate on it.