Stop sequences parameter in gpt.py should be None? #19

When I use the OpenAI Chat Completions API provided by Ollama, the content returned by the LLM is empty. When the stop parameter is set to None, the LLM responds normally.
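For context, here is a minimal sketch of the behavior described, assuming the standard openai Python client pointed at Ollama's OpenAI-compatible endpoint (the model name and prompt are placeholders, not values from gpt.py):

```python
from openai import OpenAI

# Ollama exposes an OpenAI-compatible API at /v1; the api_key is unused but required.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# With a non-empty stop list, some models served by Ollama return an empty
# completion. Passing stop=None avoids the issue described above.
response = client.chat.completions.create(
    model="llama3",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize this function."}],
    stop=None,  # instead of e.g. stop=["\n\n"]
)
print(response.choices[0].message.content)
```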
Hey, appreciate the feedback. We didn't test the framework with Ollama. Infrastructure for using models is ultimately up to the user; we simply provided the model wrappers we used in our experiments as an example. If you'd like, you can make a pull request to include generalized Ollama support! Thanks.
Merged
Sorry for the late response! I've added a simple implementation.
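The merged implementation itself isn't shown in this thread. As an illustrative sketch only, a generalized Ollama wrapper along these lines would fit the pattern discussed (the class name, method name, and defaults below are hypothetical, not the actual interface in zrquan/iris):

```python
from openai import OpenAI


class OllamaModel:
    """Hypothetical wrapper around Ollama's OpenAI-compatible endpoint.

    Names and interface are illustrative; see the merged commits in
    zrquan/iris for the actual implementation.
    """

    def __init__(self, model_name: str, base_url: str = "http://localhost:11434/v1"):
        self.model_name = model_name
        self.client = OpenAI(base_url=base_url, api_key="ollama")  # key is unused

    def predict(self, prompt: str, temperature: float = 0.0) -> str:
        response = self.client.chat.completions.create(
            model=self.model_name,
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
            stop=None,  # per this issue: a non-None stop can yield empty content
        )
        return response.choices[0].message.content or ""
```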
zrquan added a commit to zrquan/iris that referenced this issue on Mar 3, 2025
zrquan added a commit to zrquan/iris that referenced this issue on Mar 5, 2025
zrquan added a commit to zrquan/iris that referenced this issue on Mar 5, 2025
zrquan added a commit to zrquan/iris that referenced this issue on Mar 5, 2025
zrquan added a commit to zrquan/iris that referenced this issue on Mar 5, 2025