[Bug] How do I route all models through a single OpenAI-compatible proxy? #6339
Comments
📦 Deployment method
Docker
📌 Software version
Current version: v2.15.8
💻 System environment
Other
📌 System version
Current version: v2.15.8
🌐 Browser
Other
📌 Browser version
64
🐛 Problem description
Whoever decided to split each API provider onto its own endpoint is a genius. Almost every relay service out there exposes an OpenAI-compatible proxy interface, so automatically routing each model to its vendor's official API makes all of those relays unusable.
I assumed that passing -e CUSTOM_MODELS="-all,gemini-2.0-flash" \ would at least force that one model through a single endpoint, but the request still went to Google's API. Why not simply provide one OpenAI-style endpoint where models can be added or removed?
📷 Reproduction steps
No response
🚦 Expected results
No response
📝 Supplementary information
No response
It is recommended that you read the NextChat configuration documentation carefully; it explains this clearly. You can simply use gemini-2.0-flash@OpenAI.
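As a concrete illustration of that suggestion, here is a minimal sketch of a Docker deployment that pins every visible model to the OpenAI provider. The proxy URL and API key are placeholders for your own relay, and the image name assumes the standard NextChat Docker image:

```sh
# Minimal sketch: route everything through one OpenAI-compatible proxy.
# BASE_URL and the API key below are placeholders for your own relay.
docker run -d -p 3000:3000 \
  -e OPENAI_API_KEY="sk-your-relay-key" \
  -e BASE_URL="https://your-openai-proxy.example.com" \
  -e CUSTOM_MODELS="-all,gemini-2.0-flash@OpenAI" \
  yidadaa/chatgpt-next-web
```

The -all clears the built-in model list, and the @OpenAI suffix forces the remaining model onto the OpenAI channel instead of letting NextChat infer the Google endpoint from the model name.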
You're welcome to ask questions, but first figure out what the problem actually is instead of opening fire right away. If you run into a short-tempered developer, they'll just close your issue, and then you'll be unhappy.
Otherwise, you can deploy oneapi yourself.
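For reference, a minimal sketch of a self-hosted one-api deployment; the image name and /data volume follow the one-api project's documented setup, and the local data path is an assumption to adapt:

```sh
# Minimal sketch, assuming the justsong/one-api Docker image from the
# one-api project; mount a local directory to persist its data.
docker run -d --name one-api \
  -p 3001:3000 \
  -v /path/to/one-api/data:/data \
  justsong/one-api
```

You would then point NextChat's BASE_URL at the one-api address, so every upstream provider configured in one-api is exposed through a single OpenAI-compatible endpoint.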