Replies: 4 comments 2 replies
-
I'm unsure whether this is a bug, perhaps?
-
After further research in the docs, I found an example of how the model information should be formatted. E.g.:

```json
"model_client": {
    "model": "bartowski/DeepSeek-R1-Distill-Qwen-7B-GGUF",
    "model_type": "OpenAIChatCompletionClient",
    "base_url": "http://localhost:1234/v1",
    "api_version": "1.0",
    "component_type": "model",
    "model_capabilities": {
        "vision": false,
        "function_calling": true,
        "json_output": false
    }
},
```

@wpcool It would be helpful to see your full JSON, but my suspicion is that your "model" field either didn't conform to the xxx/yyyy standard, or the order of the items in the dictionary differs from the above, perhaps?
-
Did you solve this?
-
The model name should follow the names given in the file "_model_info.py" (file path: .../site-packages/autogen_ext/models/openai/_model_info.py). I was trying to use Gemini Flash Lite and gave the model name as "gemini-2.0-flash-lite", as per the documentation at https://ai.google.dev/gemini-api/docs/models/gemini#gemini-2.0-flash-lite. After changing it to the name given in the _model_info file, the error was resolved.
-
I use another LLM as the base model. When I run it in the playground, it errors with: "Team creation failed: Model creation failed: model_info is required when model name is not a valid OpenAI model". But I have written model_info in the "model_client".
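
For comparison, here is a minimal sketch of where a `model_info` block can sit inside `model_client` when the model name is not a known OpenAI one. It follows the JSON shape quoted earlier in this thread; the `model` and `base_url` values are placeholders, and the `family` key is an assumption based on AutoGen's `ModelInfo` type, so check the `_model_info.py` in your installed version for the exact keys it expects:

```json
"model_client": {
    "model": "my-org/my-local-model",
    "model_type": "OpenAIChatCompletionClient",
    "base_url": "http://localhost:1234/v1",
    "component_type": "model",
    "model_info": {
        "vision": false,
        "function_calling": true,
        "json_output": false,
        "family": "unknown"
    }
}
```

If the error persists even though `model_info` is present, it may be nested at the wrong level (it should be a sibling of `model`, not inside another block), or your AutoGen version may expect the older `model_capabilities` key instead.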
