Fix trainer and Qwen2-VL #179
Conversation
Any plan to release a new version? mlx-lm 0.21.0 has pinned it. Once mlx-vlm releases a new version, we could loosen the pin.
Yes, there will be a new release this week :)
Not sure I follow. Could you elaborate?
I will pin v0.22.0 as well. There are some recent models and changes that require the latest MLX release.
In which part do you need help?
I need more context... For instance, what is Xinference?
Oh, I misunderstood. https://github.com/xorbitsai/inference is our project, and we have adopted your project as the Mac VLM engine.
We have pinned the mlx version, but the better option would be for mlx-vlm to be compatible with mlx>=0.22.0.
No worries. So awesome, really happy to see this! 🔥 Sure thing, I will update the dependency to use >=0.22.0 in the next release.
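To illustrate what loosening the pin to `>=0.22.0` means in practice, here is a minimal sketch of how a lower-bound version check behaves. This is only an illustration of the semantics (real tools like pip follow PEP 440, which also handles pre-releases and multi-clause specifiers); the function names are hypothetical.

```python
# Minimal sketch: compare a release version against a lower bound,
# mimicking how a loosened pin like "mlx>=0.22.0" would behave.
# Handles plain "X.Y.Z" release versions only, not full PEP 440.

def version_tuple(version: str) -> tuple:
    """Turn '0.22.0' into (0, 22, 0) for numeric comparison."""
    return tuple(int(part) for part in version.split(".")[:3])

def satisfies_lower_bound(installed: str, minimum: str) -> bool:
    """True when the installed version meets or exceeds the minimum."""
    return version_tuple(installed) >= version_tuple(minimum)

print(satisfies_lower_bound("0.22.0", "0.22.0"))  # True: bound is inclusive
print(satisfies_lower_bound("0.21.0", "0.22.0"))  # False: too old
print(satisfies_lower_bound("0.23.1", "0.22.0"))  # True: newer releases pass
```

Unlike an exact pin (`mlx==0.21.0`), a `>=` bound lets downstream projects such as Xinference pick up newer mlx releases without waiting for mlx-vlm to update its dependency.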
@Blaizzy when will the next release come? Our CI is broken, and we would really appreciate it if you could release a new version as soon as possible.
Hey, it's done, v0.1.11 is out :)
Closes #176
Closes #173