latest-aio-gpu-intel-f32 docker not working on the arc a750 #4905
I am using the latest-aio-gpu-intel-f16 docker image with an i5-12400 iGPU. The model cannot be loaded normally, but it works fine when I switch to pure CPU:
5:19PM INF [/build/backend/python/coqui/run.sh] Fails: failed to load model with internal loader: backend not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/coqui/run.sh
I am using an Intel Arc A750, as shown in the logs.
Have you looked at #3967? In your log.txt there is a similar error.
It seems they had problems with outdated dependencies. I'll check whether an apt update/upgrade inside the container fixes things for me.
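For reference, a minimal sketch of that update step (the container name local-ai is an assumption; substitute whatever your container is called):

```bash
# Open a shell in the running LocalAI container and update the distro packages.
docker exec -it local-ai bash -c "apt-get update && apt-get upgrade -y"
```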
If the host's oneAPI works with your Arc GPU, it might be possible to add '-v /opt/intel/oneapi:/opt/intel/oneapi' to the docker container and use the host's libraries. That would save the update/rebuild step.
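A rough sketch of what such a run could look like (the image name, port, and /dev/dri passthrough are assumed from the LocalAI defaults; adjust to your setup):

```bash
# Bind-mount the host's oneAPI install into the container so it picks up
# the host's libraries instead of the (possibly outdated) bundled ones.
docker run -p 8080:8080 \
  --device /dev/dri \
  -v /opt/intel/oneapi:/opt/intel/oneapi \
  --name local-ai \
  localai/localai:latest-aio-gpu-intel-f32
```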
That would work, but unfortunately I use Unraid, which has no package manager, and the root filesystem resets on every reboot (except for some config files, of course).
I updated the packages inside the container, which was quite a lot, but no dice.
LocalAI version:
docker image with the tag :latest-aio-gpu-intel-f32
Environment, CPU architecture, OS, and Version:
Linux Zana 6.6.68-Unraid #1 SMP PREEMPT_DYNAMIC Tue Dec 31 13:42:37 PST 2024 x86_64 11th Gen Intel(R) Core(TM) i5-11400 @ 2.60GHz GenuineIntel GNU/Linux
64 GB RAM
Describe the bug
The GPU does not work, nor do I get any output from chat. However, the tag master-vulkan-ffmpeg-core works just fine with the GPU.
To Reproduce
Just run the docker image latest-aio-gpu-intel-f32 while passing through /dev/dri and use LocalAI, as sketched below.
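A minimal reproduction sketch, assuming the default port and the Docker Hub image name (on Unraid the equivalent is configured through the container template):

```bash
# Start the Intel f32 all-in-one image with the GPU render nodes passed through.
docker run -p 8080:8080 \
  --device /dev/dri \
  --name local-ai \
  localai/localai:latest-aio-gpu-intel-f32
```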
Expected behavior
GPU-accelerated chatting
Logs
logs.txt
Additional context
I use the A750 with 8 GB of VRAM, which is not a lot, but at least it's something.