
llama.cpp: server-cuda-b5014 (Public · Latest)

Install from the command line
$ docker pull ghcr.io/ggml-org/llama.cpp:server-cuda-b5014
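After pulling, the server image can be started along these lines. This is a sketch, not the image's documented invocation: the model path, file name, and port are placeholders, and running the CUDA image assumes the NVIDIA Container Toolkit is installed on the host.

```shell
# Sketch: run the CUDA server image with a locally downloaded GGUF model.
# /path/to/models and model.gguf are placeholders for your own files;
# --gpus all requires the NVIDIA Container Toolkit on the host.
docker run --gpus all -p 8080:8080 \
  -v /path/to/models:/models \
  ghcr.io/ggml-org/llama.cpp:server-cuda-b5014 \
  -m /models/model.gguf \
  --host 0.0.0.0 --port 8080
```

Once the container is up, the server exposes an HTTP API on the published port (8080 in this sketch), which can be probed with e.g. `curl http://localhost:8080/health`.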

Recent tagged image versions

  • Published about 4 hours ago · Digest
    sha256:a6d458362bc98f253d5b1087edd24100204cd3501dcd8da78880ce5b8bbbaa8c
    · 36 version downloads
  • Published about 4 hours ago · Digest
    sha256:729d4dc6cb5f16646606879a6f625c1506155b871f1e0b64bfcf64696d1a3656
    · 16 version downloads
  • Published about 4 hours ago · Digest
    sha256:6ee78f89de654ec58cd4d922686536b156a323f10af933154a4d4d6abd0751d0
    · 22 version downloads
  • Published about 4 hours ago · Digest
    sha256:5c84673e60d45b1d24b1e6e53a95542164c24c3359dc58108c1fa41ac4ef7878
    · 16 version downloads
  • Published about 4 hours ago · Digest
    sha256:3b674f529388ded1d94d99563a141519933addb7e6e6f00e367f945ceb38eabf
    · 22 version downloads

Details

Last published: 4 hours ago
Discussions: 2.13K
Issues: 749
Total downloads: 118K