Mirror of https://github.com/ggerganov/llama.cpp.git — synced 2024-11-11 21:39:52 +00:00
Latest commit: `0a1d889e27`

* server: add cURL support to `full.Dockerfile`
* server: add cURL support to `full-cuda.Dockerfile` and `server-cuda.Dockerfile`
* server: add cURL support to `full-rocm.Dockerfile` and `server-rocm.Dockerfile`
* server: add cURL support to `server-intel.Dockerfile`
* server: add cURL support to `server-vulkan.Dockerfile`
* fix typo in `server-vulkan.Dockerfile`

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
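The diff itself is not part of this listing. As a rough sketch, "adding cURL support" to one of these images typically amounts to installing the libcurl development package in the build stage and enabling the corresponding build flag; the package name and the `LLAMA_CURL` flag below are assumptions for illustration, not taken from the commit:

```dockerfile
# Hypothetical sketch of the kind of change described above:
# make libcurl available so the server can fetch models over HTTP(S).
FROM ubuntu:22.04 AS build

# libcurl4-openssl-dev is an assumed package name; it may differ per base image.
RUN apt-get update && \
    apt-get install -y build-essential git libcurl4-openssl-dev

COPY . /app
WORKDIR /app

# LLAMA_CURL is an assumed flag enabling the cURL-backed download path.
RUN make LLAMA_CURL=1 server
```

The runtime stage of a multi-stage image would likewise need the non-dev `libcurl` package installed so the linked library is present at run time.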
Directory contents:

* nix
* cloud-v-pipeline
* full-cuda.Dockerfile
* full-rocm.Dockerfile
* full.Dockerfile
* llama-cpp-clblast.srpm.spec
* llama-cpp-cuda.srpm.spec
* llama-cpp.srpm.spec
* main-cuda.Dockerfile
* main-intel.Dockerfile
* main-rocm.Dockerfile
* main-vulkan.Dockerfile
* main.Dockerfile
* server-cuda.Dockerfile
* server-intel.Dockerfile
* server-rocm.Dockerfile
* server-vulkan.Dockerfile
* server.Dockerfile
* tools.sh