From 051633ed2d910deff3237e6a2f15712051921d86 Mon Sep 17 00:00:00 2001
From: Olivier Chafik
Date: Mon, 10 Jun 2024 16:05:11 +0100
Subject: [PATCH] update dockerfile refs

---
 README-sycl.md | 2 +-
 README.md      | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/README-sycl.md b/README-sycl.md
index 720d2ced9..8228d32cd 100644
--- a/README-sycl.md
+++ b/README-sycl.md
@@ -99,7 +99,7 @@ The docker build option is currently limited to *intel GPU* targets.
 ### Build image
 ```sh
 # Using FP16
-docker build -t llama-cpp-sycl --build-arg="LLAMA_SYCL_F16=ON" -f .devops/llama-intel.Dockerfile .
+docker build -t llama-cpp-sycl --build-arg="LLAMA_SYCL_F16=ON" -f .devops/llama-cli-intel.Dockerfile .
 ```
 
 *Notes*:
diff --git a/README.md b/README.md
index b90471c17..ba1a862dc 100644
--- a/README.md
+++ b/README.md
@@ -555,7 +555,7 @@ Building the program with BLAS support may lead to some performance improvements
 ```sh
 # Build the image
-docker build -t llama-cpp-vulkan -f .devops/llama-vulkan.Dockerfile .
+docker build -t llama-cpp-vulkan -f .devops/llama-cli-vulkan.Dockerfile .
 
 # Then, use it:
 docker run -it --rm -v "$(pwd):/app:Z" --device /dev/dri/renderD128:/dev/dri/renderD128 --device /dev/dri/card1:/dev/dri/card1 llama-cpp-vulkan -m "/app/models/YOUR_MODEL_FILE" -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33
 ```
@@ -907,7 +907,7 @@ Assuming one has the [nvidia-container-toolkit](https://github.com/NVIDIA/nvidia
 
 ```bash
 docker build -t local/llama.cpp:full-cuda -f .devops/full-cuda.Dockerfile .
-docker build -t local/llama.cpp:light-cuda -f .devops/llama-cuda.Dockerfile .
+docker build -t local/llama.cpp:light-cuda -f .devops/llama-cli-cuda.Dockerfile .
 docker build -t local/llama.cpp:server-cuda -f .devops/llama-server-cuda.Dockerfile .
 ```
 