mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-25 19:04:35 +00:00)
1c641e6aac
* `main`/`server`: rename to `llama` / `llama-server` for consistency w/ homebrew
* server: update refs -> llama-server
gitignore llama-server
* server: simplify nix package
* main: update refs -> llama
fix examples/main ref
* main/server: fix targets
* update more names
* Update build.yml
* rm accidentally checked-in bins
* update straggling refs
* Update .gitignore
* Update server-llm.sh
* main: target name -> llama-cli
* Prefix all example bins w/ llama- (before/after invocation sketch below this list)
* fix main refs
* rename {main->llama}-cmake-pkg binary
* prefix more cmake targets w/ llama- (cmake build sketch below this list)
* add/fix gbnf-validator subfolder to cmake
* sort cmake example subdirs
* rm bin files
* fix llama-lookup-* Makefile rules
* gitignore /llama-*
* rename Dockerfiles
* rename llama|main -> llama-cli; consistent RPM bin prefixes
* fix some missing -cli suffixes
* rename dockerfile w/ llama-cli
* rename(make): llama-baby-llama
* update dockerfile refs
* more llama-cli(.exe)
* fix test-eval-callback
* rename: llama-cli-cmake-pkg(.exe)
* address gbnf-validator unused fread warning (switched to C++ / ifstream)
* add two missing llama- prefixes
* Updating docs for eval-callback binary to use new `llama-` prefix.
* Updating a few lingering doc references for rename of main to llama-cli
* Updating `run-with-preset.py` to use new binary names.
* Updating docs around `perplexity` binary rename.
* Updating documentation references for lookup-merge and export-lora
* Updating two small `main` references missed earlier in the finetune docs.
* Update apps.nix
* update grammar/README.md w/ new llama-* names
* update llama-rpc-server bin name + doc
* Revert "update llama-rpc-server bin name + doc"
This reverts commit e474ef1df4.
* add hot topic notice to README.md
* Update README.md
* Update README.md
* rename gguf-split & quantize bins refs in **/tests.sh
---------
Co-authored-by: HanClinto <hanclinto@gmail.com>
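The net effect of the binary renames above, as a minimal before/after sketch (the model path, prompt, and port are placeholder values, not taken from this commit):

    # before this commit
    make -j main server
    ./main -m model.gguf -p "Hello"
    ./server -m model.gguf --port 8080

    # after this commit
    make -j llama-cli llama-server
    ./llama-cli -m model.gguf -p "Hello"
    ./llama-server -m model.gguf --port 8080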
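The CMake targets follow the same prefixing pattern; a hedged sketch of building the renamed targets (build directory name is arbitrary):

    cmake -B build
    cmake --build build --config Release --target llama-cli llama-server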
46 lines · 1.0 KiB · Docker
ARG UBUNTU_VERSION=22.04

# This needs to generally match the container host's environment.
ARG ROCM_VERSION=5.6

# Target the ROCm build image
ARG BASE_ROCM_DEV_CONTAINER=rocm/dev-ubuntu-${UBUNTU_VERSION}:${ROCM_VERSION}-complete

FROM ${BASE_ROCM_DEV_CONTAINER} as build

# Unless otherwise specified, we make a fat build.
# List from https://github.com/ggerganov/llama.cpp/pull/1087#issuecomment-1682807878
# This is mostly tied to rocBLAS supported archs.
ARG ROCM_DOCKER_ARCH=\
    gfx803 \
    gfx900 \
    gfx906 \
    gfx908 \
    gfx90a \
    gfx1010 \
    gfx1030 \
    gfx1100 \
    gfx1101 \
    gfx1102

COPY requirements.txt requirements.txt
COPY requirements requirements

RUN pip install --upgrade pip setuptools wheel \
    && pip install -r requirements.txt

WORKDIR /app

COPY . .

# Set the ROCm GPU architectures to build for
ENV GPU_TARGETS=${ROCM_DOCKER_ARCH}
# Enable ROCm
ENV LLAMA_HIPBLAS=1
ENV CC=/opt/rocm/llvm/bin/clang
ENV CXX=/opt/rocm/llvm/bin/clang++

RUN make -j$(nproc) llama-cli

ENTRYPOINT [ "/app/llama-cli" ]
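A hedged usage sketch for the image built from this Dockerfile (the image tag, Dockerfile path, and model path are placeholders):

    # build the fat image, or narrow the arch list via the build arg
    docker build -t llama-cpp-rocm -f path/to/this.Dockerfile .
    docker build -t llama-cpp-rocm --build-arg ROCM_DOCKER_ARCH=gfx1030 -f path/to/this.Dockerfile .

    # the entrypoint is /app/llama-cli, so run-time args go straight to it;
    # /dev/kfd and /dev/dri expose the host's ROCm GPU to the container
    docker run --device /dev/kfd --device /dev/dri \
        -v /path/to/models:/models llama-cpp-rocm -m /models/model.gguf -p "Hello"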