llama.cpp/.devops
Latest commit: 6f1d9d71f4 by serhii-nakon
Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641)

* Set ROCM_DOCKER_ARCH as a string, because the list form built incorrectly and caused an OOM exit code

2024-09-30 20:57:12 +02:00
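As a rough sketch of what this fix changes, assuming the ROCm Dockerfiles configure the build through CMake: the architecture list moves from GPU_TARGETS to AMDGPU_TARGETS and is passed as one string. The specific gfx targets and surrounding flags below are illustrative, not the literal Dockerfile contents.

```sh
# Illustrative reconstruction of the #9641 change; the target list and the
# GGML_HIPBLAS flag are assumptions, not the exact Dockerfile contents.
# Passing the arch list as a single semicolon-separated string avoids the
# broken build (and resulting OOM exit code) that the list form produced.
ROCM_DOCKER_ARCH="gfx900;gfx1030;gfx1100"   # example architectures only

cmake -B build \
    -DGGML_HIPBLAS=ON \
    -DAMDGPU_TARGETS="${ROCM_DOCKER_ARCH}"   # was: -DGPU_TARGETS=...
cmake --build build --config Release -j"$(nproc)"
```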
Name | Last commit message | Last commit date
nix | build(nix): Package gguf-py (#5664) | 2024-09-02 14:21:01 +03:00
cloud-v-pipeline | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
full-cuda.Dockerfile | docker : fix missing binaries in full-cuda image (#9278) | 2024-09-02 18:11:13 +02:00
full-rocm.Dockerfile | Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641) | 2024-09-30 20:57:12 +02:00
full.Dockerfile | build : Fix docker build warnings (#8535) (#8537) | 2024-07-17 20:21:55 +02:00
llama-cli-cann.Dockerfile | cann: add doc for cann backend (#8867) | 2024-08-19 16:46:38 +08:00
llama-cli-cuda.Dockerfile | docker : update CUDA images (#9213) | 2024-08-28 13:20:36 +02:00
llama-cli-intel.Dockerfile | Build Llama SYCL Intel with static libs (#8668) | 2024-07-24 14:36:00 +01:00
llama-cli-rocm.Dockerfile | Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641) | 2024-09-30 20:57:12 +02:00
llama-cli-vulkan.Dockerfile | build : Fix docker build warnings (#8535) (#8537) | 2024-07-17 20:21:55 +02:00
llama-cli.Dockerfile | build : Fix docker build warnings (#8535) (#8537) | 2024-07-17 20:21:55 +02:00
llama-cpp-cuda.srpm.spec | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00
llama-cpp.srpm.spec | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
llama-server-cuda.Dockerfile | docker : update CUDA images (#9213) | 2024-08-28 13:20:36 +02:00
llama-server-intel.Dockerfile | server : add some missing env variables (#9116) | 2024-08-27 11:07:01 +02:00
llama-server-rocm.Dockerfile | Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641) | 2024-09-30 20:57:12 +02:00
llama-server-vulkan.Dockerfile | server : add some missing env variables (#9116) | 2024-08-27 11:07:01 +02:00
llama-server.Dockerfile | server : add some missing env variables (#9116) | 2024-08-27 11:07:01 +02:00
tools.sh | examples : remove finetune and train-text-from-scratch (#8669) | 2024-07-25 10:39:04 +02:00
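The Dockerfiles above are built from the repository root, with the file selected via -f. A minimal usage sketch, assuming the server image's entrypoint is the llama-server binary; the tag, model path, and port are illustrative:

```sh
# Build the CPU-only server image from the repo root (tag is illustrative).
docker build -f .devops/llama-server.Dockerfile -t local/llama-server .

# Run it; trailing flags are forwarded to llama-server (assumed entrypoint).
docker run --rm -p 8080:8080 -v "$PWD/models:/models" local/llama-server \
    -m /models/model.gguf --host 0.0.0.0 --port 8080
```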