llama.cpp/.devops
Latest commit: 59f4db1088 by Diego Devesa
ggml : add predefined list of CPU backend variants to build (#10626)
* update CPU dockerfiles
2024-12-04 14:45:40 +01:00
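
For context, a minimal sketch of the kind of generic CPU image this commit enables. The option names GGML_BACKEND_DL and GGML_CPU_ALL_VARIANTS are my reading of #10626, and the base image and package list are illustrative rather than copied from llama-cli.Dockerfile; GGML_NATIVE=OFF is the flag already used throughout the Dockerfiles listed below.

    # Sketch only: builds every predefined CPU backend variant as a loadable
    # module, so the image is not tied to the build machine's CPU features.
    # GGML_BACKEND_DL / GGML_CPU_ALL_VARIANTS are assumed from #10626.
    FROM ubuntu:24.04 AS build
    RUN apt-get update && \
        apt-get install -y build-essential cmake git libcurl4-openssl-dev
    WORKDIR /app
    COPY . .
    RUN cmake -B build \
            -DGGML_NATIVE=OFF \
            -DGGML_BACKEND_DL=ON \
            -DGGML_CPU_ALL_VARIANTS=ON && \
        cmake --build build --config Release -j
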
Name                            Last commit message                                                                                   Last commit date
nix                             server : replace behave with pytest (#10416)                                                          2024-11-26 16:20:18 +01:00
cloud-v-pipeline                build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)    2024-06-13 00:41:52 +01:00
full-cuda.Dockerfile            docker: use GGML_NATIVE=OFF (#10368)                                                                  2024-11-18 00:21:53 +01:00
full-musa.Dockerfile            mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516)                          2024-11-26 17:00:41 +01:00
full-rocm.Dockerfile            Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641)                             2024-09-30 20:57:12 +02:00
full.Dockerfile                 ggml : add predefined list of CPU backend variants to build (#10626)                                  2024-12-04 14:45:40 +01:00
llama-cli-cann.Dockerfile       docker: use GGML_NATIVE=OFF (#10368)                                                                  2024-11-18 00:21:53 +01:00
llama-cli-cuda.Dockerfile       docker: use GGML_NATIVE=OFF (#10368)                                                                  2024-11-18 00:21:53 +01:00
llama-cli-intel.Dockerfile      docker: use GGML_NATIVE=OFF (#10368)                                                                  2024-11-18 00:21:53 +01:00
llama-cli-musa.Dockerfile       mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516)                          2024-11-26 17:00:41 +01:00
llama-cli-rocm.Dockerfile       Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641)                             2024-09-30 20:57:12 +02:00
llama-cli-vulkan.Dockerfile     docker: use GGML_NATIVE=OFF (#10368)                                                                  2024-11-18 00:21:53 +01:00
llama-cli.Dockerfile            ggml : add predefined list of CPU backend variants to build (#10626)                                  2024-12-04 14:45:40 +01:00
llama-cpp-cuda.srpm.spec        devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139)                                             2024-06-26 19:32:07 +03:00
llama-cpp.srpm.spec             build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)    2024-06-13 00:41:52 +01:00
llama-server-cuda.Dockerfile    docker: use GGML_NATIVE=OFF (#10368)                                                                  2024-11-18 00:21:53 +01:00
llama-server-intel.Dockerfile   docker: use GGML_NATIVE=OFF (#10368)                                                                  2024-11-18 00:21:53 +01:00
llama-server-musa.Dockerfile    mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516)                          2024-11-26 17:00:41 +01:00
llama-server-rocm.Dockerfile    Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641)                             2024-09-30 20:57:12 +02:00
llama-server-vulkan.Dockerfile  docker: use GGML_NATIVE=OFF (#10368)                                                                  2024-11-18 00:21:53 +01:00
llama-server.Dockerfile         ggml : add predefined list of CPU backend variants to build (#10626)                                  2024-12-04 14:45:40 +01:00
tools.sh                        examples : remove finetune and train-text-from-scratch (#8669)                                        2024-07-25 10:39:04 +02:00
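
The GPU Dockerfiles above share one pattern: GGML_NATIVE=OFF keeps the CPU code portable across machines, and the target GPU architecture is passed in explicitly as a build argument. Below is a hypothetical fragment in the spirit of the ROCm fix (#9641), which switched from GPU_TARGETS to AMDGPU_TARGETS; the base image, the HIP enable flag (spelled GGML_HIPBLAS around the time of that fix, renamed in later trees), and the gfx value are assumptions, not the contents of llama-cli-rocm.Dockerfile.

    # Illustrative only: AMDGPU_TARGETS comes from the #9641 commit message;
    # everything else here is an assumption.
    FROM rocm/dev-ubuntu-22.04 AS build
    ARG ROCM_DOCKER_ARCH=gfx1100
    RUN apt-get update && apt-get install -y build-essential cmake git
    WORKDIR /app
    COPY . .
    # AMDGPU_TARGETS selects the GPU architectures to compile HIP kernels for;
    # GGML_NATIVE=OFF avoids baking the build host's CPU features into the image.
    RUN cmake -B build \
            -DGGML_HIPBLAS=ON \
            -DAMDGPU_TARGETS=${ROCM_DOCKER_ARCH} \
            -DGGML_NATIVE=OFF && \
        cmake --build build --config Release -j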