| File | Last commit message | Last commit date |
| --- | --- | --- |
| nix | server : replace behave with pytest (#10416) | 2024-11-26 16:20:18 +01:00 |
| cloud-v-pipeline | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| full-cuda.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| full-musa.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| full-rocm.Dockerfile | Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641) | 2024-09-30 20:57:12 +02:00 |
| full.Dockerfile | build : Fix docker build warnings (#8535) (#8537) | 2024-07-17 20:21:55 +02:00 |
| llama-cli-cann.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-cli-cuda.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-cli-intel.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-cli-musa.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-cli-rocm.Dockerfile | Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641) | 2024-09-30 20:57:12 +02:00 |
| llama-cli-vulkan.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-cli.Dockerfile | build : Fix docker build warnings (#8535) (#8537) | 2024-07-17 20:21:55 +02:00 |
| llama-cpp-cuda.srpm.spec | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00 |
| llama-cpp.srpm.spec | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-server-cuda.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-server-intel.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-server-musa.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-server-rocm.Dockerfile | Fix Docker ROCM builds, use AMDGPU_TARGETS instead of GPU_TARGETS (#9641) | 2024-09-30 20:57:12 +02:00 |
| llama-server-vulkan.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-server.Dockerfile | server : add some missing env variables (#9116) | 2024-08-27 11:07:01 +02:00 |
| tools.sh | examples : remove finetune and train-text-from-scratch (#8669) | 2024-07-25 10:39:04 +02:00 |
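The Dockerfiles listed above are built directly against the repository root as the build context, selecting a variant with `-f`. A minimal sketch for the CUDA variants follows; the `local/llama.cpp:*` image tags are illustrative, not mandated by the repository.

```sh
# Build the full, CLI, and server CUDA images from the repository root.
# Tags are examples only; any tag works with `docker build -t`.
docker build -t local/llama.cpp:full-cuda   -f .devops/full-cuda.Dockerfile .
docker build -t local/llama.cpp:light-cuda  -f .devops/llama-cli-cuda.Dockerfile .
docker build -t local/llama.cpp:server-cuda -f .devops/llama-server-cuda.Dockerfile .
```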