llama.cpp/.devops

Latest commit f3f65429c4 by Georgi Gerganov:
llama : reorganize source code + improve CMake (#8006)
* scripts : update sync [no ci]

* files : relocate [no ci]

* ci : disable kompute build [no ci]

* cmake : fixes [no ci]

* server : fix mingw build

ggml-ci

* cmake : minor [no ci]

* cmake : link math library [no ci]

* cmake : build normal ggml library (not object library) [no ci]

* cmake : fix kompute build

ggml-ci

* make,cmake : fix LLAMA_CUDA + replace GGML_CDEF_PRIVATE

ggml-ci

* move public backend headers to the public include directory (#8122)

* move public backend headers to the public include directory

* nix test

* spm : fix metal header

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* scripts : fix sync paths [no ci]

* scripts : sync ggml-blas.h [no ci]

---------

Co-authored-by: slaren <slarengh@gmail.com>
Committed 2024-06-26 18:33:02 +03:00
| File | Last commit | Last commit date |
| --- | --- | --- |
| nix | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| cloud-v-pipeline | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| full-cuda.Dockerfile | docker : add openmp lib (#7780) | 2024-06-06 08:17:21 +03:00 |
| full-rocm.Dockerfile | Fixed painfully slow single process builds. (#7326) | 2024-05-30 22:32:38 +02:00 |
| full.Dockerfile | docker : add openmp lib (#7780) | 2024-06-06 08:17:21 +03:00 |
| llama-cli-cuda.Dockerfile | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-cli-intel.Dockerfile | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-cli-rocm.Dockerfile | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-cli-vulkan.Dockerfile | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-cli.Dockerfile | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-cpp-clblast.srpm.spec | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-cpp-cuda.srpm.spec | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-cpp.srpm.spec | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-server-cuda.Dockerfile | Add healthchecks to llama-server containers (#8081) | 2024-06-25 17:13:27 +02:00 |
| llama-server-intel.Dockerfile | Add healthchecks to llama-server containers (#8081) | 2024-06-25 17:13:27 +02:00 |
| llama-server-rocm.Dockerfile | Add healthchecks to llama-server containers (#8081) | 2024-06-25 17:13:27 +02:00 |
| llama-server-vulkan.Dockerfile | Add healthchecks to llama-server containers (#8081) | 2024-06-25 17:13:27 +02:00 |
| llama-server.Dockerfile | Add healthchecks to llama-server containers (#8081) | 2024-06-25 17:13:27 +02:00 |
| tools.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
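Two of the changes in the table above are worth illustrating: the full and server images gained the OpenMP runtime library (docker : add openmp lib, #7780), and every llama-server variant gained a container healthcheck that probes the server's /health endpoint (Add healthchecks to llama-server containers, #8081). The Dockerfile below is a minimal, hypothetical sketch of such an image rather than the exact file from this directory; it assumes an Ubuntu 22.04 base, a plain `make` build of the `llama-server` target from a llama.cpp checkout in the build context, and the server's default port 8080.

```Dockerfile
# Hypothetical sketch of a llama-server image; the real llama-server*.Dockerfile
# variants in this directory differ in base images, toolchains, and build flags.

# Build stage: compile llama-server from the repository in the build context.
FROM ubuntu:22.04 AS build
RUN apt-get update && apt-get install -y build-essential git
WORKDIR /app
COPY . .
RUN make -j"$(nproc)" llama-server

# Runtime stage: only the binary plus its runtime dependencies.
FROM ubuntu:22.04 AS runtime

# libgomp1 provides the OpenMP runtime (cf. #7780); curl is needed for the healthcheck.
RUN apt-get update \
    && apt-get install -y libgomp1 curl \
    && rm -rf /var/lib/apt/lists/*

COPY --from=build /app/llama-server /usr/local/bin/llama-server

EXPOSE 8080

# Container-level healthcheck (cf. #8081): mark the container unhealthy
# if the server's /health endpoint stops answering.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
    CMD [ "curl", "-f", "http://localhost:8080/health" ]

ENTRYPOINT [ "/usr/local/bin/llama-server", "--host", "0.0.0.0", "--port", "8080" ]
```

With the healthcheck in place, `docker ps` reports the container as healthy or unhealthy, so an orchestrator can hold back traffic while a large model is still loading (the /health endpoint returns an error until the server is ready).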