llama.cpp/.devops
Latest commit 17eb6aa8a9 by bandoti: vulkan : cmake integration (#8119)
* Add Vulkan to CMake pkg
* Add Sycl to CMake pkg
* Add OpenMP to CMake pkg
* Split generated shader file into separate translation unit
* Add CMake target for Vulkan shaders
* Update README.md
* Add make target for Vulkan shaders
* Use pkg-config to locate vulkan library
* Add vulkan SDK dep to ubuntu-22-cmake-vulkan workflow
* Clean up tabs
* Move sudo to apt-key invocation
* Forward GGML_EXTRA_LIBS to CMake config pkg
* Update vulkan obj file paths
* Add shaderc to nix pkg
* Add python3 to Vulkan nix build
* Link against ggml in cmake pkg
* Remove Python dependency from Vulkan build
* Code review changes
* Remove trailing newline
* Add cflags from pkg-config to fix w64devkit build
* Update README.md
* Remove trailing whitespace
* Update README.md
* Remove trailing whitespace
* Fix doc heading
* Make glslc required Vulkan component
* Remove clblast from nix pkg
2024-07-13 18:12:39 +02:00
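The pkg-config and glslc changes listed above could be sketched roughly as the following CMake fragment. This is an illustration of the techniques named in the commit messages, not the actual llama.cpp build script; the file and target names (`ggml-vulkan-shaders.cpp`, `ggml`) are assumptions for the sketch.

```cmake
# Sketch only; names are illustrative, not the real llama.cpp CMakeLists.txt.

# Locate the Vulkan loader via pkg-config ("Use pkg-config to locate vulkan
# library" / "Add cflags from pkg-config to fix w64devkit build").
find_package(PkgConfig REQUIRED)
pkg_check_modules(VULKAN REQUIRED vulkan)

# Require glslc as a component of the Vulkan package ("Make glslc required
# Vulkan component"), so configuration fails early if the shader compiler
# is missing. COMPONENTS glslc needs CMake >= 3.24.
find_package(Vulkan REQUIRED COMPONENTS glslc)

# Build the generated shader file as its own translation unit
# ("Split generated shader file into separate translation unit").
add_library(vulkan-shaders OBJECT ggml-vulkan-shaders.cpp)  # hypothetical file
target_include_directories(vulkan-shaders PRIVATE ${VULKAN_INCLUDE_DIRS})
target_compile_options(vulkan-shaders PRIVATE ${VULKAN_CFLAGS_OTHER})

# Link the located library into the ggml target and forward it to the
# installed CMake config package ("Forward GGML_EXTRA_LIBS to CMake config pkg").
target_link_libraries(ggml PRIVATE ${VULKAN_LIBRARIES})
```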
| Name | Last commit message | Last commit date |
|---|---|---|
| nix/ | vulkan : cmake integration (#8119) | 2024-07-13 18:12:39 +02:00 |
| cloud-v-pipeline | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| full-cuda.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00 |
| full-rocm.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00 |
| full.Dockerfile | docker : add openmp lib (#7780) | 2024-06-06 08:17:21 +03:00 |
| llama-cli-cuda.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00 |
| llama-cli-intel.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00 |
| llama-cli-rocm.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00 |
| llama-cli-vulkan.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00 |
| llama-cli.Dockerfile | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-cpp-cuda.srpm.spec | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00 |
| llama-cpp.srpm.spec | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-server-cuda.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00 |
| llama-server-intel.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00 |
| llama-server-rocm.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00 |
| llama-server-vulkan.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00 |
| llama-server.Dockerfile | Add healthchecks to llama-server containers (#8081) | 2024-06-25 17:13:27 +02:00 |
| tools.sh | docker : fix filename for convert-hf-to-gguf.py in tools.sh (#8441) | 2024-07-12 11:08:19 +03:00 |