llama.cpp/.devops
Joseph Stahl e190f1fca6
nix: make xcrun visible in Nix sandbox for precompiling Metal shaders (#6118)
* Symlink to /usr/bin/xcrun so that the `xcrun` binary is usable during the build (used for compiling Metal shaders)

Fixes https://github.com/ggerganov/llama.cpp/issues/6117

* cmake - copy default.metallib to install directory

When the Metal files are compiled into default.metallib, CMake needs to copy it into the install directory so that it is visible to llama-cpp

Also, update package.nix to use an absolute path for default.metallib (the bundle is otherwise not found)

* Add a `precompileMetalShaders` flag (defaults to false) so that precompilation of Metal shaders is opt-in

Precompilation requires Xcode to be installed and requires disabling the sandbox on nix-darwin
2024-03-25 17:51:46 -07:00
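The second change above (copying default.metallib into the install tree) amounts to an ordinary CMake install rule. A minimal sketch, assuming `default.metallib` ends up in the build directory and a standard GNU install layout; the paths and the exact rule in llama.cpp's CMakeLists may differ:

```cmake
# Sketch only: install the precompiled Metal library next to the
# binaries so llama-cpp can locate it at runtime (paths assumed).
include(GNUInstallDirs)

install(
    FILES ${CMAKE_BINARY_DIR}/default.metallib
    DESTINATION ${CMAKE_INSTALL_BINDIR}
)
```

With a rule like this, `cmake --install` places default.metallib alongside the executables instead of leaving it behind in the build tree.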
nix                           nix: make xcrun visible in Nix sandbox for precompiling Metal shaders (#6118)  2024-03-25 17:51:46 -07:00
cloud-v-pipeline              ci : Cloud-V for RISC-V builds (#3160)                                         2023-09-15 11:06:56 +03:00
full-cuda.Dockerfile          cuda : rename build flag to LLAMA_CUDA (#6299)                                 2024-03-26 01:16:01 +01:00
full-rocm.Dockerfile          python : add check-requirements.sh and GitHub workflow (#4585)                 2023-12-29 16:50:29 +02:00
full.Dockerfile               python : add check-requirements.sh and GitHub workflow (#4585)                 2023-12-29 16:50:29 +02:00
llama-cpp-clblast.srpm.spec   devops : added systemd units and set versioning to use date. (#2835)           2023-08-28 09:31:24 +03:00
llama-cpp-cuda.srpm.spec      cuda : rename build flag to LLAMA_CUDA (#6299)                                 2024-03-26 01:16:01 +01:00
llama-cpp.srpm.spec           devops : added systemd units and set versioning to use date. (#2835)           2023-08-28 09:31:24 +03:00
main-cuda.Dockerfile          cuda : rename build flag to LLAMA_CUDA (#6299)                                 2024-03-26 01:16:01 +01:00
main-intel.Dockerfile         docker : add build for SYCL, Vulkan + update readme (#5228)                    2024-02-02 09:56:31 +02:00
main-rocm.Dockerfile          python : add check-requirements.sh and GitHub workflow (#4585)                 2023-12-29 16:50:29 +02:00
main-vulkan.Dockerfile        docker : add build for SYCL, Vulkan + update readme (#5228)                    2024-02-02 09:56:31 +02:00
main.Dockerfile               Add llama.cpp docker support for non-latin languages (#1673)                   2023-06-08 00:58:53 -07:00
server-cuda.Dockerfile        cuda : rename build flag to LLAMA_CUDA (#6299)                                 2024-03-26 01:16:01 +01:00
server-intel.Dockerfile       docker : add build for SYCL, Vulkan + update readme (#5228)                    2024-02-02 09:56:31 +02:00
server-rocm.Dockerfile        docker : add server-first container images (#5157)                             2024-01-28 09:55:31 +02:00
server-vulkan.Dockerfile      docker : add build for SYCL, Vulkan + update readme (#5228)                     2024-02-02 09:56:31 +02:00
server.Dockerfile             docker : add server-first container images (#5157)                             2024-01-28 09:55:31 +02:00
tools.sh                      docker : add finetune option (#4211)                                           2023-11-30 23:46:01 +02:00