Mirror of https://github.com/ggerganov/llama.cpp.git
Synced 2024-11-11 21:39:52 +00:00

Commit e190f1fca6
* Symlink to /usr/bin/xcrun so that the `xcrun` binary is usable during the build (used for compiling Metal shaders). Fixes https://github.com/ggerganov/llama.cpp/issues/6117
* cmake: copy default.metallib to the install directory. When the Metal files are compiled into default.metallib, CMake needs to add it to the install directory so that it is visible to llama-cpp. Also, update package.nix to use an absolute path for default.metallib (it was not finding the bundle).
* Add a `precompileMetalShaders` flag (defaults to false) to disable precompilation of the Metal shaders. Precompilation requires Xcode to be installed and requires disabling the sandbox on nix-darwin.
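The new flag can be toggled from a downstream Nix expression; a minimal sketch, assuming the package is instantiated via `callPackage` on the repository's `package.nix` (the attribute name `precompileMetalShaders` follows the commit message; the exact call site is an assumption, not verified against the final file):

```nix
# Sketch only: enable Metal shader precompilation when building on nix-darwin.
# Per the commit message, this requires Xcode on the host and the nix-darwin
# sandbox to be disabled; the flag defaults to false.
let
  llamaCpp = pkgs.callPackage ./.devops/nix/package.nix { };
in
llamaCpp.override {
  precompileMetalShaders = true;
}
```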
nix
cloud-v-pipeline
full-cuda.Dockerfile
full-rocm.Dockerfile
full.Dockerfile
llama-cpp-clblast.srpm.spec
llama-cpp-cuda.srpm.spec
llama-cpp.srpm.spec
main-cuda.Dockerfile
main-intel.Dockerfile
main-rocm.Dockerfile
main-vulkan.Dockerfile
main.Dockerfile
server-cuda.Dockerfile
server-intel.Dockerfile
server-rocm.Dockerfile
server-vulkan.Dockerfile
server.Dockerfile
tools.sh