llama.cpp/ggml (last commit: 2024-07-19 13:45:00 +03:00)
Name            Last commit                                               Last commit date
cmake/          llama : reorganize source code + improve CMake (#8006)   2024-06-26 18:33:02 +03:00
include/        CUDA: fix partial offloading for ne0 % 256 != 0 (#8572)  2024-07-18 23:48:47 +02:00
src/            gguf : handle null name during init                      2024-07-19 13:45:00 +03:00
.gitignore      vulkan : cmake integration (#8119)                       2024-07-13 18:12:39 +02:00
CMakeLists.txt  cmake : install all ggml public headers (#8480)          2024-07-18 17:47:12 +03:00
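The layout above (public headers under include/, sources under src/, build logic in CMakeLists.txt) follows the CMake reorganization from #8006. Below is a minimal sketch of how a consumer project might vendor this directory; it assumes the CMakeLists.txt listed above defines a library target named `ggml` whose include directories are propagated to linkers, which is an assumption about the build, not something stated in the listing itself.

    cmake_minimum_required(VERSION 3.14)
    project(ggml_consumer C CXX)

    # Assumption: ggml/ is vendored at this relative path and its
    # CMakeLists.txt (listed above) defines the `ggml` library target.
    add_subdirectory(ggml)

    add_executable(demo main.c)
    # Linking the target is assumed to also propagate ggml's public
    # include directories (the headers under ggml/include).
    target_link_libraries(demo PRIVATE ggml)

Under that assumption, main.c can simply `#include "ggml.h"` without any extra include-path flags, since the usage requirements travel with the target.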