llama.cpp/ggml (last commit: 2024-07-22 10:56:45 +03:00)
Name              Last commit message                                        Last commit date
cmake/            llama : reorganize source code + improve CMake (#8006)     2024-06-26 18:33:02 +03:00
include/          CUDA: fix partial offloading for ne0 % 256 != 0 (#8572)    2024-07-18 23:48:47 +02:00
src/              ggml: fix compile error for RISC-V (#8623)                 2024-07-22 10:56:45 +03:00
.gitignore        vulkan : cmake integration (#8119)                         2024-07-13 18:12:39 +02:00
CMakeLists.txt    cmake : install all ggml public headers (#8480)            2024-07-18 17:47:12 +03:00
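The listing shows the ggml build tree: public headers under include/, sources under src/, CMake helpers under cmake/, and a top-level CMakeLists.txt that installs the public headers. As a minimal sketch of consuming that header (an illustration, not part of this listing), the snippet below assumes ggml/include is on the compiler's include path and the program is linked against the ggml library built from src/:

    // Minimal sketch: create a ggml context and a small tensor using the
    // public API declared in ggml/include/ggml.h. Assumes the header is on
    // the include path and the ggml library built from src/ is linked in.
    #include <stdbool.h>
    #include <stdio.h>
    #include "ggml.h"

    int main(void) {
        // Small scratch arena; ggml allocates tensor metadata/data from it.
        struct ggml_init_params params = {
            /*.mem_size   =*/ 16 * 1024 * 1024,
            /*.mem_buffer =*/ NULL,
            /*.no_alloc   =*/ false,
        };

        struct ggml_context * ctx = ggml_init(params);
        if (ctx == NULL) {
            fprintf(stderr, "ggml_init failed\n");
            return 1;
        }

        // 1-D float tensor with 8 elements, allocated inside the context.
        struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 8);
        printf("allocated tensor with %lld elements\n", (long long) ggml_nelements(a));

        ggml_free(ctx);
        return 0;
    }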