llama.cpp/ggml
Name            Last commit message                                       Last commit date
include         GGUF: backend support, fixed-width I/O, misc fixes        2024-12-04 13:16:03 +01:00
src             GGUF: backend support, fixed-width I/O, misc fixes        2024-12-04 13:16:03 +01:00
.gitignore      vulkan : cmake integration (#8119)                        2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : automatic selection of best CPU backend (#10606)   2024-12-01 16:12:41 +01:00