llama.cpp/ggml (last commit 2024-09-08 11:05:55 +03:00)
Name             Last commit message                                                 Last commit date
cmake/           llama : reorganize source code + improve CMake (#8006)              2024-06-26 18:33:02 +03:00
include/         ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151)   2024-09-05 21:48:47 -04:00
src/             cuda : mark BF16 CONT as unsupported                                2024-09-08 11:05:55 +03:00
.gitignore       vulkan : cmake integration (#8119)                                  2024-07-13 18:12:39 +02:00
CMakeLists.txt   Improve Vulkan shader build system (#9239)                          2024-09-06 08:56:17 +02:00