llama.cpp/ggml
Latest commit: aff96920f9 "llama : fix Mamba-2 conv state saving" by Francis Couture-Harpin, 2024-08-21 18:00:34 -04:00
  * ggml : make the ggml_mul fast broadcast path more consistently formatted
Name            Last commit message                                       Last commit date
cmake           llama : reorganize source code + improve CMake (#8006)    2024-06-26 18:33:02 +03:00
include         llama : initial Mamba-2 support                           2024-08-21 18:00:34 -04:00
src             llama : fix Mamba-2 conv state saving                     2024-08-21 18:00:34 -04:00
.gitignore      vulkan : cmake integration (#8119)                        2024-07-13 18:12:39 +02:00
CMakeLists.txt  Vulkan Optimizations and Fixes (#8959)                    2024-08-14 18:32:53 +02:00