llama.cpp/ggml
Name            Last commit message                                                                                    Last commit date
cmake           llama : reorganize source code + improve CMake (#8006)                                                 2024-06-26 18:33:02 +03:00
include         llama : support RWKV v6 models (#8980)                                                                 2024-09-01 17:38:17 +03:00
src             Implemented vector length agnostic SVE using switch case for 512-bit, 256-bit, 128-bit vector lengths  2024-09-03 11:27:22 +05:30
.gitignore      vulkan : cmake integration (#8119)                                                                     2024-07-13 18:12:39 +02:00
CMakeLists.txt  Vulkan Optimizations and Fixes (#8959)                                                                 2024-08-14 18:32:53 +02:00