llama.cpp/ggml
Francis Couture-Harpin 8d61607656 ggml : remove unused ggml_mul special case
It would otherwise conflict with the more general
optimization coming with Mamba-2.

* ggml : handle TQ1_0 and TQ2_0 in dequantization-based operators
2024-09-04 13:50:08 -04:00
cmake llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
include Merge branch 'master' into compilade/bitnet-ternary 2024-09-04 13:26:50 -04:00
src ggml : remove unused ggml_mul special case 2024-09-04 13:50:08 -04:00
.gitignore vulkan : cmake integration (#8119) 2024-07-13 18:12:39 +02:00
CMakeLists.txt Vulkan Optimizations and Fixes (#8959) 2024-08-14 18:32:53 +02:00