llama.cpp/ggml

Latest commit c3f9d25706 by 0cc4m (2025-01-10 06:39:33 +01:00):
Vulkan: Fix float16 use on devices without float16 support + fix subgroup_size_control validation error (#11161)

* Vulkan: Remove float16 use in shaders
* Fix validation error about subgroup_size_control extension
Name            Last commit message                                                                                               Last commit date
include         llama: add support for QRWKV6 model architecture (#11001)                                                         2025-01-10 09:58:08 +08:00
src             Vulkan: Fix float16 use on devices without float16 support + fix subgroup_size_control validation error (#11161)  2025-01-10 06:39:33 +01:00
.gitignore      vulkan : cmake integration (#8119)                                                                                 2024-07-13 18:12:39 +02:00
CMakeLists.txt  GGUF: C++ refactor, backend support, misc fixes (#11030)                                                          2025-01-07 18:01:58 +01:00