llama.cpp/ggml
Alberto Cabrera Pérez 2e82ffa4af
sycl : Fixes to broken builds and test-backend-ops (#10257)
* Fixes broken build for the SYCL CUDA backend caused by a non-explicit gemm call in outprod (merged with RWKV6 in "Optimize RWKV6 Operator Naming and Implement Multi-core CPU/SYCL Acceleration" #10133)

* Marks permuted MUL_MAT as unsupported so that test-backend-ops can run

* Fixes asserts in norm for debug builds.
2024-11-13 09:40:57 +00:00
cmake llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
include metal : optimize FA kernels (#10171) 2024-11-08 13:47:22 +02:00
src sycl : Fixes to broken builds and test-backend-ops (#10257) 2024-11-13 09:40:57 +00:00
.gitignore vulkan : cmake integration (#8119) 2024-07-13 18:12:39 +02:00
CMakeLists.txt metal : opt-in compile flag for BF16 (#10218) 2024-11-08 21:59:46 +02:00