llama.cpp/ggml
luoyu-intel 1731d4238f
[SYCL] Add oneDNN primitive support (#9091)
* add onednn
* add sycl_f16
* add dnnl stream
* add engine map
* use dnnl for intel only
* use fp16fp16fp16
* update doc
2024-08-22 12:50:10 +08:00
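The commit bullets are terse, so here is a minimal sketch of what "add dnnl stream", "use dnnl for intel only", and "use fp16fp16fp16" amount to: building a oneDNN (DNNL) engine and stream on top of an existing SYCL queue and running a matmul primitive with fp16 source, weights, and destination. This is not the actual ggml-sycl code from the commit; the helper name dnnl_matmul_f16, the row-major USM layout, and the vendor-id gate are illustrative assumptions.

```cpp
// Minimal sketch (not the actual ggml-sycl code): build a oneDNN engine and
// stream from an existing SYCL queue, then run an fp16 x fp16 -> fp16 matmul
// primitive. Helper name, USM layout, and vendor gate are assumptions.
#include <cassert>
#include <cstdint>
#include <sycl/sycl.hpp>
#include <dnnl.hpp>
#include <dnnl_sycl.hpp>

// C = A * B, with A of shape M x K, B of shape K x N, C of shape M x N;
// a, b, c point to row-major fp16 USM device buffers on q's device.
void dnnl_matmul_f16(sycl::queue &q, const void *a, const void *b, void *c,
                     int64_t M, int64_t N, int64_t K) {
    using namespace dnnl;

    // "use dnnl for intel only": gate on Intel's PCI vendor id (0x8086).
    // A real caller would fall back to another GEMM path instead of asserting.
    assert(q.get_device().get_info<sycl::info::device::vendor_id>() == 0x8086);

    // Create a oneDNN engine and stream over the caller's SYCL queue
    // ("add dnnl stream").
    engine eng  = sycl_interop::make_engine(q.get_device(), q.get_context());
    stream strm = sycl_interop::make_stream(eng, q);

    // "fp16fp16fp16": half-precision source, weights, and destination,
    // all row-major ("ab").
    memory::desc a_md({M, K}, memory::data_type::f16, memory::format_tag::ab);
    memory::desc b_md({K, N}, memory::data_type::f16, memory::format_tag::ab);
    memory::desc c_md({M, N}, memory::data_type::f16, memory::format_tag::ab);

    // Wrap the caller's USM pointers without copying.
    auto a_mem = sycl_interop::make_memory(a_md, eng, sycl_interop::memory_kind::usm,
                                           const_cast<void *>(a));
    auto b_mem = sycl_interop::make_memory(b_md, eng, sycl_interop::memory_kind::usm,
                                           const_cast<void *>(b));
    auto c_mem = sycl_interop::make_memory(c_md, eng, sycl_interop::memory_kind::usm, c);

    // Create and execute the matmul primitive on the SYCL-backed stream.
    matmul::primitive_desc pd(eng, a_md, b_md, c_md);
    matmul(pd).execute(strm, {{DNNL_ARG_SRC,     a_mem},
                              {DNNL_ARG_WEIGHTS, b_mem},
                              {DNNL_ARG_DST,     c_mem}});
    strm.wait();
}
```

In a real backend the engine and stream would be created once per device and reused across calls, which is presumably what the "add engine map" bullet refers to; this sketch rebuilds them on every call for brevity.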
cmake           llama : reorganize source code + improve CMake (#8006)     2024-06-26 18:33:02 +03:00
include         llama : simplify Mamba with advanced batch splits (#8526)  2024-08-21 17:58:11 -04:00
src             [SYCL] Add oneDNN primitive support (#9091)                2024-08-22 12:50:10 +08:00
.gitignore      vulkan : cmake integration (#8119)                         2024-07-13 18:12:39 +02:00
CMakeLists.txt  Vulkan Optimizations and Fixes (#8959)                     2024-08-14 18:32:53 +02:00