llama.cpp/ggml
Latest commit: a9554e20b6 by luoyu-intel, 2024-07-05 13:06:13 +08:00
[SYCL] Fix WARP_SIZE=16 bug of Intel GPU (#8266)
* fix group_norm ut
* split softmax
* fix softmax
* add concat support condition
* revert debug code
* move QK_WARP_SIZE to presets.hpp
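The last bullet hints at the shape of the fix: Intel GPUs execute SYCL sub-groups 16 lanes wide (the WARP_SIZE=16 of the title), while kernels ported from CUDA are typically laid out for 32-lane warps. Below is a minimal sketch of that split, assuming a 32-lane value for QK_WARP_SIZE and a hypothetical warp_reduce_sum helper; neither is taken from the repository's actual presets.hpp.

```cpp
#include <sycl/sycl.hpp>

// presets.hpp-style constants (sketch). Intel GPUs run SYCL sub-groups
// 16 lanes wide, so reductions sized for a CUDA 32-lane warp mis-count.
#define WARP_SIZE 16

// Quantized kernels laid out for 32 lanes keep their own constant rather
// than inheriting WARP_SIZE; the last bullet above moves it into
// presets.hpp. The value 32 here is an assumption, not read from the repo.
#define QK_WARP_SIZE 32

// Hypothetical sub-group reduction, parameterized on the lane count.
// If warp_size were hard-coded to 32 on a 16-lane sub-group, the first
// shuffle step (offset 16) would read lanes that do not exist: the class
// of bug the commit title names.
template <int warp_size>
float warp_reduce_sum(sycl::sub_group sg, float v) {
    for (int offset = warp_size / 2; offset > 0; offset /= 2) {
        v += sycl::shift_group_left(sg, v, offset);
    }
    return v;
}
```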
Name | Last commit | Date
cmake | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00
include | Removes multiple newlines at the end of files that is breaking the editorconfig step of CI. (#8258) | 2024-07-02 12:18:10 -04:00
src | [SYCL] Fix WARP_SIZE=16 bug of Intel GPU (#8266) | 2024-07-05 13:06:13 +08:00
CMakeLists.txt | ggml : add GGML_CUDA_USE_GRAPHS option, restore GGML_CUDA_FORCE_CUBLAS (cmake) (#8140) | 2024-06-26 21:34:14 +02:00
ggml_vk_generate_shaders.py | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00