llama.cpp/ggml
luoyu-intel d08c20edde
[SYCL] Fix the sub group size of Intel (#8106)
* use the warp_size macro for all SYCL kernels
* fix the mask of permute_sub_group_by_xor
* fix rms_norm with the correct warp number
* fix rms_norm_f32/group_norm_f32
* move norm to the norm.cpp file
* fix a quantize bug
* fix mmvq's batch size
2024-07-02 10:16:00 +08:00
cmake                       | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00
include                     | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00
src                         | [SYCL] Fix the sub group size of Intel (#8106) | 2024-07-02 10:16:00 +08:00
CMakeLists.txt              | ggml : add GGML_CUDA_USE_GRAPHS option, restore GGML_CUDA_FORCE_CUBLAS (cmake) (#8140) | 2024-06-26 21:34:14 +02:00
ggml_vk_generate_shaders.py | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00