llama.cpp/ggml

Latest commit: 368645698a by Nicholai Tukanov (2024-07-11 18:49:15 +02:00)

ggml : add NVPL BLAS support (#8329) (#8425)

* ggml : add NVPL BLAS support

* ggml : replace `<BLASLIB>_ENABLE_CBLAS` with `GGML_BLAS_USE_<BLASLIB>`

Co-authored-by: ntukanov <ntukanov@nvidia.com>
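As a rough sketch of how the BLAS backend this commit touches is typically enabled, one might configure the build through ggml's generic BLAS options; the exact option names and the `NVPL` vendor value are assumptions inferred from the commit title and may differ between versions:

```shell
# Hedged sketch: configure a build with the NVPL BLAS backend enabled.
# GGML_BLAS / GGML_BLAS_VENDOR and the NVPL vendor value are assumed from
# this commit's description; verify against the project's build docs.
cmake -B build -DGGML_BLAS=ON -DGGML_BLAS_VENDOR=NVPL
cmake --build build --config Release
```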
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| cmake | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| include | ggml : add AArch64 optimized GEMV and GEMM Q4 kernels (#5780) | 2024-07-10 15:14:51 +03:00 |
| src | ggml : add NVPL BLAS support (#8329) (#8425) | 2024-07-11 18:49:15 +02:00 |
| CMakeLists.txt | ggml : move sgemm sources to llamafile subfolder (#8394) | 2024-07-10 15:23:29 +03:00 |
| ggml_vk_generate_shaders.py | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00 |