llama.cpp/ggml/src/ggml-cpu
Latest commit: 2024-11-17 10:39:22 +02:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| cmake | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| llamafile | llamafile : fix include path (#0) | 2024-11-16 20:36:26 +02:00 |
| CMakeLists.txt | Make updates to fix issues with clang-cl builds while using AVX512 flags (#10314) | 2024-11-15 22:27:00 +01:00 |
| ggml-cpu-aarch64.c | ggml : optimize Q4_0 into Q4_0_X_Y repack (#10324) | 2024-11-16 01:53:37 +01:00 |
| ggml-cpu-aarch64.h | backend cpu: add online flow for aarch64 Q4_0 GEMV/GEMM kernels (#9921) | 2024-11-15 01:28:50 +01:00 |
| ggml-cpu-impl.h | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| ggml-cpu-quants.c | AVX BF16 and single scale quant optimizations (#10212) | 2024-11-15 12:47:58 +01:00 |
| ggml-cpu-quants.h | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| ggml-cpu.c | ggml : fix undefined reference to 'getcpu' (#10354) | 2024-11-17 10:39:22 +02:00 |
| ggml-cpu.cpp | backend cpu: add online flow for aarch64 Q4_0 GEMV/GEMM kernels (#9921) | 2024-11-15 01:28:50 +01:00 |