llama.cpp/ggml/src/ggml-cpu

Latest commit: 25669aa92c by Charles Xu, 2024-11-26 13:37:05 +02:00
ggml-cpu: cmake add arm64 cpu feature check for macos (#10487)

* ggml-cpu: cmake add arm64 cpu feature check for macos
* use vmmlaq_s32 for compile option i8mm check
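
The latest commit verifies i8mm support at configure time instead of assuming it from the target architecture. A minimal sketch of such a check, assuming CMake's standard CheckCXXSourceCompiles module; the result variable, -march flags, and ARCH_FLAGS list are illustrative, not the exact code from #10487:

    # Hypothetical configure-time probe: does the compiler accept the
    # i8mm matrix-multiply intrinsic vmmlaq_s32 when +i8mm is requested?
    include(CheckCXXSourceCompiles)
    set(CMAKE_REQUIRED_FLAGS "-march=armv8.2-a+i8mm")
    check_cxx_source_compiles("
        #include <arm_neon.h>
        int main() {
            int8x16_t a = vdupq_n_s8(1);
            int32x4_t acc = vdupq_n_s32(0);
            acc = vmmlaq_s32(acc, a, a);   /* requires FEAT_I8MM */
            return 0;
        }" COMPILER_SUPPORTS_I8MM)
    unset(CMAKE_REQUIRED_FLAGS)

    if (COMPILER_SUPPORTS_I8MM)
        list(APPEND ARCH_FLAGS -march=armv8.2-a+i8mm)
    endif()

If the probe fails (for example, with an older Apple clang on macOS), the +i8mm option is simply not added and the build falls back to code paths that do not use vmmlaq_s32.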
File                 Last commit                                                                Date
cmake                ggml : build backends as libraries (#10256)                                2024-11-14 18:04:35 +01:00
llamafile            llamafile : fix include path (#0)                                          2024-11-16 20:36:26 +02:00
CMakeLists.txt       ggml-cpu: cmake add arm64 cpu feature check for macos (#10487)             2024-11-26 13:37:05 +02:00
ggml-cpu-aarch64.c   ggml : optimize Q4_0 into Q4_0_X_Y repack (#10324)                         2024-11-16 01:53:37 +01:00
ggml-cpu-aarch64.h   backend cpu: add online flow for aarch64 Q4_0 GEMV/GEMM kernels (#9921)    2024-11-15 01:28:50 +01:00
ggml-cpu-impl.h      ggml : build backends as libraries (#10256)                                2024-11-14 18:04:35 +01:00
ggml-cpu-quants.c    AVX BF16 and single scale quant optimizations (#10212)                     2024-11-15 12:47:58 +01:00
ggml-cpu-quants.h    ggml : build backends as libraries (#10256)                                2024-11-14 18:04:35 +01:00
ggml-cpu.c           ggml : add support for dynamic loading of backends (#10469)                2024-11-25 15:13:39 +01:00
ggml-cpu.cpp         ggml : add support for dynamic loading of backends (#10469)                2024-11-25 15:13:39 +01:00