Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-11-14 23:09:53 +00:00.
Commit 60ce97c9d8:

- add Intel AMX ISA detection (see the init sketch after this list)
- add a VNNI kernel for GEMV cases
- add VNNI and AMX kernel support for block_q8_0 (a tile-multiply sketch follows below)
- fix an issue with packing the B matrix
- enable OpenMP
- fine-tune the AMX kernel; switch to the ATen parallel pattern
- add an error message for nested parallelism
- add F16 support in ggml-amx
- add AMX kernels for the QK_K quant formats: Q4_K, Q5_K, Q6_K, and IQ4_XS
- update CMakeLists.txt and the README; fix compilation warnings, including when AMX is not enabled
- move ggml_amx_init from ggml.c to ggml-amx/mmq.cpp
- update CMakeLists.txt with -mamx-tile, -mamx-int8, and -mamx-bf16; add a march dependency
- add AMX as a ggml-backend
- update header includes: the old path for immintrin.h has changed to ggml-cpu-impl.h
- apply weight prepacking in the set_tensor method of the ggml-backend interface
- change ggml_backend_buffer_is_host to return false for the AMX backend
- fix supports_op; use device registration for the AMX backend
- set .buffer_from_host_ptr to false for the AMX backend
- assorted code cleanup, minor changes, and compile/rebase fixes
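The first item in the list above, Intel AMX ISA detection and initialization, has a kernel-side requirement that is easy to miss: on Linux, a process must ask the kernel for permission to use the AMX tile state before executing any TILE instruction, or the first such instruction raises SIGILL. A minimal sketch of what an init routine like ggml_amx_init has to do — the arch_prctl constants come from the Linux x86-64 ABI, but the function body is an illustration, not the repository's actual code:

```cpp
#include <sys/syscall.h>
#include <unistd.h>

#define ARCH_REQ_XCOMP_PERM 0x1023  // arch_prctl: request use of an XSAVE-managed feature
#define XFEATURE_XTILEDATA  18      // state component for the AMX tile data registers

// Ask the kernel to enable the AMX tile state for this process.
// Returns true on success; false means AMX is unavailable (old kernel or CPU).
static bool ggml_amx_init() {
    if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA) != 0) {
        return false;
    }
    return true;
}
```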
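For context on what the VNNI/AMX kernels and the "packing B" fix are about: AMX computes on two-dimensional tile registers whose shapes are configured at runtime, and a single _tile_dpbssd instruction accumulates an int8 × int8 product into an int32 tile. It requires the B operand in a VNNI-interleaved layout (groups of 4 consecutive K values per row), which is why the backend repacks weights once at upload time rather than on every matmul. A self-contained sketch of one 16×16 tile multiply with K = 64, assuming a CPU with amx-tile/amx-int8 and the permission request above; compile with -mamx-tile -mamx-int8:

```cpp
#include <immintrin.h>
#include <cstdint>

// 64-byte tile configuration block consumed by ldtilecfg (_tile_loadconfig).
struct alignas(64) tile_config_t {
    uint8_t  palette_id;   // must be 1
    uint8_t  start_row;
    uint8_t  reserved[14];
    uint16_t colsb[16];    // bytes per row of each tile register
    uint8_t  rows[16];     // number of rows of each tile register
};

// C (16x16 int32) += A (16x64 int8, row-major) * B (64x16 int8, VNNI-packed).
// B must already be repacked so each of its 16 rows holds, per output column,
// 4 consecutive K values -- the "prepacking" done in set_tensor.
void amx_gemm_tile_16x16_k64(const int8_t * A, const int8_t * B, int32_t * C) {
    tile_config_t cfg = {};
    cfg.palette_id = 1;
    cfg.rows[0] = 16; cfg.colsb[0] = 16 * sizeof(int32_t);  // tmm0: C accumulator
    cfg.rows[1] = 16; cfg.colsb[1] = 64;                    // tmm1: A, 16 rows x 64 int8
    cfg.rows[2] = 16; cfg.colsb[2] = 64;                    // tmm2: B, VNNI layout
    _tile_loadconfig(&cfg);

    _tile_zero(0);
    _tile_loadd(1, A, 64);   // last argument is the row stride in bytes
    _tile_loadd(2, B, 64);
    _tile_dpbssd(0, 1, 2);   // tmm0 += tmm1 * tmm2 (signed int8, int32 accumulate)
    _tile_stored(0, C, 16 * sizeof(int32_t));
    _tile_release();
}
```

This prepacked layout is also why the commit flips ggml_backend_buffer_is_host to false and leaves .buffer_from_host_ptr unset for the AMX backend: once the weights are tile-packed, the buffer contents no longer match the plain row-major bytes a host pointer would be expected to hold.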
Directories:

- ggml-amx
- ggml-cann
- ggml-cuda
- ggml-sycl
- kompute @ 4565194ed7 (submodule)
- kompute-shaders
- llamafile
- vulkan-shaders

Files:

- CMakeLists.txt
- ggml-aarch64.c
- ggml-aarch64.h
- ggml-alloc.c
- ggml-amx.cpp
- ggml-backend-impl.h
- ggml-backend.cpp
- ggml-blas.cpp
- ggml-cann.cpp
- ggml-common.h
- ggml-cpu-impl.h
- ggml-cuda.cu
- ggml-impl.h
- ggml-kompute.cpp
- ggml-metal.m
- ggml-metal.metal
- ggml-quants.c
- ggml-quants.h
- ggml-rpc.cpp
- ggml-sycl.cpp
- ggml-vulkan.cpp
- ggml.c