llama.cpp/docs
Djip007 19d8762ab6
ggml : refactor online repacking (#10446)
* rename ggml-cpu-aarch64.c to .cpp

* restructure the extra CPU backend.

- clean up Q4_0_N_M and IQ4_0_N_M
  - remove them from the "file" tensor types
  - allow them only via dynamic repack

- extract the extra CPU buffer types and convert them to C++
  - hbm
  - "aarch64"

- more generic use of extra buffers
  - generalise extra_supports_op
  - new API for "cpu-accel" backends:
     - amx
     - aarch64

* clang-format

* Clean up the Q4_0_N_M reference implementation

Enable restrict in C++

* add the GGML_OP_MUL_MAT_ID op for Q4_0_N_M with runtime repack

* add/correct checks on tensor size for Q4 repacking.

* Update ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* add debug logs for repacks.

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-12-07 14:37:50 +02:00
backend make : deprecate (#10514) 2024-12-02 21:22:53 +02:00
development docs: fix links in development docs [no ci] (#8481) 2024-07-15 14:46:39 +03:00
android.md docs: fix outdated usage of llama-simple (#10565) 2024-11-28 16:03:11 +01:00
build.md ggml : refactor online repacking (#10446) 2024-12-07 14:37:50 +02:00
docker.md musa: add docker image support (#9685) 2024-10-10 20:10:37 +02:00
install.md Reorganize documentation pages (#8325) 2024-07-05 18:08:32 +02:00