GGUF split Example

A CLI tool to split and merge GGUF files.

Command line options (a usage sketch follows the list):

  • --split: split a GGUF file into multiple GGUF files (default operation).
  • --split-max-size: maximum size per split, in M or G (e.g. 500M or 2G).
  • --split-max-tensors: maximum number of tensors per split (default: 128).
  • --merge: merge multiple GGUF files into a single GGUF file.
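
The sketch below shows typical invocations. It assumes the tool is built as llama-gguf-split (per the binary renaming in the build system) and that split mode takes the input model followed by an output path prefix; the argument order and the -00001-of-00003 shard naming are illustrative assumptions, so check --help for the exact usage.

```sh
# Split by tensor count (assumed positional args: input model, output prefix)
./llama-gguf-split --split --split-max-tensors 128 ggml-model-f16.gguf ggml-model-f16

# Split by size instead of tensor count
./llama-gguf-split --split --split-max-size 2G ggml-model-f16.gguf ggml-model-f16

# Merge the shards back into one file, starting from the first split
# (shard naming shown is the typical -NNNNN-of-NNNNN pattern, assumed here)
./llama-gguf-split --merge ggml-model-f16-00001-of-00003.gguf ggml-model-f16-merged.gguf
```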