# llama.cpp/examples/run

This example demonstrates minimal usage of llama.cpp for running models.

```bash
./llama-run Meta-Llama-3.1-8B-Instruct.gguf
...
```