# llama.cpp/example/simple-chat
The purpose of this example is to demonstrate minimal usage of llama.cpp to create a simple chat program using the built-in chat template stored in GGUF files.
```bash
./llama-simple-chat -m ./models/llama-7b-v2/ggml-model-f16.gguf -c 2048
...
```
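
For reference, the sketch below shows one way to inspect the built-in chat template that the example relies on: GGUF files store it as a string under the `tokenizer.chat_template` metadata key, which can be read through the llama.cpp C API. This is not the example's actual source, just a minimal sketch; function names such as `llama_load_model_from_file` and `llama_model_meta_val_str` follow the C API as of early 2025 and may have been renamed in later releases.

```c
// Minimal sketch: load a GGUF model and print its built-in chat template,
// read from the "tokenizer.chat_template" metadata key.
#include "llama.h"
#include <stdio.h>

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s model.gguf\n", argv[0]);
        return 1;
    }

    llama_backend_init();

    struct llama_model_params mparams = llama_model_default_params();
    struct llama_model * model = llama_load_model_from_file(argv[1], mparams);
    if (!model) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // GGUF files embed the chat template as a string metadata value.
    char tmpl[8192];
    int32_t n = llama_model_meta_val_str(model, "tokenizer.chat_template", tmpl, sizeof(tmpl));
    if (n >= 0) {
        printf("built-in chat template:\n%s\n", tmpl);
    } else {
        printf("model has no built-in chat template\n");
    }

    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

The chat program itself goes further: it applies this template to the running conversation before tokenizing and generating each reply, which is what lets `llama-simple-chat` work with any instruct model that ships a template in its GGUF metadata.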