# llama.cpp/example/simple-chat

The purpose of this example is to demonstrate a minimal usage of llama.cpp to create a simple chat program using the built-in chat template in GGUF files.

```bash
./llama-simple-chat -m ./models/llama-7b-v2/ggml-model-f16.gguf -c 2048
...
```