# llama.cpp/examples/speculative-simple

Demonstration of basic greedy speculative decoding

```bash
./bin/llama-speculative-simple \
    -m  ../models/qwen2.5-32b-coder-instruct/ggml-model-q8_0.gguf \
    -md ../models/qwen2.5-1.5b-coder-instruct/ggml-model-q4_0.gguf \
    -f test.txt -c 0 -ngl 99 --color \
    --sampling-seq k --top-k 1 -fa --temp 0.0 \
    -ngld 99 --draft-max 16 --draft-min 5 --draft-p-min 0.9
```
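
The idea behind greedy speculative decoding is that the small draft model (`-md`) proposes a run of tokens cheaply, and the large target model (`-m`) verifies them in one pass, keeping the longest prefix on which both models agree. The toy Python sketch below is illustrative only, not llama.cpp code; the model calls are stand-in functions and all names are hypothetical:

```python
# Toy sketch of the greedy speculative decoding loop (illustrative only;
# the real logic lives in the C++ sources of this example).

def greedy_speculate(target_next, draft_next, prompt, n_new, draft_max=16):
    """Generate n_new tokens, verifying draft proposals against the target.

    target_next / draft_next: callables mapping a token sequence to the
    greedily chosen next token (stand-ins for real model evaluations).
    """
    out = list(prompt)
    while len(out) - len(prompt) < n_new:
        # 1. The draft model proposes up to draft_max tokens autoregressively.
        draft = []
        ctx = list(out)
        for _ in range(draft_max):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # 2. The target model verifies: accept the longest prefix where its
        #    own greedy choice matches the draft; the first mismatch emits
        #    the target's corrected token and ends the round.
        for t in draft:
            want = target_next(out)
            out.append(want)           # the target's token is always kept
            if want != t:              # mismatch: discard the rest of the draft
                break
            if len(out) - len(prompt) >= n_new:
                break
    return out[len(prompt):][:n_new]

# Tiny demo: both "models" predict the next integer, but the draft goes
# wrong at multiples of 5, so each round accepts a short run, corrects
# one token, and starts a fresh draft.
target = lambda seq: seq[-1] + 1
drafter = lambda seq: seq[-1] + (2 if seq[-1] % 5 == 0 else 1)

print(greedy_speculate(target, drafter, [0], 10))
# → [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

Because verification is greedy (`--top-k 1 --temp 0.0` in the invocation above), the output is identical to what the target model alone would produce; the draft model only changes how fast it is produced.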