llama.cpp/examples/speculative

Demonstration of speculative decoding and tree-based speculative decoding techniques
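The core idea can be sketched as follows. This is a minimal greedy toy, not llama.cpp's implementation: the `target`/`draft` callables and the `speculative_decode` helper are hypothetical stand-ins for a large target model and a small draft model. The draft proposes a few tokens cheaply; the target verifies them in one pass and accepts the longest prefix it agrees with, appending its own token at the first mismatch.

```python
def speculative_decode(target, draft, prompt, n_draft, n_predict):
    """Toy greedy speculative decoding (hypothetical sketch).

    `target` and `draft` each map a token sequence to the next token.
    The draft model proposes `n_draft` tokens; the target model verifies
    them and accepts the longest agreeing prefix, then supplies its own
    token for the first mismatch (or one extra token if all are accepted).
    """
    out = list(prompt)
    while len(out) - len(prompt) < n_predict:
        # 1. Draft phase: propose n_draft tokens autoregressively (cheap model).
        proposal, ctx = [], list(out)
        for _ in range(n_draft):
            t = draft(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2. Verify phase: the target checks each proposed token in turn.
        for tok in proposal:
            expect = target(out)
            if tok == expect:
                out.append(tok)      # accepted: draft and target agree
            else:
                out.append(expect)   # rejected: keep the target's token, drop the rest
                break
        else:
            # All drafts accepted: the target's next token comes "for free".
            out.append(target(out))
    return out[len(prompt):len(prompt) + n_predict]
```

By construction, greedy speculative decoding produces exactly the target model's greedy output; the draft model only changes how many target evaluations are needed, never the result.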

More info: