llama.cpp/examples/server/tests/features
Latest commit: 1cc0155d04 by Georgi Gerganov, 2024-05-20 10:16:41 +03:00
server : tuning tests (#7388)

* server : don't pass temperature as string
* server : increase timeout
* tests : fix the fix 0.8f -> 0.8
* tests : set explicit temperature
Name                  Last commit message                                                              Last commit date
steps/                server : tuning tests (#7388)                                                    2024-05-20 10:16:41 +03:00
embeddings.feature    Improve usability of --model-url & related flags (#6930)                         2024-04-30 00:52:50 +01:00
environment.py        server tests : more pythonic process management; fix bare except: (#6146)       2024-03-20 06:33:49 +01:00
issues.feature        server: tests: passkey challenge / self-extend with context shift demo (#5832)  2024-03-02 22:00:14 +01:00
parallel.feature      common: llama_load_model_from_url split support (#6192)                         2024-03-23 18:07:00 +01:00
passkey.feature       server: tests: passkey challenge / self-extend with context shift demo (#5832)  2024-03-02 22:00:14 +01:00
results.feature       server : tuning tests (#7388)                                                    2024-05-20 10:16:41 +03:00
security.feature      json-schema-to-grammar improvements (+ added to server) (#5978)                 2024-03-21 11:50:43 +00:00
server.feature        server : add_special option for tokenize endpoint (#7059)                       2024-05-08 15:27:58 +03:00
slotsave.feature      llama : save and restore kv cache for single seq id (#6341)                     2024-04-08 15:43:30 +03:00
wrong_usages.feature  server: tests: passkey challenge / self-extend with context shift demo (#5832)  2024-03-02 22:00:14 +01:00