llama.cpp/examples/server/tests/features
Latest commit: e586ee4259 (Benjamin Findley), 2024-05-13 12:40:08 +10:00
change default temperature of OAI compat API from 0 to 1 (#7226)
* change default temperature of OAI compat API from 0 to 1
* make tests explicitly send temperature to OAI API
steps/                  change default temperature of OAI compat API from 0 to 1 (#7226)                 2024-05-13 12:40:08 +10:00
embeddings.feature      Improve usability of --model-url & related flags (#6930)                         2024-04-30 00:52:50 +01:00
environment.py          server tests : more pythonic process management; fix bare except: (#6146)       2024-03-20 06:33:49 +01:00
issues.feature          server: tests: passkey challenge / self-extend with context shift demo (#5832)   2024-03-02 22:00:14 +01:00
parallel.feature        common: llama_load_model_from_url split support (#6192)                          2024-03-23 18:07:00 +01:00
passkey.feature         server: tests: passkey challenge / self-extend with context shift demo (#5832)   2024-03-02 22:00:14 +01:00
results.feature         Server: add tests for batch size, different seeds (#6950)                        2024-05-01 17:52:55 +02:00
security.feature        json-schema-to-grammar improvements (+ added to server) (#5978)                  2024-03-21 11:50:43 +00:00
server.feature          server : add_special option for tokenize endpoint (#7059)                        2024-05-08 15:27:58 +03:00
slotsave.feature        llama : save and restore kv cache for single seq id (#6341)                      2024-04-08 15:43:30 +03:00
wrong_usages.feature    server: tests: passkey challenge / self-extend with context shift demo (#5832)   2024-03-02 22:00:14 +01:00
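The `environment.py` plus `*.feature` layout, with step definitions under `steps/`, follows the convention of Python's behave BDD framework, which these server tests use. As an illustrative sketch of the format only (the feature name and steps below are invented, not copied from any file in this directory), a behave feature file pairs Gherkin scenarios with Python step implementations:

```gherkin
# Illustrative sketch: a minimal behave-style scenario.
# The feature, scenario, and step wording here are hypothetical,
# not taken from this directory's actual .feature files.
Feature: Completion endpoint

  Scenario: Server returns a completion
    Given a server listening on localhost with an available slot
    When a completion request is sent with a short prompt
    Then a non-empty completion is returned
```

Each `Given`/`When`/`Then` line is matched against a decorated Python function in `steps/`, so a scenario like the one above runs the server, issues the request, and asserts on the response without any test logic living in the feature file itself.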