Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-27 20:04:35 +00:00)
0e70ba686e

* server : add "tokens" output
* server : update readme
* server : return tokens ids only if requested
* tests : improve "tokens" type check
* server : remove "tokens" from the OAI endpoint

ggml-ci

Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
* test_basic.py
* test_chat_completion.py
* test_completion.py
* test_ctx_shift.py
* test_embedding.py
* test_infill.py
* test_lora.py
* test_rerank.py
* test_security.py
* test_slot_save.py
* test_speculative.py
* test_tokenize.py