Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-27 20:04:35 +00:00).

Commit 152610eda9:
* server : add "tokens" output (ggml-ci)
* server : output embeddings for all tokens when pooling = none (ggml-ci)
* server : update readme [no ci]
* server : fix spacing [no ci]
* server : be explicit about the pooling type in the tests (ggml-ci)
* server : update /embeddings and /v1/embeddings endpoints (ggml-ci)
* server : do not normalize embeddings when there is no pooling (ggml-ci)
* server : update readme (ggml-ci)
* server : fixes
* tests : update server tests (ggml-ci)
* server : update readme [no ci]
* server : remove rebase artifact

Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
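The commit's key behavioral change is that with `pooling = none` the server returns one embedding per input token and skips normalization, while a pooled mode returns a single vector. A minimal Python sketch of that distinction follows; the function names and mean-pooling choice are illustrative assumptions, not llama.cpp's actual implementation.

```python
import math

def l2_normalize(vec):
    """Scale a vector to unit L2 norm (no-op for the zero vector)."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm > 0 else vec

def embed(token_embeddings, pooling="mean"):
    """Illustrative sketch of the commit's behavior (hypothetical helper,
    not llama.cpp code):
      pooling == "none" -> all per-token embeddings, NOT normalized
      pooling == "mean" -> one pooled embedding, L2-normalized
    """
    if pooling == "none":
        return token_embeddings
    if pooling == "mean":
        n = len(token_embeddings)
        pooled = [sum(col) / n for col in zip(*token_embeddings)]
        return [l2_normalize(pooled)]
    raise ValueError(f"unknown pooling type: {pooling}")

tokens = [[3.0, 4.0], [0.0, 2.0]]
print(embed(tokens, pooling="none"))  # two raw per-token vectors
print(embed(tokens, pooling="mean"))  # one unit-length pooled vector
```

This mirrors why the tests were updated to "be explicit about the pooling type": the shape and scaling of the response depend on it.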
Files in this directory:

- test_basic.py
- test_chat_completion.py
- test_completion.py
- test_ctx_shift.py
- test_embedding.py
- test_infill.py
- test_lora.py
- test_rerank.py
- test_security.py
- test_slot_save.py
- test_speculative.py
- test_tokenize.py