llama.cpp/examples/server/tests/unit
Xuan Son Nguyen 0da5d86026
server : allow using LoRA adapters per-request (#10994)
* slot.can_batch_with

* lora per request

* test: force disable cache prompt

* move can_batch_with check

* fix condition

* add slow test with llama 8b

* update docs

* move lora change task to queue

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* lora_base

* remove redundant check

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-01-02 15:05:18 +01:00
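The commit above (#10994) lets a client set LoRA adapter scales on a per-request basis instead of only at server startup. A minimal sketch of what such a request body could look like, assuming a llama.cpp server started with one or more `--lora` adapters and a `lora` request field taking `{id, scale}` objects, where `id` is the load-order index of the adapter (the exact field shape should be checked against the server docs):

```python
import json
import urllib.request


def build_lora_payload(prompt, scales, n_predict=32):
    """Build a /completion request body with per-request LoRA scales.

    scales[i] is the weight applied to the i-th adapter that was loaded
    via --lora at server startup (assumption based on #10994).
    """
    return {
        "prompt": prompt,
        "n_predict": n_predict,
        "lora": [{"id": i, "scale": s} for i, s in enumerate(scales)],
    }


def send_completion(payload, base_url="http://localhost:8080"):
    """POST the payload to a locally running llama-server (hypothetical host/port)."""
    req = urllib.request.Request(
        f"{base_url}/completion",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

For example, `build_lora_payload("Hello", [0.0, 1.0])` would disable the first adapter and fully enable the second for that one request only; `test_lora.py` in this directory exercises the real endpoint behavior.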
test_basic.py server : add flag to disable the web-ui (#10762) (#10751) 2024-12-10 18:22:34 +01:00
test_chat_completion.py server : clean up built-in template detection (#11026) 2024-12-31 15:22:01 +01:00
test_completion.py server : add OAI compat for /v1/completions (#10974) 2024-12-31 12:34:13 +01:00
test_ctx_shift.py server : replace behave with pytest (#10416) 2024-11-26 16:20:18 +01:00
test_embedding.py server : add support for "encoding_format": "base64" to the */embeddings endpoints (#10967) 2024-12-24 21:33:04 +01:00
test_infill.py server : fix format_infill (#10724) 2024-12-08 23:04:29 +01:00
test_lora.py server : allow using LoRA adapters per-request (#10994) 2025-01-02 15:05:18 +01:00
test_rerank.py server : fill usage info in embeddings and rerank responses (#10852) 2024-12-17 18:00:24 +02:00
test_security.py server : replace behave with pytest (#10416) 2024-11-26 16:20:18 +01:00
test_slot_save.py server : replace behave with pytest (#10416) 2024-11-26 16:20:18 +01:00
test_speculative.py server : allow using LoRA adapters per-request (#10994) 2025-01-02 15:05:18 +01:00
test_tokenize.py server : replace behave with pytest (#10416) 2024-11-26 16:20:18 +01:00