# Server tests

Python-based server test scenarios using BDD and behave.

Tests target GitHub workflow job runners with 4 vCPUs.

Requests are made with aiohttp, an asyncio-based HTTP client.

Note: if inference on the host is faster than on the GitHub runners, the parallel scenarios may randomly fail. To mitigate this, you can increase the `n_predict` and `kv_size` values.
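
For orientation, below is a minimal sketch of what a behave step that sends a request through aiohttp could look like. The step wording, endpoint path and payload are illustrative assumptions, not an excerpt from the actual `steps.py`:

```python
# Minimal, illustrative behave step using an asyncio/aiohttp client.
# The step text, endpoint and payload are assumptions, not the real steps.py.
import asyncio

import aiohttp
from behave import step


@step('a completion request is sent with prompt {prompt}')
def step_send_completion(context, prompt):
    async def do_request():
        # context.server_port is the port the scenario started the server on (see PORT below)
        base_url = f'http://localhost:{context.server_port}'
        async with aiohttp.ClientSession() as session:
            async with session.post(f'{base_url}/completion',
                                    json={'prompt': prompt, 'n_predict': 32}) as response:
                assert response.status == 200
                context.completion = await response.json()

    asyncio.run(do_request())
```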

### Install dependencies

```shell
pip install -r requirements.txt
```

### Run tests

1. Build the server:
   ```shell
   cd ../../..
   mkdir build
   cd build
   cmake ../
   cmake --build . --target server
   ```
2. Start the test: `./tests.sh`

It is possible to override some scenario step values with environment variables (a sketch of how they could be read follows the table):

| variable                 | description                                                                                         |
|--------------------------|-----------------------------------------------------------------------------------------------------|
| `PORT`                   | `context.server_port` to set the listening port of the server during the scenario, default: `8080`   |
| `LLAMA_SERVER_BIN_PATH`  | to change the server binary path, default: `../../../build/bin/server`                               |
| `DEBUG`                  | `"ON"` to enable step and server verbose mode (`--verbose`)                                          |
| `SERVER_LOG_FORMAT_JSON` | if set, switch server logs to JSON format                                                            |
| `N_GPU_LAYERS`           | number of model layers to offload to VRAM, `-ngl --n-gpu-layers`                                     |
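
As a rough illustration of how these variables could be consumed on the Python side, here is a sketch using the defaults from the table above (the actual `environment.py`/`steps.py` may read them differently, and the `0` default for `N_GPU_LAYERS` is an assumption):

```python
# Illustrative only: reading the overrides above with their documented defaults.
# The real test code may differ; the N_GPU_LAYERS default of 0 is an assumption.
import os

server_port = int(os.environ.get('PORT', '8080'))
server_bin_path = os.environ.get('LLAMA_SERVER_BIN_PATH', '../../../build/bin/server')
debug = os.environ.get('DEBUG', '') == 'ON'
server_log_format_json = 'SERVER_LOG_FORMAT_JSON' in os.environ
n_gpu_layers = int(os.environ.get('N_GPU_LAYERS', '0'))
```

For example, `PORT=8888 DEBUG=ON ./tests.sh` runs the suite against port 8888 with verbose output enabled.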

### Run `@bug`, `@wip` or `@wrong_usage` annotated scenarios

A Feature or Scenario must be annotated with `@llama.cpp` to be included in the default scope.

- `@bug` links a scenario to a GitHub issue.
- `@wrong_usage` marks scenarios that demonstrate user issues which are actually expected behavior.
- `@wip` focuses on a scenario that is a work in progress.
- `@slow` marks a heavy test, disabled by default (a sketch of one way to gate these follows this list).
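
Since `@slow` scenarios are disabled by default, a common behave pattern is to skip them in a `before_scenario` hook unless explicitly enabled. The sketch below is only an illustration under assumptions: the `SLOW_TESTS` variable and this exact logic are hypothetical, not necessarily what the suite's `environment.py` does:

```python
# Hypothetical sketch of gating @slow scenarios in environment.py.
# The SLOW_TESTS variable and this exact logic are assumptions.
import os


def before_scenario(context, scenario):
    if 'slow' in scenario.effective_tags and os.environ.get('SLOW_TESTS') != 'ON':
        scenario.skip('skipping @slow scenario (set SLOW_TESTS=ON to run it)')
```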

To run a scenario annotated with `@bug`, start:

```shell
DEBUG=ON ./tests.sh --no-skipped --tags bug --stop
```

After changing logic in `steps.py`, ensure that the `@bug` and `@wrong_usage` scenarios are updated.

```shell
./tests.sh --no-skipped --tags bug,wrong_usage || echo "expected to fail"
```