Server tests

Python-based server test scenarios using BDD and behave:

Tests target GitHub workflow job runners with 4 vCPUs.

Requests are made with aiohttp, an asyncio-based HTTP client.

Note: If the host machine's inference speed is faster than that of the GitHub runners, parallel scenarios may fail randomly. To mitigate this, you can increase the n_predict and kv_size values.

Install dependencies

pip install -r requirements.txt
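
Optionally, if you prefer to keep the test dependencies isolated from your system Python, a virtual environment works as well (a sketch, not required by the test suite):

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt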

Run tests

  1. Build the server

     cd ../../..
     cmake -B build -DLLAMA_CURL=ON
     cmake --build build --target llama-server

  2. Start the test: ./tests.sh
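
Put together, a typical first run starting from the repository root might look like this (the same steps as above, shown end to end):

# build the server once, then run the default scenarios
cmake -B build -DLLAMA_CURL=ON
cmake --build build --target llama-server
cd examples/server/tests
./tests.sh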

It is possible to override some scenario step values with environment variables:

variable                  description
PORT                      context.server_port, the listening port of the server during the scenario (default: 8080)
LLAMA_SERVER_BIN_PATH     path to the server binary (default: ../../../build/bin/llama-server)
DEBUG                     "ON" to enable verbose mode for the steps and the server (--verbose)
SERVER_LOG_FORMAT_JSON    if set, switch the server logs to JSON format
N_GPU_LAYERS              number of model layers to offload to VRAM (-ngl, --n-gpu-layers)
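
For example, to run the suite on a different port, with verbose output and GPU offloading (the values below are only illustrative):

PORT=8888 DEBUG=ON N_GPU_LAYERS=33 ./tests.sh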

Run @bug, @wip or @wrong_usage annotated scenarios

A Feature or Scenario must be annotated with @llama.cpp to be included in the default scope.

  • The @bug annotation links a scenario to a GitHub issue.
  • @wrong_usage scenarios demonstrate user issues that are actually expected behavior.
  • @wip marks a scenario as work in progress, to focus on it during development.
  • @slow marks a heavy test, disabled by default.

To run a scenario annotated with @bug, start:

DEBUG=ON ./tests.sh --no-skipped --tags bug --stop
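
The same tag filter works for the other annotations; for example, to focus only on work-in-progress scenarios:

DEBUG=ON ./tests.sh --no-skipped --tags wip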

After changing logic in steps.py, ensure that the @bug and @wrong_usage scenarios are updated.

./tests.sh --no-skipped --tags bug,wrong_usage || echo "expected to fail"