# Server tests
Python-based server test scenarios using BDD and behave:
- `issues.feature` Pending issues scenario
- `parallel.feature` Scenario involving multiple slots and concurrent requests
- `security.feature` Security, CORS and API Key
- `server.feature` Server base scenario: completion, embedding, tokenization, etc.
Tests target GitHub workflow job runners with 4 vCPUs.

Requests are made with aiohttp, an asyncio-based HTTP client.
Note: if the host's inference speed is faster than the GitHub runners', the parallel scenario may randomly fail. To mitigate this, you can increase the values of `n_predict` and `kv_size`.
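For illustration only, here is a minimal sketch of a behave step built on the aiohttp/asyncio client; the step name, the `/health` check, and the `context.base_url` attribute are assumptions for this example, not the actual `steps.py` code:

```python
# Hypothetical behave step using an asyncio/aiohttp client; the step wording
# and the context.base_url attribute are assumptions for illustration only.
import asyncio

import aiohttp
from behave import step


@step('the server /health endpoint responds with 200')
def step_health_ok(context):
    async def request_status() -> int:
        async with aiohttp.ClientSession() as session:
            async with session.get(f"{context.base_url}/health") as response:
                return response.status

    assert asyncio.run(request_status()) == 200
```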
## Install dependencies

`pip install -r requirements.txt`
## Run tests

1. Build the server:

    ```shell
    cd ../../..
    cmake -B build -DLLAMA_CURL=ON
    cmake --build build --target llama-server
    ```

2. Start the tests: `./tests.sh`
It is possible to override some scenario step values with environment variables:

| variable | description |
|----------|-------------|
| `PORT` | `context.server_port`, sets the listening port of the server during the scenario, default: `8080` |
| `LLAMA_SERVER_BIN_PATH` | changes the server binary path, default: `../../../build/bin/llama-server` |
| `DEBUG` | `"ON"` to enable steps and server verbose mode `--verbose` |
| `SERVER_LOG_FORMAT_JSON` | if set, switches server logs to JSON format |
| `N_GPU_LAYERS` | number of model layers to offload to VRAM (`-ngl`, `--n-gpu-layers`) |
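As a sketch of one way these overrides could be read on the Python side (a hypothetical helper, not the actual `steps.py` logic; any default not documented in the table above, such as `0` for `N_GPU_LAYERS`, is an assumption):

```python
# Hypothetical helper mirroring the table above; defaults other than the
# documented ones (e.g. n_gpu_layers = 0) are assumptions.
import os


def server_test_config() -> dict:
    return {
        "port": int(os.environ.get("PORT", "8080")),
        "server_bin_path": os.environ.get(
            "LLAMA_SERVER_BIN_PATH", "../../../build/bin/llama-server"
        ),
        "debug": os.environ.get("DEBUG", "") == "ON",
        "log_format_json": "SERVER_LOG_FORMAT_JSON" in os.environ,
        "n_gpu_layers": int(os.environ.get("N_GPU_LAYERS", "0")),
    }
```

For example, `PORT=8081 DEBUG=ON ./tests.sh` runs the suite against a server listening on port 8081 with verbose output enabled.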
## Run `@bug`, `@wip` or `@wrong_usage` annotated scenarios

A Feature or Scenario must be annotated with `@llama.cpp` to be included in the default scope.

- `@bug` aims to link a scenario with a GitHub issue.
- `@wrong_usage` marks user issues that are actually expected behavior
- `@wip` focuses on a scenario that is a work in progress
- `@slow` heavy test, disabled by default
To run a scenario annotated with `@bug`, start:

`DEBUG=ON ./tests.sh --no-skipped --tags bug --stop`
After changing logic in `steps.py`, ensure that the `@bug` and `@wrong_usage` scenarios are updated:

`./tests.sh --no-skipped --tags bug,wrong_usage || echo "should failed but compile"`