llama.cpp/examples/server/tests/features
Latest commit: 0b3bf966f4 by Xuan Son Nguyen
server : add --no-context-shift option (#9607)
* server : add --no-context-shift option

* small fix

* Update examples/server/tests/features/embeddings.feature

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* tests : minor fix

* revert usage of GGML_ASSERT

* update server documentation

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-09-23 22:23:54 +02:00
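
For context on the commit above: --no-context-shift makes the server stop instead of shifting the KV cache when the context window fills. Below is a minimal sketch of launching the server with this flag and waiting for readiness; the binary path, model file, and port are placeholder assumptions, while the flag itself and the /health endpoint are real.

```python
# A minimal sketch, not from the repo: start llama-server with context
# shifting disabled and poll /health until it reports ready. Binary path,
# model file, and port are placeholders.
import subprocess
import time
import urllib.request

server = subprocess.Popen([
    "./llama-server",        # assumed path to the built server binary
    "-m", "model.gguf",      # placeholder model file
    "--port", "8080",
    "--no-context-shift",    # flag added in #9607
])

try:
    for _ in range(30):      # wait up to ~30 s for startup
        try:
            with urllib.request.urlopen("http://127.0.0.1:8080/health") as resp:
                if resp.status == 200:
                    print("server ready, context shifting disabled")
                    break
        except OSError:      # connection refused / 503 while loading
            time.sleep(1)
    else:
        raise RuntimeError("server never became healthy")
finally:
    server.terminate()
    server.wait()
```
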
Name                   Last commit message                                                              Last commit date
steps                  server : add --no-context-shift option (#9607)                                  2024-09-23 22:23:54 +02:00
ctx_shift.feature      server : add --no-context-shift option (#9607)                                  2024-09-23 22:23:54 +02:00
embeddings.feature     server : add --no-context-shift option (#9607)                                  2024-09-23 22:23:54 +02:00
environment.py         server tests : more pythonic process management; fix bare except: (#6146)       2024-03-20 06:33:49 +01:00
issues.feature         server: tests: passkey challenge / self-extend with context shift demo (#5832)  2024-03-02 22:00:14 +01:00
lora.feature           server : add lora hotswap endpoint (WIP) (#8857)                                2024-08-06 17:33:39 +02:00
parallel.feature       server : simplify state machine for slot (#9283)                                2024-09-06 23:21:29 +02:00
passkey.feature        server : simplify state machine for slot (#9283)                                2024-09-06 23:21:29 +02:00
results.feature        server : fix temperature + disable some tests (#7409)                           2024-05-20 22:10:03 +10:00
security.feature       json-schema-to-grammar improvements (+ added to server) (#5978)                 2024-03-21 11:50:43 +00:00
server.feature         server : Add option to return token pieces in /tokenize endpoint (#9108)        2024-09-12 22:30:11 +02:00
slotsave.feature       Tokenizer SPM fixes for phi-3 and llama-spm (bugfix) (#7425)                    2024-05-21 14:39:48 +02:00
wrong_usages.feature   server : refactor multitask handling (#9274)                                    2024-09-02 17:11:51 +02:00
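
The .feature files above are Gherkin scenarios driven by the Python glue in environment.py and the steps directory via the behave framework. As a sketch of how that wiring works, the step phrases below are invented for illustration and not copied from the suite; the behave decorators and the /health endpoint are real.

```python
# Illustrative behave step definitions in the style of the steps/ directory;
# the step wording is hypothetical, not taken from the actual test files.
from behave import given, then  # standard behave decorators
import urllib.request

@given('a llama.cpp server listening on {base_url}')
def step_server_url(context, base_url):
    # behave passes a shared `context` object; state stored here is
    # visible to later steps in the same scenario
    context.base_url = base_url

@then('the health endpoint returns OK')
def step_health_ok(context):
    # /health is a real llama-server endpoint; 200 means the model is loaded
    with urllib.request.urlopen(f"{context.base_url}/health") as resp:
        assert resp.status == 200
```

A scenario in, say, ctx_shift.feature would then use matching Given/Then lines; behave discovers the step definitions automatically from the steps directory.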