llama.cpp/.devops
Latest commit 3fd62a6b1c by compilade
py : type-check all Python scripts with Pyright (#8341)
* py : type-check all Python scripts with Pyright

* server-tests : use trailing slash in openai base_url (see the urljoin sketch below the log)

* server-tests : add more type annotations

* server-tests : strip "chat" from base_url in oai_chat_completions

* server-tests : model metadata is a dict

* ci : disable pip cache in type-check workflow

The cache is not shared between branches, and it's 250MB in size,
so it would become quite a big part of the 10GB cache limit of the repo.

* py : fix new type errors from master branch

* tests : fix test-tokenizer-random.py

Apparently, gcc applies optimisations even when pre-processing,
which confuses pycparser (see the preprocessing sketch below the log).

* ci : only show warnings and errors in python type-check

The "information" level otherwise has entries
from 'examples/pydantic_models_to_grammar.py',
which could be confusing for someone trying to figure out what failed,
considering that these messages can safely be ignored
even though they look like errors.
2024-07-07 15:04:39 -04:00
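Two of the bullets above (the trailing slash and stripping "chat") come down to how relative URLs resolve against a base URL. A minimal illustration of the rule, using a hypothetical local endpoint rather than the test suite's actual code:

```python
from urllib.parse import urljoin

# Without a trailing slash, the last path segment of the base URL is
# treated as a resource name and replaced during resolution:
print(urljoin("http://localhost:8080/v1", "chat/completions"))
# -> http://localhost:8080/chat/completions   (the /v1 prefix is lost)

# With a trailing slash, relative paths resolve under /v1/ as intended:
print(urljoin("http://localhost:8080/v1/", "chat/completions"))
# -> http://localhost:8080/v1/chat/completions
```

The same rule presumably motivates the companion change: if the base_url already ends in ".../chat/" and the client itself appends "chat/completions", the segment is doubled, hence stripping "chat" from the base_url in oai_chat_completions.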
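For the test-tokenizer-random.py fix: pycparser only understands plain C, so the preprocessor pass must not inject optimisation-dependent output. Below is a minimal sketch, not the script's actual invocation; the header name and the extra -D macro are illustrative assumptions:

```python
from pycparser import parse_file

# Preprocess with gcc -E and *no* -O flags: optimisation can alter the
# preprocessed output in ways pycparser's pure-C grammar cannot parse.
ast = parse_file(
    "llama.h",                                      # hypothetical header
    use_cpp=True,
    cpp_path="gcc",
    cpp_args=["-E", "-P", "-D__attribute__(x)="],   # illustrative flags
)
ast.show()  # dump the parsed declarations
```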
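On the diagnostic filtering: Pyright can emit a machine-readable report with --outputjson, where each diagnostic carries a severity of "error", "warning", or "information". A hedged sketch of keeping only the first two levels, assuming pyright is installed and on PATH:

```python
import json
import subprocess

# Run Pyright and parse its JSON report (assumes `pyright` is on PATH).
proc = subprocess.run(["pyright", "--outputjson"],
                      capture_output=True, text=True)
report = json.loads(proc.stdout)

# Keep warnings and errors; drop the "information" entries that look
# like errors but are safe to ignore.
for diag in report["generalDiagnostics"]:
    if diag["severity"] in ("error", "warning"):
        print(f'{diag["file"]}:{diag["severity"]}: {diag["message"]}')
```

Pyright's CLI also accepts --level warning, which suppresses the lower-severity output directly and is likely the simpler route in CI.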
Name | Last commit message | Last commit date
nix | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00
cloud-v-pipeline | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
full-cuda.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00
full-rocm.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00
full.Dockerfile | docker : add openmp lib (#7780) | 2024-06-06 08:17:21 +03:00
llama-cli-cuda.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00
llama-cli-intel.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00
llama-cli-rocm.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00
llama-cli-vulkan.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00
llama-cli.Dockerfile | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
llama-cpp-cuda.srpm.spec | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00
llama-cpp.srpm.spec | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
llama-server-cuda.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00
llama-server-intel.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00
llama-server-rocm.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00
llama-server-vulkan.Dockerfile | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00
llama-server.Dockerfile | Add healthchecks to llama-server containers (#8081) | 2024-06-25 17:13:27 +02:00
tools.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00