llama.cpp/.devops/nix

compilade · 3fd62a6b1c
py : type-check all Python scripts with Pyright (#8341)
* py : type-check all Python scripts with Pyright

* server-tests : use trailing slash in openai base_url
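
  The trailing slash matters because clients typically resolve endpoint
  paths against base_url using RFC 3986 relative-reference rules, which
  `urllib.parse.urljoin` illustrates (the host, port, and `/v1` prefix
  below are placeholders, not taken from the tests):

  ```python
  from urllib.parse import urljoin

  # Without a trailing slash, relative resolution replaces the last
  # path segment, silently dropping the /v1 prefix:
  no_slash = urljoin("http://localhost:8080/v1", "chat/completions")
  # With a trailing slash, the relative path is appended under /v1:
  with_slash = urljoin("http://localhost:8080/v1/", "chat/completions")

  print(no_slash)    # http://localhost:8080/chat/completions
  print(with_slash)  # http://localhost:8080/v1/chat/completions
  ```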

* server-tests : add more type annotations

* server-tests : strip "chat" from base_url in oai_chat_completions

* server-tests : model metadata is a dict

* ci : disable pip cache in type-check workflow

The cache is not shared between branches, and at 250 MB it would
take up a sizeable share of the repository's 10 GB cache limit.

* py : fix new type errors from master branch

* tests : fix test-tokenizer-random.py

Apparently, gcc applies optimisations even when pre-processing,
which confuses pycparser.
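
A common symptom is that gcc's `-E` output contains linemarkers and
gcc-specific annotations that pycparser's strict C99 grammar rejects.
A minimal sketch of scrubbing such output before parsing, assuming a
hypothetical `scrub_gcc_extensions` helper (real setups often avoid
this at the source instead, e.g. by defining `__attribute__(x)` away
on the preprocessor command line):

```python
import re

def scrub_gcc_extensions(src: str) -> str:
    """Hypothetical helper: strip gcc-specific preprocessor output
    that a strict C99 parser such as pycparser rejects."""
    # Drop preprocessor linemarkers such as: # 1 "stdio.h" 1 3 4
    src = re.sub(r"(?m)^#.*\n", "", src)
    # Drop __attribute__((...)) annotations (handles one nesting level)
    src = re.sub(r"__attribute__\s*\(\((?:[^()]|\([^()]*\))*\)\)", "", src)
    return src

sample = '# 1 "a.h"\nint f(int *p) __attribute__((__nonnull__(1)));\n'
print(scrub_gcc_extensions(sample))  # int f(int *p) ;
```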

* ci : only show warnings and errors in python type-check

Otherwise, the "information" level includes entries
from 'examples/pydantic_models_to_grammar.py'
that look like errors but can safely be ignored,
which could mislead someone trying to figure out what failed.
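
Pyright's `--outputjson` mode reports each diagnostic with a severity
of "error", "warning", or "information". A minimal sketch of keeping
only the first two levels, run against a hand-written sample payload
(the file names and shape are illustrative, not real Pyright output):

```python
import json

# Sample shaped like pyright --outputjson output (abridged)
report = json.loads("""
{
  "generalDiagnostics": [
    {"file": "a.py", "severity": "error", "message": "..."},
    {"file": "examples/pydantic_models_to_grammar.py",
     "severity": "information", "message": "..."},
    {"file": "b.py", "severity": "warning", "message": "..."}
  ]
}
""")

def visible(diags, keep=("error", "warning")):
    """Keep only diagnostics at the given severity levels."""
    return [d for d in diags if d["severity"] in keep]

shown = visible(report["generalDiagnostics"])
print(len(shown))  # 2
```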
2024-07-07 15:04:39 -04:00
apps.nix                build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)  2024-06-13 00:41:52 +01:00
devshells.nix           flake.nix : rewrite (#4605)  2023-12-29 16:42:26 +02:00
docker.nix              nix: init singularity and docker images (#5056)  2024-02-22 11:44:10 -08:00
jetson-support.nix      flake.nix: expose full scope in legacyPackages  2023-12-31 13:14:58 -08:00
nixpkgs-instances.nix   nix: add a comment on the many nixpkgs-with-cuda instances  2024-01-22 12:19:30 +00:00
package.nix             py : type-check all Python scripts with Pyright (#8341)  2024-07-07 15:04:39 -04:00
scope.nix               nix: init singularity and docker images (#5056)  2024-02-22 11:44:10 -08:00
sif.nix                 build(nix): Introduce flake.formatter for nix fmt (#5687)  2024-03-01 15:18:26 -08:00