Mirror of https://github.com/ggerganov/llama.cpp.git
Synced 2024-11-13 14:29:52 +00:00
Commit 3fd62a6b1c
py : type-check all Python scripts with Pyright

* py : type-check all Python scripts with Pyright
* server-tests : use trailing slash in openai base_url
* server-tests : add more type annotations
* server-tests : strip "chat" from base_url in oai_chat_completions
* server-tests : model metadata is a dict
* ci : disable pip cache in type-check workflow

  The cache is not shared between branches, and it's 250MB in size, so it would become quite a big part of the 10GB cache limit of the repo.

* py : fix new type errors from master branch
* tests : fix test-tokenizer-random.py

  Apparently, gcc applies optimisations even when pre-processing, which confuses pycparser.

* ci : only show warnings and errors in python type-check

  The "information" level otherwise has entries from 'examples/pydantic_models_to_grammar.py', which could be confusing for someone trying to figure out what failed, considering that these messages can safely be ignored even though they look like errors.
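The "model metadata is a dict" bullet points at the kind of annotation fix Pyright enforces: giving a value an explicit `dict` type and narrowing `Any` before use. A minimal sketch of that pattern is below; the function names and metadata key are hypothetical illustrations, not code from the repo.

```python
from typing import Any


def get_model_metadata() -> dict[str, Any]:
    # Hypothetical placeholder; a real server test would fetch this
    # from the /props endpoint of the llama.cpp server.
    return {"general.architecture": "llama"}


def architecture(metadata: dict[str, Any]) -> str:
    arch = metadata.get("general.architecture")
    # The isinstance check narrows Any to str, which satisfies
    # Pyright's return-type check without a cast.
    assert isinstance(arch, str)
    return arch


print(architecture(get_model_metadata()))  # prints "llama"
```

Without the explicit `dict[str, Any]` annotation, Pyright would infer an unknown type for the metadata and flag every key access downstream.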
build-info.sh
check-requirements.sh
ci-run.sh
compare-commits.sh
compare-llama-bench.py
debug-test.sh
gen-authors.sh
gen-unicode-data.py
get-flags.mk
get-hellaswag.sh
get-pg.sh
get-wikitext-2.sh
get-wikitext-103.sh
get-winogrande.sh
hf.sh
install-oneapi.bat
pod-llama.sh
qnt-all.sh
run-all-perf.sh
run-all-ppl.sh
run-with-preset.py
server-llm.sh
sync-ggml-am.sh
sync-ggml.last
sync-ggml.sh
verify-checksum-models.py
xxd.cmake