Mirror of https://github.com/ggerganov/llama.cpp.git
Synced 2024-11-11 13:30:35 +00:00

Commit 9c4c9cc83f:

* Move convert.py to examples/convert-no-torch.py
* Fix CI, scripts, readme files
* convert-no-torch -> convert-legacy-llama
* Move vocab thing to vocab.py
* Fix convert-no-torch -> convert-legacy-llama
* Fix lost convert.py in ci/run.sh
* Fix imports
* Fix gguf not imported correctly
* Fix flake8 complaints
* Fix check-requirements.sh
* Get rid of ADDED_TOKENS_FILE, FAST_TOKENIZER_FILE
* Review fixes
12 lines · 449 B · Plaintext
# These requirements include all dependencies for all top-level python scripts
# for llama.cpp. Avoid adding packages here directly.
#
# Package versions must stay compatible across all top-level python scripts.
#

-r ./requirements/requirements-convert-legacy-llama.txt

-r ./requirements/requirements-convert-hf-to-gguf.txt
-r ./requirements/requirements-convert-hf-to-gguf-update.txt
-r ./requirements/requirements-convert-llama-ggml-to-gguf.txt
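This top-level file adds no packages of its own; it only aggregates the per-script requirement sets through pip's -r include syntax, so installing from it pulls in all four referenced files at once. The sketch below is not part of the repository: it is a minimal check, assuming the aggregate file is named requirements.txt and sits at the repository root, that each -r reference points at a file that actually exists. The helper name included_requirements is hypothetical.

#!/usr/bin/env python3
# Minimal sketch (not from the repo): list the files pulled in via pip's
# `-r` include lines and report whether each one exists on disk.
# Assumes the aggregate file is named requirements.txt at the repo root.
from pathlib import Path


def included_requirements(top_level: Path) -> list[Path]:
    """Return the paths referenced by `-r` lines in a requirements file."""
    refs = []
    for line in top_level.read_text().splitlines():
        line = line.strip()
        if line.startswith("-r "):
            # Resolve the include relative to the aggregate file's directory.
            refs.append((top_level.parent / line[3:].strip()).resolve())
    return refs


if __name__ == "__main__":
    for ref in included_requirements(Path("requirements.txt")):
        status = "ok" if ref.is_file() else "MISSING"
        print(f"{status:7} {ref}")

Keeping the per-script pins in the included files, rather than in this aggregate, is what lets the version constraints stay consistent across all of the top-level python scripts, as the header comment requires.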