llama.cpp (mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-12-26 19:34:35 +00:00)
Branch: sl/fix-ppl-seq-max
File: llama.cpp/requirements/requirements-convert-llama-ggml-to-gguf.txt (2 lines, 43 B, plaintext)
Latest commit (2024-05-30 11:40:00 +00:00): Move convert.py to examples/convert-legacy-llama.py (#7430)

* Move convert.py to examples/convert-no-torch.py
* Fix CI, scripts, readme files
* convert-no-torch -> convert-legacy-llama
* Move vocab thing to vocab.py
* Fix convert-no-torch -> convert-legacy-llama
* Fix lost convert.py in ci/run.sh
* Fix imports
* Fix gguf not imported correctly
* Fix flake8 complaints
* Fix check-requirements.sh
* Get rid of ADDED_TOKENS_FILE, FAST_TOKENIZER_FILE
* Review fixes
-r ./requirements-convert-legacy-llama.txt
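The single `-r` line above is a pip requirements-file include: installing this file makes pip also resolve everything listed in `requirements-convert-legacy-llama.txt` in the same directory. As a minimal sketch of that mechanism, here is a hypothetical resolver (the function name and test file layout are illustrative, not part of llama.cpp; real pip additionally handles `-c` constraints, editable installs, environment markers, etc.):

```python
from pathlib import Path

def collect_requirements(path: Path, seen=None):
    """Recursively expand pip-style '-r <file>' includes into a flat
    list of requirement lines. Simplified sketch for illustration."""
    if seen is None:
        seen = set()
    resolved = path.resolve()
    if resolved in seen:  # guard against include cycles
        return []
    seen.add(resolved)
    lines = []
    for raw in resolved.read_text().splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if line.startswith("-r "):
            # Included paths are resolved relative to the including file,
            # matching pip's behavior for '-r ./other-requirements.txt'.
            included = resolved.parent / line[3:].strip()
            lines.extend(collect_requirements(included, seen))
        else:
            lines.append(line)
    return lines
```

In practice one would simply run `pip install -r requirements/requirements-convert-llama-ggml-to-gguf.txt` from the repo root and let pip follow the include.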