Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-11-11 13:30:35 +00:00
File: llama.cpp/requirements/requirements-convert_llama_ggml_to_gguf.txt (2 lines, 43 B, Plaintext)
Commit: 07786a61a2
Move convert.py to examples/convert-legacy-llama.py (#7430), committed 2024-05-30 11:40:00 +00:00

* Move convert.py to examples/convert-no-torch.py
* Fix CI, scripts, readme files
* convert-no-torch -> convert-legacy-llama
* Move vocab thing to vocab.py
* Fix convert-no-torch -> convert-legacy-llama
* Fix lost convert.py in ci/run.sh
* Fix imports
* Fix gguf not imported correctly
* Fix flake8 complaints
* Fix check-requirements.sh
* Get rid of ADDED_TOKENS_FILE, FAST_TOKENIZER_FILE
* Review fixes
-r ./requirements-convert-legacy-llama.txt
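
The file's single directive simply delegates to requirements-convert-legacy-llama.txt in the same directory; pip resolves a nested -r include relative to the requirements file that contains it. A minimal usage sketch follows (an assumed invocation from a checkout of the repository root, not something shipped in the repository):

    # Install the dependencies for the GGML-to-GGUF conversion script by
    # pointing pip at this requirements file. pip follows the nested
    # "-r ./requirements-convert-legacy-llama.txt" include automatically.
    import subprocess
    import sys

    subprocess.check_call([
        sys.executable, "-m", "pip", "install",
        "-r", "requirements/requirements-convert_llama_ggml_to_gguf.txt",
    ])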