llama.cpp (mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-12-27 03:44:35 +00:00)
requirements/requirements-convert-hf-to-gguf-update.txt @ c8771ab5f8 (3 lines, 56 B, plaintext)
File contents, annotated with the last commit that touched each line:

-r ./requirements-convert-legacy-llama.txt

    Move convert.py to examples/convert-legacy-llama.py (#7430), 2024-05-30 11:40:00 +00:00
    * Move convert.py to examples/convert-no-torch.py
    * Fix CI, scripts, readme files
    * convert-no-torch -> convert-legacy-llama
    * Move vocab thing to vocab.py
    * Fix convert-no-torch -> convert-legacy-llama
    * Fix lost convert.py in ci/run.sh
    * Fix imports
    * Fix gguf not imported correctly
    * Fix flake8 complaints
    * Fix check-requirements.sh
    * Get rid of ADDED_TOKENS_FILE, FAST_TOKENIZER_FILE
    * Review fixes

torch~=2.2.1

    requirements : Bump torch and numpy for python3.12 (#8041), 2024-06-20 20:01:15 +00:00
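The two directives above follow pip's requirements-file format: `-r` pulls in another requirements file relative to this one, and `torch~=2.2.1` is a PEP 440 "compatible release" pin, equivalent to `>=2.2.1, <2.3`. As a minimal illustration (not pip's actual resolver, and assuming plain three-component numeric versions), the compatible-release check can be sketched as:

```python
def satisfies_compatible_release(version: str, pin: str) -> bool:
    """Check whether `version` satisfies a `~=`-style pin.

    Illustrative only: handles plain X.Y.Z numeric versions, not the
    full PEP 440 grammar (no pre-releases, epochs, or local versions).
    """
    v = tuple(int(part) for part in version.split("."))
    p = tuple(int(part) for part in pin.split("."))
    lower = p                    # torch~=2.2.1 means >= 2.2.1 ...
    upper = (p[0], p[1] + 1, 0)  # ... and < 2.3.0 (last component may vary)
    return lower <= v < upper


# torch~=2.2.1 accepts any 2.2.x patch release from 2.2.1 up,
# but rejects the next minor release:
print(satisfies_compatible_release("2.2.5", "2.2.1"))  # True
print(satisfies_compatible_release("2.3.0", "2.2.1"))  # False
```

In practice the whole file (including the transitively included `requirements-convert-legacy-llama.txt`) is installed with `pip install -r requirements/requirements-convert-hf-to-gguf-update.txt` from the repository root.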