llama.cpp/gguf-py/gguf

Latest commit: 9c4c9cc83f by Galunid, "Move convert.py to examples/convert-legacy-llama.py (#7430)", 2024-05-30 21:40:00 +10:00

* Move convert.py to examples/convert-no-torch.py
* Fix CI, scripts, readme files
* convert-no-torch -> convert-legacy-llama
* Move vocab thing to vocab.py
* Fix convert-no-torch -> convert-legacy-llama
* Fix lost convert.py in ci/run.sh
* Fix imports
* Fix gguf not imported correctly
* Fix flake8 complaints
* Fix check-requirements.sh
* Get rid of ADDED_TOKENS_FILE, FAST_TOKENIZER_FILE
* Review fixes
File               | Last commit                                                                          | Date
__init__.py        | convert-hf : support direct Q8_0 conversion (#7234)                                  | 2024-05-13 14:10:51 -04:00
constants.py       | Add support for DeepseekV2ForCausalLM (#7519)                                        | 2024-05-28 17:07:05 +02:00
gguf_reader.py     | gguf-py : fix and simplify quantized shape round-trip (#7483)                        | 2024-05-25 11:11:48 +10:00
gguf_writer.py     | Add support for DeepseekV2ForCausalLM (#7519)                                        | 2024-05-28 17:07:05 +02:00
gguf.py            | gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)            | 2023-11-11 08:04:50 +03:00
lazy.py            | convert-hf : support direct Q8_0 conversion (#7234)                                  | 2024-05-13 14:10:51 -04:00
py.typed           | convert : various script cleanups/fixes + merges and special token handling (#2842)  | 2023-08-30 11:25:50 +03:00
quants.py          | gguf-py : fix and simplify quantized shape round-trip (#7483)                        | 2024-05-25 11:11:48 +10:00
tensor_mapping.py  | Add support for DeepseekV2ForCausalLM (#7519)                                        | 2024-05-28 17:07:05 +02:00
vocab.py           | Move convert.py to examples/convert-legacy-llama.py (#7430)                          | 2024-05-30 21:40:00 +10:00