llama.cpp/gguf-py/scripts
File                      Last commit                                                          Date
__init__.py               convert : support models with multiple chat templates (#6588)       2024-04-18 14:49:01 +03:00
gguf-convert-endian.py    convert.py : add python logging instead of print() (#6511)          2024-05-03 22:36:41 +03:00
gguf-dump.py              convert-hf : save memory with lazy evaluation (#7075)               2024-05-08 18:16:38 -04:00
gguf-new-metadata.py      gguf-py : Add tokenizer.ggml.pre to gguf-new-metadata.py (#7627)    2024-05-30 02:10:40 +02:00
gguf-set-metadata.py      convert.py : add python logging instead of print() (#6511)          2024-05-03 22:36:41 +03:00
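
These scripts sit on top of the reader/writer classes in the gguf package itself. As a rough illustration of what gguf-dump.py does, the sketch below walks a GGUF file's key-value metadata and tensor records using gguf.GGUFReader. This is a minimal sketch, not the script's actual code: the file path is a placeholder, and the attribute names (fields, tensors, types, tensor_type) are assumptions based on the gguf-py reader API.

```python
# Minimal sketch: inspect a GGUF file's metadata and tensors with gguf-py.
# Assumes the gguf package is installed (pip install gguf) and that
# "model.gguf" is a placeholder path to an existing GGUF file.
from gguf import GGUFReader

reader = GGUFReader("model.gguf")

# Key-value metadata fields (architecture, tokenizer settings, etc.)
for name, field in reader.fields.items():
    # field.types is assumed to be a list of GGUFValueType enum members
    print(name, [t.name for t in field.types])

# Tensor records: name, shape, and quantization type
for tensor in reader.tensors:
    print(tensor.name, list(tensor.shape), tensor.tensor_type.name)
```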