llama.cpp/gguf-py/gguf

Latest commit: 8956543c09 by Francis Couture-Harpin — "convert_hf : simplify modify_tensors for InternLM2" (2024-07-15 02:48:24 -04:00)
  * convert_lora : lazy conversion
  * llama : load and use alpha from LoRA adapters
__init__.py        — convert-hf : support direct Q8_0 conversion (#7234) — 2024-05-13 14:10:51 -04:00
constants.py       — llama : support glm3 and glm4 (#8031) — 2024-07-07 15:52:10 +03:00
gguf_reader.py     — py : type-check all Python scripts with Pyright (#8341) — 2024-07-07 15:04:39 -04:00
gguf_writer.py     — llama : add OpenELM support (#7359) — 2024-07-04 20:14:21 +03:00
gguf.py            — gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) — 2023-11-11 08:04:50 +03:00
lazy.py            — gguf-py : do not use internal numpy types (#7472) — 2024-07-09 01:04:49 -04:00
py.typed           — convert : various script cleanups/fixes + merges and special token handling (#2842) — 2023-08-30 11:25:50 +03:00
quants.py          — convert_hf : simplify modify_tensors for InternLM2 — 2024-07-15 02:48:24 -04:00
tensor_mapping.py  — llama : support glm3 and glm4 (#8031) — 2024-07-07 15:52:10 +03:00
vocab.py           — Move convert.py to examples/convert-legacy-llama.py (#7430) — 2024-05-30 21:40:00 +10:00
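For orientation, the `gguf_reader.py` module listed above reads files laid out per the GGUF specification, which begins with a fixed little-endian header: a 4-byte `GGUF` magic, a uint32 format version, a uint64 tensor count, and a uint64 metadata key/value count. A minimal, standalone sketch of parsing just that header (the helper name `parse_gguf_header` is illustrative, not part of the gguf-py API):

```python
import struct

def parse_gguf_header(data: bytes) -> dict:
    # GGUF header layout (little-endian): 4-byte magic "GGUF",
    # uint32 version, uint64 tensor count, uint64 metadata KV count.
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensor_count": n_tensors, "kv_count": n_kv}

# Synthetic header bytes for illustration: version 3, 2 tensors, 5 KV entries.
sample = struct.pack("<4sIQQ", b"GGUF", 3, 2, 5)
print(parse_gguf_header(sample))
```

In practice one would use `gguf.GGUFReader` from this package, which additionally decodes the metadata key/value pairs and tensor info records that follow the header.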