llama.cpp/gguf-py/gguf
Latest commit: 7ef4254a92 by Francis Couture-Harpin (2024-06-27 02:06:28 -04:00)

ggml-quants : faster 1.625 bpw AVX2 vec_dot

Dropping the lookup table makes it match q4_0 speed.

* gguf-py : fix formatting
* llama : remove spaces on empty line
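The 1.625 bpw format referenced above stores ternary weights {-1, 0, 1} at well under two bits each. As a rough illustration of how sub-2-bit ternary packing works in general (a minimal sketch only, not the actual Q1_3/ggml byte layout or the AVX2 vec_dot path), five base-3 digits fit in one byte because 3^5 = 243 ≤ 256, giving 1.6 bits per weight:

```python
def pack_ternary(weights):
    """Pack ternary weights (-1, 0, 1) five-per-byte as base-3 digits.

    Illustrative only: ~1.6 bpw, close to the 1.625 bpw format named in
    the commit, but NOT its actual storage layout.
    """
    assert len(weights) % 5 == 0
    packed = bytearray()
    for i in range(0, len(weights), 5):
        byte = 0
        for w in reversed(weights[i:i + 5]):
            byte = byte * 3 + (w + 1)  # map {-1,0,1} -> digits {0,1,2}
        packed.append(byte)
    return bytes(packed)


def unpack_ternary(packed, n):
    """Recover the first n ternary weights from packed bytes."""
    weights = []
    for byte in packed:
        for _ in range(5):
            weights.append(byte % 3 - 1)  # map digits {0,1,2} -> {-1,0,1}
            byte //= 3
    return weights[:n]
```

A lookup-table-free decoder like this (pure integer arithmetic per trit) is the general flavor of optimization the commit message alludes to, though the real kernel operates on SIMD registers.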
File               Last commit                                                                               Date
__init__.py        convert-hf : support direct Q8_0 conversion (#7234)                                       2024-05-13 14:10:51 -04:00
constants.py       ggml-quants : 1.625 bpw ternary packing for BitNet 1.58b                                  2024-06-27 02:06:22 -04:00
gguf_reader.py     Gguf dump start data offset via --data-offset and some extra refactor (#8054)             2024-06-25 22:03:25 +10:00
gguf_writer.py     Option to split during conversion (#6942)                                                 2024-06-24 19:42:03 +10:00
gguf.py            gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)                 2023-11-11 08:04:50 +03:00
lazy.py            convert-hf : support direct Q8_0 conversion (#7234)                                       2024-05-13 14:10:51 -04:00
py.typed           convert : various script cleanups/fixes + merges and special token handling (#2842)       2023-08-30 11:25:50 +03:00
quants.py          ggml-quants : faster 1.625 bpw AVX2 vec_dot                                               2024-06-27 02:06:28 -04:00
tensor_mapping.py  gguf-py, convert-hf : model conversion support for T5 and FLAN-T5 model variants (#5763)  2024-06-24 07:06:05 +02:00
vocab.py           Move convert.py to examples/convert-legacy-llama.py (#7430)                               2024-05-30 21:40:00 +10:00