llama.cpp/gguf-py/gguf
Latest commit 49122a873f by Xuan Son Nguyen:

gemma2: add sliding window mask (#8227)
* gemma2: add sliding window mask

* fix data_swa uninitialized

* better naming

* add co-author

Co-authored-by: Arlo Phoenix <arlo-phoenix@users.noreply.github.com>

* replace list with single tensor

* update

* llama : minor styling

* convert : add sanity check for query_pre_attn_scalar

* fix small typo in README

---------

Co-authored-by: Arlo Phoenix <arlo-phoenix@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-07-01 18:48:34 +02:00
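The change at the top of this log adds a second KQ mask for Gemma 2's sliding-window attention layers (the data_swa buffer named in the bullets above). That mask is built in C++ inside llama.cpp itself; the sketch below is only a minimal Python illustration of the masking rule, not code from the PR, and the window size, function name, and numpy formulation are all assumptions made for the example:

```python
# Minimal sketch of a causal sliding-window attention mask: query position i
# may attend only to key positions j with i - window < j <= i.
import numpy as np

def sliding_window_mask(n_tokens: int, window: int) -> np.ndarray:
    # True where attention is allowed, False where it is masked out.
    i = np.arange(n_tokens)[:, None]  # query positions
    j = np.arange(n_tokens)[None, :]  # key positions
    causal = j <= i                   # never attend to future tokens
    in_window = (i - j) < window      # never look further back than the window
    return causal & in_window

# Example: with a window of 3, token 4 sees only tokens 2, 3 and 4.
print(sliding_window_mask(6, 3).astype(int))
```

Gemma 2 alternates sliding-window and full-attention layers, which is why llama.cpp keeps a separate windowed mask alongside the usual causal one rather than reusing a single mask for every layer.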
File               Last commit message                                                                   Last commit date
__init__.py        convert-hf : support direct Q8_0 conversion (#7234)                                   2024-05-13 14:10:51 -04:00
constants.py       gemma2: add sliding window mask (#8227)                                               2024-07-01 18:48:34 +02:00
gguf_reader.py     Gguf dump start data offset via --data-offset and some extra refactor (#8054)         2024-06-25 22:03:25 +10:00
gguf_writer.py     gemma2: add sliding window mask (#8227)                                               2024-07-01 18:48:34 +02:00
gguf.py            gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)             2023-11-11 08:04:50 +03:00
lazy.py            convert-hf : support direct Q8_0 conversion (#7234)                                   2024-05-13 14:10:51 -04:00
py.typed           convert : various script cleanups/fixes + merges and special token handling (#2842)   2023-08-30 11:25:50 +03:00
quants.py          gguf-py : fix and simplify quantized shape round-trip (#7483)                         2024-05-25 11:11:48 +10:00
tensor_mapping.py  llama: Add support for Gemma2ForCausalLM (#8156)                                      2024-06-27 21:00:43 -07:00
vocab.py           Move convert.py to examples/convert-legacy-llama.py (#7430)                           2024-05-30 21:40:00 +10:00
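Two of the files above changed in the same PR: constants.py gained an `{arch}.attention.sliding_window` metadata key and gguf_writer.py a matching add_sliding_window() method, so converted Gemma 2 models carry their window size in the GGUF header. A hedged sketch of how a converter might use it, together with a check in the spirit of the "sanity check for query_pre_attn_scalar" bullet from the commit body, follows; the hyperparameter values are as in gemma-2-9b's config.json, the output filename is hypothetical, and the surrounding logic is illustrative rather than the real convert-hf script:

```python
# Illustrative only: records Gemma 2 sliding-window metadata via this
# revision of gguf-py. A real conversion also adds tensors and many more
# key/value pairs before writing the file out.
from gguf import GGUFWriter

hparams = {                      # values as in gemma-2-9b's config.json
    "hidden_size": 3584,
    "num_attention_heads": 16,
    "sliding_window": 4096,
    "query_pre_attn_scalar": 224,
}

# Sanity check in the spirit of the commit: for these models the scalar used
# to scale attention logits is expected to equal hidden_size / num_heads.
if hparams["query_pre_attn_scalar"] != hparams["hidden_size"] // hparams["num_attention_heads"]:
    raise ValueError("query_pre_attn_scalar must equal hidden_size / num_attention_heads")

writer = GGUFWriter("gemma2.gguf", arch="gemma2")   # hypothetical output path
writer.add_sliding_window(hparams["sliding_window"])  # -> gemma2.attention.sliding_window
writer.close()
```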