llama.cpp/gguf-py/gguf
Latest commit 580111d42b by postmasters:
llama : add gemma model (#5631)
There are a couple of notable things in this architecture:

1. The input and output embedding parameters are shared.
2. Key length and value length are not derived from `n_embd` (see the sketch after this list).
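
Below is a minimal sketch, assuming Gemma-7B-like hyperparameters (n_embd = 3072, 16 heads, head_dim = 256), of how both points surface in GGUF metadata via this directory's GGUFWriter; the output file name and the exact call sequence are illustrative, not taken from the actual conversion script.

```python
# Minimal sketch, not the real convert script: how the two points above
# show up in GGUF metadata written with gguf-py.
from gguf import GGUFWriter

n_embd   = 3072   # hidden size (Gemma-7B-like, assumed for illustration)
n_head   = 16
head_dim = 256    # fixed per-head size; note 16 * 256 = 4096 != n_embd

writer = GGUFWriter("gemma-sketch.gguf", "gemma")  # hypothetical output path
writer.add_embedding_length(n_embd)
writer.add_head_count(n_head)

# Point 2: key/value lengths must be written explicitly, because deriving
# them as n_embd // n_head would give 192 instead of the correct 256.
writer.add_key_length(head_dim)
writer.add_value_length(head_dim)

# Point 1: with shared input/output embeddings, a converter would write a
# single token_embd.weight tensor and no separate output.weight.

writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()
```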

More information about the models can be found at
https://ai.google.dev/gemma. GGUFs can be downloaded from
https://huggingface.co/google.
Committed on 2024-02-21 15:08:22 +02:00
File                Last commit date             Last commit
__init__.py         2023-11-11 08:04:50 +03:00   gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)
constants.py        2024-02-21 15:08:22 +02:00   llama : add gemma model (#5631)
gguf_reader.py      2024-01-26 11:10:28 +02:00   gguf : fix "general.alignment" type in gguf_reader.py (#5136)
gguf_writer.py      2024-02-15 12:21:49 -05:00   Use correct type of pooling for embedding models (#5500)
gguf.py             2023-11-11 08:04:50 +03:00   gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)
py.typed            2023-08-30 11:25:50 +03:00   convert : various script cleanups/fixes + merges and special token handling (#2842)
tensor_mapping.py   2024-02-13 12:03:53 -05:00   llama : add support for Nomic Embed (#5468)
vocab.py            2024-02-15 14:14:37 +01:00   fix(gguf-py): special tokens are no longer skipped when add_<token>_token is set to false (#5487)