llama.cpp/gguf-py/gguf
Latest commit 316c7faf77 by runfuture (2024-02-07 08:15:56 +02:00):
llama : add MiniCPM support (#5346)

* support minicpm arch.
* fix tab/space typo.
* convert minicpm model via convert-hf-gguf.py
* try to make tokenizer work
* fix bug for quantize minicpm
* fix for flake8 lint
* remove convert-minicpm.py
* fix for editorconfig
* correct minicpm model type (size)
* constants expanded for minicpm
* Minor change of the constant names for minicpm
File              | Last commit                                                                | Date
__init__.py       | gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)  | 2023-11-11 08:04:50 +03:00
constants.py      | llama : add MiniCPM support (#5346)                                        | 2024-02-07 08:15:56 +02:00
gguf_reader.py    | gguf : fix "general.alignment" type in gguf_reader.py (#5136)              | 2024-01-26 11:10:28 +02:00
gguf_writer.py    | llama : support InternLM2 (#5184)                                          | 2024-02-01 11:19:51 +02:00
gguf.py           | gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)  | 2023-11-11 08:04:50 +03:00
py.typed          | convert : various script cleanups/fixes + merges and special token handling (#2842) | 2023-08-30 11:25:50 +03:00
tensor_mapping.py | llama : support InternLM2 (#5184)                                          | 2024-02-01 11:19:51 +02:00
vocab.py          | py : open merges file as 'utf-8' (#4566)                                   | 2023-12-21 19:07:34 +02:00
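The gguf_reader.py module above reads existing GGUF files (the capability added in the #3981 refactor). As a minimal illustration of what that parsing involves, the sketch below decodes only the fixed GGUF file header for format version 2 and later: the 4-byte magic "GGUF", then a little-endian uint32 version, uint64 tensor count, and uint64 metadata key/value count. The function name read_gguf_header is illustrative and is not part of the gguf-py API.

```python
import struct

GGUF_MAGIC = b"GGUF"  # first four bytes of every GGUF file


def read_gguf_header(data: bytes) -> tuple[int, int, int]:
    """Parse the fixed-size GGUF header from raw bytes.

    Layout (little-endian, GGUF v2+): 4-byte magic, uint32 version,
    uint64 tensor count, uint64 metadata key/value count.
    Returns (version, n_tensors, n_kv).
    """
    if data[:4] != GGUF_MAGIC:
        raise ValueError(f"not a GGUF file (magic={data[:4]!r})")
    # Unpack the three counters that follow the magic (24 header bytes total).
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", data, 4)
    return version, n_tensors, n_kv
```

In practice only the first 24 bytes of the file are needed, e.g. `read_gguf_header(open(path, "rb").read(24))`; the real GGUFReader goes on to decode the metadata key/value pairs and tensor info records that follow the header.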