From 1af511fc22cba4959dd8bced5501df9e8af6ddf9 Mon Sep 17 00:00:00 2001
From: Galunid
Date: Fri, 31 May 2024 10:09:20 +0200
Subject: [PATCH] Add convert.py removal to hot topics (#7662)

---
 README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 60e7aaf2c..eeeb64919 100644
--- a/README.md
+++ b/README.md
@@ -22,7 +22,8 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ### Hot topics
 
-- **Initial Flash-Attention support: https://github.com/ggerganov/llama.cpp/pull/5021**
+- **`convert.py` has been deprecated and moved to `examples/convert-legacy-llama.py`, please use `convert-hf-to-gguf.py`** https://github.com/ggerganov/llama.cpp/pull/7430
+- Initial Flash-Attention support: https://github.com/ggerganov/llama.cpp/pull/5021
 - BPE pre-tokenization support has been added: https://github.com/ggerganov/llama.cpp/pull/6920
 - MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387
 - Model sharding instructions using `gguf-split` https://github.com/ggerganov/llama.cpp/discussions/6404
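
As a usage sketch of the migration the new hot-topics entry describes: the model directory and output filename below are hypothetical placeholders, and the `--outfile`/`--outtype` options are assumed to behave as in the script's `--help` around the time of this patch.

```bash
# Convert a local Hugging Face checkpoint directory to GGUF with the
# replacement script (hypothetical paths):
python convert-hf-to-gguf.py models/my-hf-model/ --outfile my-model-f16.gguf --outtype f16

# The deprecated convert.py now lives at its new location for old-style
# (non-HF) LLaMA checkpoints:
python examples/convert-legacy-llama.py models/my-llama-model/
```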