mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-12-25 19:04:35 +00:00
fix typo in readme (#5399)
Co-authored-by: Ebey Abraham <ebeyabraham@microsoft.com>
parent b906596bb7
commit 8c933b70c2
@@ -680,7 +680,7 @@ python3 -m pip install -r requirements.txt
 python3 convert.py models/mymodel/
 
 # [Optional] for models using BPE tokenizers
-python convert.py models/mymodel/ --vocabtype bpe
+python convert.py models/mymodel/ --vocab-type bpe
 
 # quantize the model to 4-bits (using Q4_K_M method)
 ./quantize ./models/mymodel/ggml-model-f16.gguf ./models/mymodel/ggml-model-Q4_K_M.gguf Q4_K_M
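Note on the change: the corrected README example spells the flag --vocab-type, which is the form the conversion script accepts; --vocabtype was a typo in the documentation only. For illustration, here is a minimal argparse sketch of a flag in that shape; the names, default, and help text are assumptions for this sketch, not the actual llama.cpp convert.py source.

    import argparse

    # Minimal sketch of an argparse flag shaped like the one the corrected
    # README command passes. Names and defaults are assumptions made for
    # illustration, not the real llama.cpp convert.py code.
    parser = argparse.ArgumentParser(description="convert a model directory to GGUF")
    parser.add_argument("model", help="path to the model directory, e.g. models/mymodel/")
    parser.add_argument("--vocab-type", default="spm",
                        help="tokenizer type to use during conversion, e.g. 'bpe'")
    args = parser.parse_args()

    # argparse maps --vocab-type to args.vocab_type (dashes become underscores)
    print(f"converting {args.model} with vocab type {args.vocab_type}")

Invoked the same way as the corrected README line, e.g. python convert.py models/mymodel/ --vocab-type bpe.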