Update README.md to include steps to run cmake

Amit Kumar Jha 2024-07-17 17:20:07 +04:00 committed by GitHub
parent 1bdd8ae19f
commit ee1c6a4d89

@@ -4,6 +4,11 @@ You can also use the [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-
Note: It is synced from llama.cpp `main` every 6 hours.
Using llama-quantize requires cmake to create the executables. Install cmake as appropriate for your operating system:
https://cmake.org/download/
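A rough sketch of installing cmake from a system package manager (the package names below are assumptions for common platforms):
```bash
# Debian/Ubuntu
sudo apt-get install -y cmake
# macOS (Homebrew)
brew install cmake
# Windows (winget)
winget install Kitware.CMake
```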
Example usage:
```bash
@@ -17,6 +22,18 @@ ls ./models
ls ./models
<folder containing weights and tokenizer json>
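# (optional sketch) if the weights are not present yet, one way to fetch them is the
# huggingface-cli tool from the huggingface_hub package; the repo id below is a placeholder
huggingface-cli download <org>/<model> --local-dir ./models/<model>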
# clone the llama.cpp git repository
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# create a build directory and run cmake on the quantize example
mkdir build && cd build && cmake ../examples/quantize
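# (alternative sketch) the build files generated above can also be compiled directly
# with cmake, assuming the configure step succeeded:
cmake --build . --config Release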
# cmake writes its build files into the build directory
# go back to the repository root (llama.cpp) and run make to build the executables
cd .. && make
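# (optional sketch) sanity-check the build; the binary name/location and the --help flag
# are assumed from a mid-2024 llama.cpp Makefile build
./llama-quantize --help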
# install Python dependencies
python3 -m pip install -r requirements.txt
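# (optional sketch) the Python dependencies can instead go into a virtual environment
# to keep them isolated from the system Python
python3 -m venv .venv && source .venv/bin/activate
python3 -m pip install -r requirements.txt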