## Convert llama2.c model to ggml

This example reads weights from project [llama2.c](https://github.com/karpathy/llama2.c) and saves them in ggml compatible format. The vocabulary is copied from `models/7B/ggml-model-f16.gguf` by default (see `--copy-vocab-from-model` below).

To convert a model, first download the model checkpoints from the [llama2.c](https://github.com/karpathy/llama2.c) project (an example fetch is shown below), then build llama.cpp, which includes this converter:

`$ make -j`
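
For example, the small `stories` checkpoints are hosted on Hugging Face at [karpathy/tinyllamas](https://huggingface.co/karpathy/tinyllamas); a minimal sketch, assuming the file name and `resolve/main` URL layout of that repository:

```sh
# Fetch the 42M-parameter llama2.c checkpoint (assumed path on Hugging Face)
wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin
```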

After successful compilation, the following usage options are available:

```
usage: ./llama-convert-llama2c-to-ggml [options]

options:
  -h, --help                       show this help message and exit
  --copy-vocab-from-model FNAME    path of gguf llama model or llama2.c vocabulary from which to copy vocab (default 'models/7B/ggml-model-f16.gguf')
  --llama2c-model FNAME            [REQUIRED] model path from which to load Karpathy's llama2.c model
  --llama2c-output-model FNAME     model path to save the converted llama2.c model (default 'ak_llama_model.bin')
```

An example command using a model from [karpathy/tinyllamas](https://huggingface.co/karpathy/tinyllamas) is as follows:

`$ ./llama-convert-llama2c-to-ggml --copy-vocab-from-model llama-2-7b-chat.gguf.q2_K.bin --llama2c-model stories42M.bin --llama2c-output-model stories42M.gguf.bin`

Note: The vocabulary for `stories260K.bin` should be its own tokenizer `tok512.bin` found in [karpathy/tinyllamas/stories260K](https://huggingface.co/karpathy/tinyllamas/tree/main/stories260K).
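
In that case the conversion command might look like this (a sketch, assuming `stories260K.bin` and `tok512.bin` have already been downloaded into the current directory):

```sh
# Use the llama2.c tokenizer tok512.bin as the vocab source for the 260K model
./llama-convert-llama2c-to-ggml \
  --copy-vocab-from-model tok512.bin \
  --llama2c-model stories260K.bin \
  --llama2c-output-model stories260K.gguf.bin
```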

Now you can use the model with a command like:

`$ ./llama-cli -m stories42M.gguf.bin -p "One day, Lily met a Shoggoth" -n 500 -c 256`