LLaVA

Currently this implementation supports llava-v1.5 variants.

Pre-converted 7b and 13b models are available.

After the API is confirmed, more models will be supported and uploaded.

Usage

Build with CMake, or run make llava to build just this example.
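For example, a typical CMake build (a minimal sketch; adjust the generator and flags for your platform):

# from the llama.cpp repository root
mkdir build
cd build
cmake ..
cmake --build . --config Release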

After building, run ./llava to see the usage. For example:

./llava -m llava-v1.5-7b/ggml-model-q5_k.gguf --mmproj llava-v1.5-7b/mmproj-model-f16.gguf --image path/to/an/image.jpg

Note: a lower temperature like 0.1 is recommended for better quality; add --temp 0.1 to the command to do so.
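For example, the same invocation as above with the lower temperature:

./llava -m llava-v1.5-7b/ggml-model-q5_k.gguf --mmproj llava-v1.5-7b/mmproj-model-f16.gguf --image path/to/an/image.jpg --temp 0.1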

Model conversion

  1. Clone llava-v1.5-7b and clip-vit-large-patch14-336 locally:
git clone https://huggingface.co/liuhaotian/llava-v1.5-7b

git clone https://huggingface.co/openai/clip-vit-large-patch14-336
  2. Use llava-surgery.py to split the LLaVA model into its LLaMA and multimodal projector constituents:
python ./examples/llava/llava-surgery.py -m ../llava-v1.5-7b
  3. Use convert-image-encoder-to-gguf.py to convert the LLaVA image encoder to GGUF:
python ./examples/llava/convert-image-encoder-to-gguf.py -m ../clip-vit-large-patch14-336 --llava-projector ../llava-v1.5-7b/llava.projector --output-dir ../llava-v1.5-7b
  4. Use convert.py to convert the LLaMA part of LLaVA to GGUF:
python ./convert.py ../llava-v1.5-7b

Now both the LLaMA part and the image encoder are in the llava-v1.5-7b directory.
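Note that the usage example above refers to a q5_k model, while the conversion steps produce an f16 GGUF. Assuming convert.py wrote ggml-model-f16.gguf (the file name is an assumption), llama.cpp's quantize tool can produce the smaller model, for example:

# assumes convert.py produced ggml-model-f16.gguf; output name is just an example
./quantize ../llava-v1.5-7b/ggml-model-f16.gguf ../llava-v1.5-7b/ggml-model-q5_k.gguf q5_k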

TODO

  • Support server mode.
  • Support non-CPU backend for the image encoding part.
  • Support different sampling methods.
  • Support more model variants.