LLaVA

Currently this implementation supports llava-v1.5 variants.

The pre-converted 7b and 13b models are available.

Once the API is confirmed, more models will be supported and uploaded.

Usage

Build with cmake or run make llava-cli to build it.
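
A CMake build could look like the following sketch; the llava-cli target name and the build/bin output location are assumptions based on a typical llama.cpp checkout, so adjust them if your tree differs:

cmake -B build
cmake --build build --config Release --target llava-cli
./build/bin/llava-cli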

After building, run: ./llava-cli to see the usage. For example:

./llava-cli -m ../llava-v1.5-7b/ggml-model-f16.gguf --mmproj ../llava-v1.5-7b/mmproj-model-f16.gguf --image path/to/an/image.jpg

Note: a lower temperature such as 0.1 is recommended for better quality; add --temp 0.1 to the command to do so.
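
For example, a complete invocation with the lower temperature and an explicit prompt (the prompt text here is only an illustration) might look like:

./llava-cli -m ../llava-v1.5-7b/ggml-model-f16.gguf --mmproj ../llava-v1.5-7b/mmproj-model-f16.gguf --image path/to/an/image.jpg --temp 0.1 -p "Describe the image in detail."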

Model conversion

  1. Clone llava-v1.5-7b and clip-vit-large-patch14-336 locally:
git clone https://huggingface.co/liuhaotian/llava-v1.5-7b
git clone https://huggingface.co/openai/clip-vit-large-patch14-336
  2. Install the required Python packages:
pip install -r examples/llava/requirements.txt
  3. Use llava-surgery.py to split the LLaVA model into its LLaMA and multimodal projector constituents:
python ./examples/llava/llava-surgery.py -m ../llava-v1.5-7b
  4. Use convert-image-encoder-to-gguf.py to convert the LLaVA image encoder to GGUF:
python ./examples/llava/convert-image-encoder-to-gguf.py -m ../clip-vit-large-patch14-336 --llava-projector ../llava-v1.5-7b/llava.projector --output-dir ../llava-v1.5-7b
  5. Use convert.py to convert the LLaMA part of LLaVA to GGUF:
python ./convert.py ../llava-v1.5-7b

Now both the LLaMA part and the image encoder are in the llava-v1.5-7b directory.
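
As a quick sanity check, the directory should now contain the files referenced by the commands above (exact names can vary with the conversion options used):

ls ../llava-v1.5-7b
# expected among the original model files:
#   llava.projector        (from llava-surgery.py)
#   mmproj-model-f16.gguf  (from convert-image-encoder-to-gguf.py)
#   ggml-model-f16.gguf    (from convert.py)

The two GGUF files are the ones passed to ./llava-cli via -m and --mmproj in the Usage section above.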

TODO

  • Support non-CPU backend for the image encoding part.
  • Support different sampling methods.
  • Support more model variants.