llama.cpp/examples/llava
JidongZhang-THU 15606309a0
llava : add MobileVLM support (#5132)
* New features:
    1. Sum_Rows:
        fix CUDA kernel overflow
        fix block shape error when nrows is too big
    2. Im2Col:
        Support batch in CUDA
        Support f32 to f32 on both CPU and CUDA
    3. DepthWiseConv:
        Supported via Im2Col and MulMat
    4. Pool_2d:
        Support avg pooling in CUDA
    5. HardSigmoid:
        Implemented in CUDA
    6. HardSwish:
        Implemented in CUDA
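
    For reference, these two activations follow the standard definitions (the commit itself does not spell them out): hardsigmoid(x) = min(1, max(0, (x + 3) / 6)) and hardswish(x) = x * hardsigmoid(x).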

* fix tabs instead of spaces

* code clean

* CUDA POOL2D

* ADD POOL2D test case in test-backend-ops.cpp

* code clean

* fix pool2d_kernel

nits

* fix bug in pool2d kernel

* fix avg pooling, count_include_pad

nits

* test-backend-ops : add more pool_2d tests

* cuda : fix warnings and formatting

* ggml : check types in release builds too in pool_2d

* test-backend-ops : remove f16 pool_2d tests

* cuda : more style fixes

* Add assert in ggml_cuda_op_pool2d

* pool2d float padding fallback

* test-backend-ops : add dst_type to im2col

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-01-31 15:10:15 +02:00
android llava : MobileVLM support (#4954) 2024-01-22 15:09:35 +02:00
clip.cpp llava : support for Yi-VL and fix for mobileVLM (#5093) 2024-01-27 17:09:18 +02:00
clip.h clip : refactor + bug fixes (#4696) 2023-12-30 23:24:42 +02:00
CMakeLists.txt clip : enable gpu backend (#4205) 2023-12-29 18:52:15 +02:00
convert-image-encoder-to-gguf.py llava : MobileVLM support (#4954) 2024-01-22 15:09:35 +02:00
llava-cli.cpp llava : support for Yi-VL and fix for mobileVLM (#5093) 2024-01-27 17:09:18 +02:00
llava-surgery.py multimodal : add BakLLaVA conversion support (#3682) 2023-10-19 19:40:41 +03:00
llava.cpp clip : refactor + bug fixes (#4696) 2023-12-30 23:24:42 +02:00
llava.h llava : expose as a shared library for downstream projects (#3613) 2023-11-07 00:36:23 +03:00
MobileVLM-README.md llava : add MobileVLM support (#5132) 2024-01-31 15:10:15 +02:00
README.md llava : expose as a shared library for downstream projects (#3613) 2023-11-07 00:36:23 +03:00

LLaVA

Currently this implementation supports llava-v1.5 variants.

Pre-converted 7B and 13B models are available.

After the API is confirmed, more models will be supported / uploaded.

Usage

Build with CMake, or run make llava-cli.
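
For the CMake route, a minimal sketch (assuming a llava-cli target matching the make target above):

mkdir build && cd build
cmake .. && cmake --build . --target llava-cli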

After building, run: ./llava-cli to see the usage. For example:

./llava-cli -m llava-v1.5-7b/ggml-model-q5_k.gguf --mmproj llava-v1.5-7b/mmproj-model-f16.gguf --image path/to/an/image.jpg

Note: a lower temperature such as 0.1 is recommended for better quality; add --temp 0.1 to the command to do so.
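
For example, the earlier command with the lower temperature applied:

./llava-cli -m llava-v1.5-7b/ggml-model-q5_k.gguf --mmproj llava-v1.5-7b/mmproj-model-f16.gguf --image path/to/an/image.jpg --temp 0.1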

Model conversion

  1. Clone llava-v1.5-7b and clip-vit-large-patch14-336 locally:
git clone https://huggingface.co/liuhaotian/llava-v1.5-7b

git clone https://huggingface.co/openai/clip-vit-large-patch14-336
  2. Use llava-surgery.py to split the LLaVA model into its LLaMA and multimodal projector constituents:
python ./examples/llava/llava-surgery.py -m ../llava-v1.5-7b
  3. Use convert-image-encoder-to-gguf.py to convert the LLaVA image encoder to GGUF:
python ./examples/llava/convert-image-encoder-to-gguf.py -m ../clip-vit-large-patch14-336 --llava-projector ../llava-v1.5-7b/llava.projector --output-dir ../llava-v1.5-7b
  4. Use convert.py to convert the LLaMA part of LLaVA to GGUF:
python ./convert.py ../llava-v1.5-7b

Now both the LLaMA part and the image encoder are in the llava-v1.5-7b directory.
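
From here you can optionally quantize the LLaMA part and run the usage example against the converted files. A sketch, assuming convert.py wrote ggml-model-f16.gguf (the exact file name depends on the source model's dtype):

./quantize ../llava-v1.5-7b/ggml-model-f16.gguf ../llava-v1.5-7b/ggml-model-q5_k.gguf Q5_K_M
./llava-cli -m ../llava-v1.5-7b/ggml-model-q5_k.gguf --mmproj ../llava-v1.5-7b/mmproj-model-f16.gguf --image path/to/an/image.jpg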

TODO

  • Support non-CPU backend for the image encoding part.
  • Support different sampling methods.
  • Support more model variants.