# LLaVA
Currently this implementation supports llava-v1.5 variants. The pre-converted 7b and 13b models are available. After the API is confirmed, more models will be supported / uploaded.
## Usage
Build with CMake, or run `make llava-cli` to build it.
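For reference, one possible out-of-tree CMake build of the same target might look like the sketch below (adjust generator and options to your setup; with CMake the binary typically ends up under `build/bin/` rather than the repository root):

```sh
# CMake build sketch: configure in a build/ directory,
# then build only the llava-cli target.
mkdir build && cd build
cmake ..
cmake --build . --config Release --target llava-cli
```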
After building, run `./llava-cli` to see the usage. For example:
```sh
./llava-cli -m ../llava-v1.5-7b/ggml-model-f16.gguf --mmproj ../llava-v1.5-7b/mmproj-model-f16.gguf --image path/to/an/image.jpg
```
**note**: A lower temperature like 0.1 is recommended for better quality; add `--temp 0.1` to the command to do so.
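For instance, a full invocation with a lower temperature and an explicit prompt could look like the following (the prompt text is only an illustrative placeholder):

```sh
# Same model files as above, with a lower temperature and a custom prompt.
./llava-cli -m ../llava-v1.5-7b/ggml-model-f16.gguf \
    --mmproj ../llava-v1.5-7b/mmproj-model-f16.gguf \
    --image path/to/an/image.jpg \
    --temp 0.1 \
    -p "Describe the image in as much detail as possible."
```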
## LLaVA 1.5
- Clone a LLaVA and a CLIP model (available options). For example:

  ```sh
  git clone https://huggingface.co/liuhaotian/llava-v1.5-7b
  git clone https://huggingface.co/openai/clip-vit-large-patch14-336
  ```
- Install the required Python packages:

  ```sh
  pip install -r examples/llava/requirements.txt
  ```
- Use `llava-surgery.py` to split the LLaVA model into its LLaMA and multimodal projector constituents:

  ```sh
  python ./examples/llava/llava-surgery.py -m ../llava-v1.5-7b
  ```
- Use `convert-image-encoder-to-gguf.py` to convert the LLaVA image encoder to GGUF:

  ```sh
  python ./examples/llava/convert-image-encoder-to-gguf.py -m ../clip-vit-large-patch14-336 --llava-projector ../llava-v1.5-7b/llava.projector --output-dir ../llava-v1.5-7b
  ```
- Use `convert.py` to convert the LLaMA part of LLaVA to GGUF:

  ```sh
  python ./convert.py ../llava-v1.5-7b
  ```
Now both the LLaMA part and the image encoder are in the `llava-v1.5-7b` directory.
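Assuming the conversion scripts used their default output names (the same file names as in the usage example above), the directory should now contain both GGUF files, which you can check before running `llava-cli`:

```sh
# The converted language model and the multimodal projector, respectively.
ls ../llava-v1.5-7b/ggml-model-f16.gguf ../llava-v1.5-7b/mmproj-model-f16.gguf
```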
## LLaVA 1.6
- Use `llava-surgery-v2.py`
- TODO: add detailed instructions
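Until the detailed instructions land, a reasonable starting point is to mirror the 1.5 workflow. The sketch below assumes `llava-surgery-v2.py` accepts the same `-m` model-directory flag as `llava-surgery.py`, and `../llava-v1.6-7b` is a hypothetical local checkout of a LLaVA 1.6 model; check the script's `--help` for the actual options:

```sh
# Assumed invocation, analogous to the LLaVA 1.5 surgery step.
python ./examples/llava/llava-surgery-v2.py -m ../llava-v1.6-7b
```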
## TODO
- Support non-CPU backend for the image encoding part.
- Support different sampling methods.
- Support more model variants.