llava : add --skip-unknown to 1.6 convert.py (#5632)

This commit adds the `--skip-unknown` option to the convert.py step of the
1.6 conversion instructions and removes the saving of the updated
checkpoints in llava-surgery-v2.py, to avoid modifying possibly checked-out files.

The motivation for this change is that the same was done for 1.5
in commit fc0c8d286a ("llava :
update surgery script to not remove tensors"), so this makes the examples
more consistent.

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
Daniel Bevenius 2024-02-21 14:36:57 +01:00 committed by GitHub
parent 580111d42b
commit cc6cac08e3
2 changed files with 6 additions and 19 deletions
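For context, the net effect on the workflow is that the original pth/safetensor files are now only read, never rewritten; only the extracted projector file is written out. Below is a minimal, hypothetical sketch of that non-destructive extraction pattern (the function name `extract_projector`, the `mm_projector` name filter, and the example paths are illustrative assumptions, not the actual llava-surgery-v2.py code):

```python
import torch

def extract_projector(checkpoint_path: str, output_path: str) -> None:
    # Load the checkpoint into memory; the file on disk is never rewritten.
    checkpoint = torch.load(checkpoint_path, map_location="cpu")

    # Collect the multimodal projector tensors by name instead of deleting
    # them from the checkpoint and saving the stripped checkpoint back.
    projector = {name: t for name, t in checkpoint.items() if "mm_projector" in name}

    # Persist only the extracted tensors; the source model files stay untouched.
    torch.save(projector, output_path)

# Hypothetical usage (the shard file name is a placeholder):
# extract_projector("../llava-v1.6-vicuna-7b/pytorch_model-00003-of-00003.bin",
#                   "../llava-v1.6-vicuna-7b/llava.projector")
```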

examples/llava/README.md

@@ -63,13 +63,12 @@ Now both the LLaMA part and the image encoder is in the `llava-v1.5-7b` director
 ```console
 git clone https://huggingface.co/liuhaotian/llava-v1.6-vicuna-7b
 ```
-2) Backup your pth/safetensor model files as llava-surgery modifies them
-3) Use `llava-surgery-v2.py` which also supports llava-1.5 variants pytorch as well as safetensor models:
+2) Use `llava-surgery-v2.py` which also supports llava-1.5 variants pytorch as well as safetensor models:
 ```console
 python examples/llava/llava-surgery-v2.py -C -m ../llava-v1.6-vicuna-7b/
 ```
 - you will find a llava.projector and a llava.clip file in your model directory
-4) Copy the llava.clip file into a subdirectory (like vit), rename it to pytorch_model.bin and add a fitting vit configuration to the directory:
+3) Copy the llava.clip file into a subdirectory (like vit), rename it to pytorch_model.bin and add a fitting vit configuration to the directory:
 ```console
 mkdir vit
 cp ../llava-v1.6-vicuna-7b/llava.clip vit/pytorch_model.bin
@@ -77,18 +76,18 @@ cp ../llava-v1.6-vicuna-7b/llava.projector vit/
 curl -s -q https://huggingface.co/cmp-nct/llava-1.6-gguf/raw/main/config_vit.json -o vit/config.json
 ```
-5) Create the visual gguf model:
+4) Create the visual gguf model:
 ```console
 python ./examples/llava/convert-image-encoder-to-gguf.py -m vit --llava-projector vit/llava.projector --output-dir vit --clip-model-is-vision
 ```
 - This is similar to llava-1.5, the difference is that we tell the encoder that we are working with the pure vision model part of CLIP
-6) Then convert the model to gguf format:
+5) Then convert the model to gguf format:
 ```console
-python ./convert.py ../llava-v1.6-vicuna-7b/
+python ./convert.py ../llava-v1.6-vicuna-7b/ --skip-unknown
 ```
-7) And finally we can run the llava-cli using the 1.6 model version:
+6) And finally we can run the llava-cli using the 1.6 model version:
 ```console
 ./llava-cli -m ../llava-v1.6-vicuna-7b/ggml-model-f16.gguf --mmproj vit/mmproj-model-f16.gguf --image some-image.jpg -c 4096
 ```
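The `--skip-unknown` flag matters here because the surgery script now leaves the multimodal tensors (projector, image-newline, vision tower) inside the language-model checkpoints, so the LLaMA converter will encounter tensor names it has no mapping for. The sketch below is only a rough illustration of that skip-instead-of-fail idea, under assumed prefix names; it is not convert.py's actual implementation:

```python
# Illustrative only: how a converter might treat tensors it cannot map.
KNOWN_PREFIXES = ("model.layers.", "model.embed_tokens.", "model.norm.", "lm_head.")

def filter_tensors(tensors: dict, skip_unknown: bool) -> dict:
    kept = {}
    for name, tensor in tensors.items():
        if name.startswith(KNOWN_PREFIXES):
            kept[name] = tensor
        elif skip_unknown:
            # e.g. the projector or image-newline tensors left behind by the surgery
            print(f"skipping unknown tensor {name}")
        else:
            raise ValueError(f"unknown tensor {name}; re-run with --skip-unknown to ignore it")
    return kept
```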

examples/llava/llava-surgery-v2.py

@@ -65,9 +65,7 @@ def clean_vision_tower_from_checkpoint(checkpoint_path):
         for name in clip_tensors:
             del checkpoint[name]
-        # Save the updated checkpoint
         checkpoint_path = checkpoint_path
-        save_model(checkpoint, checkpoint_path, file_type)
         return True
     return False
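After this hunk, `clean_vision_tower_from_checkpoint` still drops the CLIP/vision-tower tensors from the in-memory dict so they can be gathered into llava.clip, but it no longer writes the stripped checkpoint back to disk. A simplified sketch of the resulting shape of the function (the `load_model` helper and the exact name filter are assumptions based on the surrounding script):

```python
def clean_vision_tower_from_checkpoint(checkpoint_path):
    # Simplified sketch of the post-change behaviour, not the full function.
    checkpoint, file_type = load_model(checkpoint_path)   # assumed helper from the script
    clip_tensors = [name for name in checkpoint if "vision_tower" in name]
    if len(clip_tensors) > 0:
        for name in clip_tensors:
            del checkpoint[name]          # removed in memory only
        # The save_model(checkpoint, checkpoint_path, file_type) call is gone,
        # so the original pth/safetensor file on disk is left untouched.
        return True
    return False
```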
@@ -152,16 +150,6 @@ for name in first_mm_tensors:
 if len(projector) > 0:
     save_model(projector, f"{args.model}/llava.projector", 'pytorch')
 
-for name in mm_tensors:
-    del last_checkpoint[name]
-for name in first_mm_tensors:
-    del first_checkpoint[name]
-
-if len(mm_tensors) > 0:
-    save_model(last_checkpoint, projector_checkpoint_path, file_type)
-if len(first_mm_tensors) > 0:
-    save_model(first_checkpoint, newline_checkpoint_path, file_type)
-
 print("Done!")
 print(f"Now you can convert {args.model} to a a regular LLaMA GGUF file.")
 print(f"Also, use {args.model}/llava.projector to prepare a llava-encoder.gguf file.")