llava : add requirements.txt and update README.md (#5428)

* llava: add requirements.txt and update README.md

This commit adds a `requirements.txt` file to the `examples/llava`
directory. This file lists the Python packages required to run the
scripts in that directory.

The motivation for this is to make it easier for users to run the scripts
in `examples/llava`, and to avoid users running into missing-package
errors when the required packages are not installed on their system.
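As a quick illustration of the failure mode this avoids, the snippet below checks that the pinned packages are importable before the scripts are run. It is a minimal sketch, not part of this commit: the package names mirror the new `requirements.txt`, and the `check_packages` helper is hypothetical.

```python
# Minimal sketch (not part of this commit): verify that the packages pinned
# in examples/llava/requirements.txt are installed before running the scripts.
# The check_packages helper is illustrative only.
from importlib.metadata import version, PackageNotFoundError

def check_packages(names):
    """Print the installed version of each package, collecting any that are missing."""
    missing = []
    for name in names:
        try:
            print(f"{name} {version(name)} is installed")
        except PackageNotFoundError:
            missing.append(name)
    if missing:
        print("missing packages:", ", ".join(missing))
        print("fix with: pip install -r examples/llava/requirements.txt")

# Package names mirror examples/llava/requirements.txt from this commit.
check_packages(["pillow", "torch"])
```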

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>

* llava: fix typo in llava-surgery.py output

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>

---------

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
Daniel Bevenius, 2024-02-09 14:00:59 +01:00, committed by GitHub
parent 7c777fcd5d, commit e00d2a62dd
3 changed files with 13 additions and 4 deletions

--- a/examples/llava/README.md
+++ b/examples/llava/README.md

@@ -29,19 +29,25 @@ git clone https://huggingface.co/liuhaotian/llava-v1.5-7b
 git clone https://huggingface.co/openai/clip-vit-large-patch14-336
 ```
 
-2. Use `llava-surgery.py` to split the LLaVA model to LLaMA and multimodel projector constituents:
+2. Install the required Python packages:
+
+```sh
+pip install -r examples/llava/requirements.txt
+```
+
+3. Use `llava-surgery.py` to split the LLaVA model to LLaMA and multimodel projector constituents:
 
 ```sh
 python ./examples/llava/llava-surgery.py -m ../llava-v1.5-7b
 ```
 
-3. Use `convert-image-encoder-to-gguf.py` to convert the LLaVA image encoder to GGUF:
+4. Use `convert-image-encoder-to-gguf.py` to convert the LLaVA image encoder to GGUF:
 
 ```sh
 python ./examples/llava/convert-image-encoder-to-gguf.py -m ../clip-vit-large-patch14-336 --llava-projector ../llava-v1.5-7b/llava.projector --output-dir ../llava-v1.5-7b
 ```
 
-4. Use `convert.py` to convert the LLaMA part of LLaVA to GGUF:
+5. Use `convert.py` to convert the LLaMA part of LLaVA to GGUF:
 
 ```sh
 python ./convert.py ../llava-v1.5-7b

--- a/examples/llava/llava-surgery.py
+++ b/examples/llava/llava-surgery.py

@@ -42,5 +42,5 @@ if len(clip_tensors) > 0:
     torch.save(checkpoint, path)
 
 print("Done!")
-print(f"Now you can convert {args.model} to a a regular LLaMA GGUF file.")
+print(f"Now you can convert {args.model} to a regular LLaMA GGUF file.")
 print(f"Also, use {args.model}/llava.projector to prepare a llava-encoder.gguf file.")

--- /dev/null
+++ b/examples/llava/requirements.txt

@@ -0,0 +1,3 @@
+-r ../../requirements/requirements-convert.txt
+pillow~=10.2.0
+torch~=2.1.1
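A note on the version pins: `~=` is PEP 440's compatible-release operator, so `pillow~=10.2.0` accepts any 10.2.x at or above 10.2.0 but excludes 10.3.0, and likewise `torch~=2.1.1` accepts 2.1.x at or above 2.1.1 but excludes 2.2.0. The sketch below demonstrates these semantics with the third-party `packaging` library; it is illustrative only and not part of the commit.

```python
# Demonstrate the compatible-release (~=) pins used in requirements.txt,
# using the packaging library (the same machinery pip relies on).
from packaging.specifiers import SpecifierSet

pillow_spec = SpecifierSet("~=10.2.0")  # equivalent to >=10.2.0, ==10.2.*
torch_spec = SpecifierSet("~=2.1.1")    # equivalent to >=2.1.1, ==2.1.*

print("10.2.1" in pillow_spec)  # True  - patch releases are accepted
print("10.3.0" in pillow_spec)  # False - next minor release is excluded
print("2.1.2" in torch_spec)    # True
print("2.2.0" in torch_spec)    # False
```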