# Examples for input embedding directly

## Requirement

Build `libembdinput.so` by running the following command in the main directory (`../../`):

```sh
make
```
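All three examples below drive `libembdinput.so` from Python. As a minimal sketch of loading the shared library directly with `ctypes`, assuming exports along the lines of `create_mymodel`, `eval_string`, and `sampling` (these names, signatures, and the model path are assumptions for illustration; `embd-input.h` and `embd_input.py` are the authoritative references):

```python
import ctypes

# Minimal ctypes sketch; embd_input.py is the real wrapper used by the
# examples. Function names/signatures here are assumptions -- check
# embd-input.h before relying on them.
lib = ctypes.CDLL("./libembdinput.so")

# Declare types so pointers are not truncated on 64-bit systems.
lib.create_mymodel.restype = ctypes.c_void_p
lib.eval_string.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
lib.sampling.argtypes = [ctypes.c_void_p]
lib.sampling.restype = ctypes.c_char_p

# argc/argv-style arguments forwarded to the model constructor
# (the model path is a placeholder).
argv = [b"main", b"-m", b"./models/ggml-vicuna-13b-v0-q4_1.bin"]
c_argv = (ctypes.c_char_p * len(argv))(*argv)
model = lib.create_mymodel(len(argv), c_argv)

lib.eval_string(model, b"Hello, ")   # feed a text prompt
print(lib.sampling(model).decode())  # sample one round of output
```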
## LLaVA example (llava.py)
- Obtain the LLaVA model (following https://github.com/haotian-liu/LLaVA/ , use https://huggingface.co/liuhaotian/LLaVA-13b-delta-v1-1/).
- Convert it to ggml format.
- Extract `llava_projection.pth` from `pytorch_model-00003-of-00003.bin`:
  ```python
  import torch

  # pytorch_model-00003-of-00003.bin is the shard of the LLaVA-13B
  # checkpoint that contains the multimodal projector weights.
  bin_path = "../LLaVA-13b-delta-v1-1/pytorch_model-00003-of-00003.bin"
  pth_path = "./examples/embd-input/llava_projection.pth"

  dic = torch.load(bin_path)
  # Keep only the projector tensors and save them to a small .pth file.
  used_key = ["model.mm_projector.weight", "model.mm_projector.bias"]
  torch.save({k: dic[k] for k in used_key}, pth_path)
  ```
- Check the paths of the LLaVA model and `llava_projection.pth` in `llava.py`.
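To sanity-check the extraction step, a small sketch that reloads `llava_projection.pth` and prints its contents (this relies only on the two keys saved above; the expected shapes are noted as a hint, not a guarantee):

```python
import torch

proj = torch.load("./examples/embd-input/llava_projection.pth")
for name, tensor in proj.items():
    # Expect model.mm_projector.weight and model.mm_projector.bias;
    # the weight shape should be (llama_hidden_size, clip_feature_size).
    print(f"{name}: shape={tuple(tensor.shape)}, dtype={tensor.dtype}")
```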
## PandaGPT example (panda_gpt.py)
- Obtain the PandaGPT LoRA model from https://github.com/yxuansu/PandaGPT. Rename the file to `adapter_model.bin`. Use `convert-lora-to-ggml.py` to convert it to ggml format (a sketch of the invocation follows this list). The `adapter_config.json` is:
  ```json
  {
    "peft_type": "LORA",
    "fan_in_fan_out": false,
    "bias": null,
    "modules_to_save": null,
    "r": 32,
    "lora_alpha": 32,
    "lora_dropout": 0.1,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"]
  }
  ```
- Prepare the `vicuna` v0 model.
- Obtain the ImageBind model.
- Clone the PandaGPT source:

  ```sh
  git clone https://github.com/yxuansu/PandaGPT
  ```

- Install the requirements of PandaGPT (see the sketch after this list).
- Check the paths of the PandaGPT source, ImageBind model, LoRA model, and vicuna model in `panda_gpt.py`.
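A hedged sketch of the two shell steps above; the exact CLI of `convert-lora-to-ggml.py` and the presence of a `requirements.txt` in the PandaGPT repo are assumptions, so verify both against the versions you have:

```sh
# Convert the renamed LoRA checkpoint to ggml format. The script is assumed
# to take the directory holding adapter_model.bin and adapter_config.json.
python convert-lora-to-ggml.py path/to/panda_gpt_lora

# Install PandaGPT's dependencies, assuming the repo ships a requirements.txt.
pip install -r PandaGPT/requirements.txt
```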
## MiniGPT-4 example (minigpt4.py)

- Obtain the MiniGPT-4 model from https://github.com/Vision-CAIR/MiniGPT-4/ and put it in `embd-input`.
- Clone the MiniGPT-4 source:

  ```sh
  git clone https://github.com/Vision-CAIR/MiniGPT-4/
  ```

- Install the requirements of MiniGPT-4.
- Prepare the `vicuna` v0 model.
- Check the paths of the MiniGPT-4 source, MiniGPT-4 model, and vicuna model in `minigpt4.py`.