llama.cpp/examples
Latest commit: 230d46c723 by Olivier Chafik, 2023-08-27 17:13:31 +03:00
examples : update llama2.c converter to read vocab and write models in GGUF format (#2751)
* llama2.c: direct gguf output (WIP)
* Simplify vector building logic
* llama2.c gguf conversion: fix token types in converter
* llama2.c: support copying vocab from a llama gguf model file
* llama2.c: update default path for vocab model + readme
* llama2.c: use defines for gguf keys
* llama2.c: escape whitespaces w/ U+2581 in vocab converter the llama.cpp way (see the sketch below)
* llama2.c converter: cleanups + take n_ff from config
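For context on the U+2581 bullet above: llama.cpp follows the SentencePiece convention of storing vocab tokens with the word-boundary marker U+2581 ("▁", LOWER ONE EIGHTH BLOCK) in place of a literal space. The following is a minimal C++ sketch of that substitution, not the converter's actual code; the helper name is illustrative only.

```cpp
#include <iostream>
#include <string>

// Sketch (hypothetical helper, not the converter's actual code): replace each
// ASCII space in a token with U+2581, whose UTF-8 encoding is 0xE2 0x96 0x81.
static std::string escape_whitespace(const std::string & text) {
    std::string out;
    out.reserve(text.size());
    for (char c : text) {
        if (c == ' ') {
            out += "\xe2\x96\x81"; // U+2581 "▁"
        } else {
            out += c;
        }
    }
    return out;
}

int main() {
    // Prints "▁hello▁world"
    std::cout << escape_whitespace(" hello world") << "\n";
    return 0;
}
```

Unescaping a token for display is simply the reverse substitution.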
| Name | Last commit | Date |
| --- | --- | --- |
| baby-llama/ | Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384) | 2023-07-25 18:35:53 +03:00 |
| beam_search/ | llama : more tokenizer fixes (#2810) | 2023-08-27 14:19:19 +03:00 |
| benchmark/ | cmake : install targets (#2256) | 2023-07-19 10:01:11 +03:00 |
| convert-llama2c-to-ggml/ | examples : update llama2.c converter to read vocab and write models in GGUF format (#2751) | 2023-08-27 17:13:31 +03:00 |
| embd-input/ | llama : more tokenizer fixes (#2810) | 2023-08-27 14:19:19 +03:00 |
| embedding/ | llama : more tokenizer fixes (#2810) | 2023-08-27 14:19:19 +03:00 |
| gguf/ | gguf : add 64-bit support (GGUF v2) (#2821) | 2023-08-27 14:19:54 +03:00 |
| gptneox-wip/ | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| jeopardy/ | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00 |
| llama-bench/ | llama-bench : add model sizes (#2771) | 2023-08-25 15:16:19 +02:00 |
| main/ | llama : more tokenizer fixes (#2810) | 2023-08-27 14:19:19 +03:00 |
| metal/ | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| perplexity/ | llama : speedup tokenization (#2831) | 2023-08-27 16:50:33 +03:00 |
| quantize/ | Fix values shown in the quantize tool help (#2735) | 2023-08-23 12:57:12 +03:00 |
| quantize-stats/ | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| save-load-state/ | llama : more tokenizer fixes (#2810) | 2023-08-27 14:19:19 +03:00 |
| server/ | llama : more tokenizer fixes (#2810) | 2023-08-27 14:19:19 +03:00 |
| simple/ | llama : more tokenizer fixes (#2810) | 2023-08-27 14:19:19 +03:00 |
| train-text-from-scratch/ | llama : more tokenizer fixes (#2810) | 2023-08-27 14:19:19 +03:00 |
| alpaca.sh | alpaca.sh : update model file name (#2074) | 2023-07-06 19:17:50 +03:00 |
| chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00 |
| chat-13B.sh | examples : read chat prompts from a template file (#1196) | 2023-05-03 20:58:11 +03:00 |
| chat-persistent.sh | chat-persistent.sh : use bracket expressions in grep (#1564) | 2023-05-24 09:16:22 +03:00 |
| chat-vicuna.sh | examples : add chat-vicuna.sh (#1854) | 2023-06-15 21:05:53 +03:00 |
| chat.sh | If n_predict == -1, generate forever | 2023-03-25 21:51:41 +02:00 |
| CMakeLists.txt | llama : add llama_beam_search() (#2267) | 2023-08-25 18:18:48 +03:00 |
| gpt4all.sh | examples : add -n to alpaca and gpt4all scripts (#706) | 2023-04-13 16:03:39 +03:00 |
| json-schema-to-grammar.py | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00 |
| llama2-13b.sh | gitignore : changes for Poetry users + chat examples (#2284) | 2023-07-21 13:53:27 +03:00 |
| llama2.sh | gitignore : changes for Poetry users + chat examples (#2284) | 2023-07-21 13:53:27 +03:00 |
| llama.vim | vim : streaming and more (#2495) | 2023-08-08 14:44:48 +03:00 |
| llm.vim | llm.vim : multiline autocompletion, get rid of "^@" (#2543) | 2023-08-08 15:07:02 +03:00 |
| make-ggml.py | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00 |
| Miku.sh | MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287) | 2023-07-21 11:13:18 +03:00 |
| reason-act.sh | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00 |
| server-llama2-13B.sh | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00 |