llama.cpp/examples
| Name | Last commit | Last commit date |
| --- | --- | --- |
| baby-llama/ | ggml : ggml_rope now takes a vector with positions instead of n_past | 2023-09-17 21:17:10 +03:00 |
| beam-search/ | llama : unified KV cache + batch inference API | 2023-09-18 11:08:15 +03:00 |
| benchmark/ | cmake : install targets (#2256) | 2023-07-19 10:01:11 +03:00 |
| convert-llama2c-to-ggml/ | fix some warnings from gcc and clang-tidy (#3038) | 2023-09-07 13:22:29 -04:00 |
| embd-input/ | build : do not use _GNU_SOURCE gratuitously (#2035) | 2023-09-08 15:09:21 +03:00 |
| embedding/ | examples : make n_ctx warning work again (#3066) | 2023-09-08 11:43:35 -04:00 |
| gguf/ | examples : replace fprintf to stdout with printf (#3017) | 2023-09-05 15:10:27 -04:00 |
| gptneox-wip/ | fix some warnings from gcc and clang-tidy (#3038) | 2023-09-07 13:22:29 -04:00 |
| jeopardy/ | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00 |
| llama-bench/ | sync : ggml (Metal F32 support + reduce ggml-alloc size) (#3192) | 2023-09-15 19:06:03 +03:00 |
| main/ | llama : unified KV cache + batch inference API | 2023-09-18 11:08:15 +03:00 |
| main-cmake-pkg/ | cmake : add relocatable Llama package (#2960) | 2023-09-14 20:04:40 +03:00 |
| metal/ | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| perplexity/ | llama : unified KV cache + batch inference API | 2023-09-18 11:08:15 +03:00 |
| quantize/ | fix some warnings from gcc and clang-tidy (#3038) | 2023-09-07 13:22:29 -04:00 |
| quantize-stats/ | fix some warnings from gcc and clang-tidy (#3038) | 2023-09-07 13:22:29 -04:00 |
| save-load-state/ | fix some warnings from gcc and clang-tidy (#3038) | 2023-09-07 13:22:29 -04:00 |
| server/ | fix some warnings from gcc and clang-tidy (#3038) | 2023-09-07 13:22:29 -04:00 |
| simple/ | llama : unified KV cache + batch inference API | 2023-09-18 11:08:15 +03:00 |
| speculative/ | speculative : add heuristic algorithm (#3006) | 2023-09-14 19:14:44 +03:00 |
| train-text-from-scratch/ | ggml : ggml_rope now takes a vector with positions instead of n_past | 2023-09-17 21:17:10 +03:00 |
| alpaca.sh | alpaca.sh : update model file name (#2074) | 2023-07-06 19:17:50 +03:00 |
| chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00 |
| chat-13B.sh | examples : read chat prompts from a template file (#1196) | 2023-05-03 20:58:11 +03:00 |
| chat-persistent.sh | chat-persistent.sh : use bracket expressions in grep (#1564) | 2023-05-24 09:16:22 +03:00 |
| chat-vicuna.sh | examples : add chat-vicuna.sh (#1854) | 2023-06-15 21:05:53 +03:00 |
| chat.sh | main : log file (#2748) | 2023-08-30 09:29:32 +03:00 |
| CMakeLists.txt | speculative : PoC for speeding-up inference via speculative sampling (#2926) | 2023-09-03 15:12:08 +03:00 |
| gpt4all.sh | examples : add -n to alpaca and gpt4all scripts (#706) | 2023-04-13 16:03:39 +03:00 |
| json-schema-to-grammar.py | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00 |
| llama2-13b.sh | gitignore : changes for Poetry users + chat examples (#2284) | 2023-07-21 13:53:27 +03:00 |
| llama2.sh | gitignore : changes for Poetry users + chat examples (#2284) | 2023-07-21 13:53:27 +03:00 |
| llama.vim | vim : streaming and more (#2495) | 2023-08-08 14:44:48 +03:00 |
| llm.vim | llm.vim : stop generation at multiple linebreaks, bind to \<F2\> (#2879) | 2023-08-30 09:50:55 +03:00 |
| make-ggml.py | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00 |
| Miku.sh | MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287) | 2023-07-21 11:13:18 +03:00 |
| reason-act.sh | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00 |
| server-llama2-13B.sh | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00 |