# llama.cpp/examples

Last commit: 2024-06-10 17:38:36 +01:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `baby-llama` | rename llama\|main -> llama-cli; consistent RPM bin prefixes | 2024-06-10 15:34:14 +01:00 |
| `batched` | Prefix all example bins w/ llama- | 2024-06-08 13:42:01 +01:00 |
| `batched-bench` | Prefix all example bins w/ llama- | 2024-06-08 13:42:01 +01:00 |
| `batched.swift` | rm bin files | 2024-06-08 14:16:32 +01:00 |
| `benchmark` | Prefix all example bins w/ llama- | 2024-06-08 13:42:01 +01:00 |
| `convert-llama2c-to-ggml` | rename llama\|main -> llama-cli; consistent RPM bin prefixes | 2024-06-10 15:34:14 +01:00 |
| `embedding` | Prefix all example bins w/ llama- | 2024-06-08 13:42:01 +01:00 |
| `eval-callback` | fix test-eval-callback | 2024-06-10 16:21:14 +01:00 |
| `export-lora` | Prefix all example bins w/ llama- | 2024-06-08 13:42:01 +01:00 |
| `finetune` | rename llama\|main -> llama-cli; consistent RPM bin prefixes | 2024-06-10 15:34:14 +01:00 |
| `gbnf-validator` | address gbnf-validator unused fread warning (switched to C++ / ifstream) | 2024-06-10 17:38:36 +01:00 |
| `gguf` | Prefix all example bins w/ llama- | 2024-06-08 13:42:01 +01:00 |
| `gguf-split` | rename llama\|main -> llama-cli; consistent RPM bin prefixes | 2024-06-10 15:34:14 +01:00 |
| `gritlm` | Prefix all example bins w/ llama- | 2024-06-08 13:42:01 +01:00 |
| `imatrix` | Merge remote-tracking branch 'origin/master' into bins | 2024-06-10 15:38:41 +01:00 |
| `infill` | Prefix all example bins w/ llama- | 2024-06-08 13:42:01 +01:00 |
| `jeopardy` | rename llama\|main -> llama-cli; consistent RPM bin prefixes | 2024-06-10 15:34:14 +01:00 |
| `llama-bench` | main: update refs -> llama | 2024-06-06 15:44:51 +01:00 |
| `llama.android` | android : module (#7502) | 2024-05-25 11:11:33 +03:00 |
| `llama.swiftui` | llama : add option to render special/control tokens (#6807) | 2024-04-21 18:36:45 +03:00 |
| `llava` | rename llama\|main -> llama-cli; consistent RPM bin prefixes | 2024-06-10 15:34:14 +01:00 |
| `lookahead` | prefix more cmake targets w/ llama- | 2024-06-08 14:05:34 +01:00 |
| `lookup` | prefix more cmake targets w/ llama- | 2024-06-08 14:05:34 +01:00 |
| `main` | more llama-cli(.exe) | 2024-06-10 16:08:06 +01:00 |
| `main-cmake-pkg` | rename: llama-cli-cmake-pkg(.exe) | 2024-06-10 16:23:45 +01:00 |
| `parallel` | prefix more cmake targets w/ llama- | 2024-06-08 14:05:34 +01:00 |
| `passkey` | prefix more cmake targets w/ llama- | 2024-06-08 14:05:34 +01:00 |
| `perplexity` | prefix more cmake targets w/ llama- | 2024-06-08 14:05:34 +01:00 |
| `quantize` | rename llama\|main -> llama-cli; consistent RPM bin prefixes | 2024-06-10 15:34:14 +01:00 |
| `quantize-stats` | prefix more cmake targets w/ llama- | 2024-06-08 14:05:34 +01:00 |
| `retrieval` | prefix more cmake targets w/ llama- | 2024-06-08 14:05:34 +01:00 |
| `rpc` | Merge remote-tracking branch 'origin/master' into bins | 2024-06-10 15:38:41 +01:00 |
| `save-load-state` | prefix more cmake targets w/ llama- | 2024-06-08 14:05:34 +01:00 |
| `server` | Merge remote-tracking branch 'origin/master' into bins | 2024-06-10 15:38:41 +01:00 |
| `simple` | prefix more cmake targets w/ llama- | 2024-06-08 14:05:34 +01:00 |
| `speculative` | prefix more cmake targets w/ llama- | 2024-06-08 14:05:34 +01:00 |
| `sycl` | rename llama\|main -> llama-cli; consistent RPM bin prefixes | 2024-06-10 15:34:14 +01:00 |
| `tokenize` | prefix more cmake targets w/ llama- | 2024-06-08 14:05:34 +01:00 |
| `train-text-from-scratch` | rename llama\|main -> llama-cli; consistent RPM bin prefixes | 2024-06-10 15:34:14 +01:00 |
| `base-translate.sh` | rename llama\|main -> llama-cli; consistent RPM bin prefixes | 2024-06-10 15:34:14 +01:00 |
| `chat-13B.bat` | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00 |
| `chat-13B.sh` | rename llama\|main -> llama-cli; consistent RPM bin prefixes | 2024-06-10 15:34:14 +01:00 |
| `chat-persistent.sh` | rename llama\|main -> llama-cli; consistent RPM bin prefixes | 2024-06-10 15:34:14 +01:00 |
| `chat-vicuna.sh` | rename llama\|main -> llama-cli; consistent RPM bin prefixes | 2024-06-10 15:34:14 +01:00 |
| `chat.sh` | rename llama\|main -> llama-cli; consistent RPM bin prefixes | 2024-06-10 15:34:14 +01:00 |
| `CMakeLists.txt` | sort cmake example subdirs | 2024-06-08 14:09:28 +01:00 |
| `convert-legacy-llama.py` | ggml : refactor rope norm/neox (#7634) | 2024-06-05 11:29:20 +03:00 |
| `json_schema_to_grammar.py` | rename llama\|main -> llama-cli; consistent RPM bin prefixes | 2024-06-10 15:34:14 +01:00 |
| `json-schema-pydantic-example.py` | server: update refs -> llama-server | 2024-06-06 15:44:40 +01:00 |
| `llama.vim` | llama.vim : added api key support (#5090) | 2024-01-23 08:51:27 +02:00 |
| `llm.vim` | llm.vim : stop generation at multiple linebreaks, bind to \<F2\> (#2879) | 2023-08-30 09:50:55 +03:00 |
| `Miku.sh` | rename llama\|main -> llama-cli; consistent RPM bin prefixes | 2024-06-10 15:34:14 +01:00 |
| `pydantic_models_to_grammar.py` | grammars: x{min,max} repetition operator (#6640) | 2024-06-06 10:07:06 +01:00 |
| `pydantic-models-to-grammar-examples.py` | examples : make pydantic scripts pass mypy and support py3.8 (#5099) | 2024-01-25 14:51:24 -05:00 |
| `reason-act.sh` | rename llama\|main -> llama-cli; consistent RPM bin prefixes | 2024-06-10 15:34:14 +01:00 |
| `regex-to-grammar.py` | JSON schema conversion: faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00 |
| `server-embd.py` | server : refactor (#5882) | 2024-03-07 11:41:53 +02:00 |
| `server-llama2-13B.sh` | server: update refs -> llama-server | 2024-06-06 15:44:40 +01:00 |
| `ts-type-to-grammar.sh` | JSON schema conversion: faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00 |