llama.cpp/examples

Latest commit: e6e7c75d94 by Georgi Gerganov, 2025-01-06 15:36:08 +02:00
  server : fix extra BOS in infill endpoint (#11106)
  * server : update infill tests
| Name | Last commit | Date |
| --- | --- | --- |
| batched | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| batched-bench | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| batched.swift | llama : llama_perf + option to disable timings during decode (#9355) | 2024-09-13 09:53:38 +03:00 |
| convert-llama2c-to-ggml | llama : use LLAMA_TOKEN_NULL (#11062) | 2025-01-06 10:52:15 +02:00 |
| cvector-generator | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| deprecation-warning | Update deprecation-warning.cpp (#10619) | 2024-12-04 23:19:20 +01:00 |
| embedding | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| eval-callback | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| export-lora | examples, ggml : fix GCC compiler warnings (#10983) | 2024-12-26 14:59:11 +01:00 |
| gbnf-validator | llama : minor grammar refactor (#10897) | 2024-12-19 17:42:13 +02:00 |
| gen-docs | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| gguf | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| gguf-hash | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| gguf-split | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| gritlm | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| imatrix | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| infill | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| jeopardy | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-bench | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| llama.android | android : fix llama_batch free (#11014) | 2024-12-30 14:35:13 +02:00 |
| llama.swiftui | llama : use cmake for swift build (#10525) | 2024-12-08 13:14:54 +02:00 |
| llava | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| lookahead | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| lookup | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| main | llama : use LLAMA_TOKEN_NULL (#11062) | 2025-01-06 10:52:15 +02:00 |
| main-cmake-pkg | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| parallel | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| passkey | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| perplexity | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| quantize | Update README.md (#10772) | 2024-12-11 16:16:32 +01:00 |
| quantize-stats | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| retrieval | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| rpc | rpc-server : add support for the SYCL backend (#10934) | 2024-12-23 10:39:30 +02:00 |
| run | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| save-load-state | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| server | server : fix extra BOS in infill endpoint (#11106) | 2025-01-06 15:36:08 +02:00 |
| simple | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| simple-chat | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| speculative | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| speculative-simple | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| sycl | [SYCL] set context default value to avoid memory issue, update guide (#9476) | 2024-09-18 08:30:31 +08:00 |
| tokenize | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| tts | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00 |
| chat-13B.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat-persistent.sh | scripts : fix pattern and get n_tokens in one go (#10221) | 2024-11-09 09:06:54 +02:00 |
| chat-vicuna.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| CMakeLists.txt | tts : add OuteTTS support (#10784) | 2024-12-18 19:27:21 +02:00 |
| convert_legacy_llama.py | metadata: Detailed Dataset Authorship Metadata (#8875) | 2024-11-13 21:10:38 +11:00 |
| json_schema_pydantic_example.py | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00 |
| json_schema_to_grammar.py | grammar : fix JSON Schema for string regex with top-level alt. (#9903) | 2024-10-16 19:03:24 +03:00 |
| llama.vim | llama.vim : bump generation time limit to 3s [no ci] | 2024-10-23 17:16:56 +03:00 |
| llm.vim | llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) | 2023-08-30 09:50:55 +03:00 |
| Miku.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| pydantic_models_to_grammar_examples.py | examples : Rewrite pydantic_models_to_grammar_examples.py (#8493) | 2024-07-20 22:09:17 -04:00 |
| pydantic_models_to_grammar.py | pydantic : replace uses of __annotations__ with get_type_hints (#8474) | 2024-07-14 19:51:21 -04:00 |
| reason-act.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| regex_to_grammar.py | py : switch to snake_case (#8305) | 2024-07-05 07:53:33 +03:00 |
| server_embd.py | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00 |
| server-llama2-13B.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| ts-type-to-grammar.sh | JSON schema conversion: faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00 |