llama.cpp/examples
Latest commit 78203641fe by Mathijs Henquet
server : Add option to return token pieces in /tokenize endpoint (#9108)
* server : added with_pieces functionality to /tokenize endpoint
* server : added tokenize-with-pieces tests to server.feature
* handle the case where the tokenizer splits along UTF-8 continuation bytes
* added an example of token splitting
* whitespace and CI fixes

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2024-09-12 22:30:11 +02:00
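The commit above adds a `with_pieces` option to the server's `/tokenize` endpoint, so that each token can be returned with its text piece rather than as a bare id. A minimal sketch of the request and response shapes this implies (field names follow the PR title and commit bullets; the server README is the authoritative schema):

```python
import json

def make_tokenize_request(content: str, with_pieces: bool = False) -> str:
    """Build the JSON body for POST /tokenize on llama-server."""
    body = {"content": content}
    if with_pieces:
        # With this flag, tokens come back as {"id": ..., "piece": ...}
        # objects instead of bare integer ids.
        body["with_pieces"] = True
    return json.dumps(body)

def token_ids(response_json: str) -> list:
    """Extract token ids from either response shape."""
    tokens = json.loads(response_json)["tokens"]
    return [t["id"] if isinstance(t, dict) else t for t in tokens]

if __name__ == "__main__":
    # Request body asking for pieces alongside ids:
    print(make_tokenize_request("Hello", with_pieces=True))
    # A hypothetical with_pieces response still yields plain ids:
    print(token_ids('{"tokens": [{"id": 9906, "piece": "Hello"}]}'))
```

Note that, per the commit message, a piece may not be valid UTF-8 when the tokenizer splits inside a multi-byte character, so clients should not assume every `piece` decodes cleanly.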
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| baby-llama | Threadpool: take 2 (#8672) | 2024-08-30 01:20:53 +02:00 |
| batched | common : move arg parser code to arg.cpp (#9388) | 2024-09-09 23:36:09 +02:00 |
| batched-bench | batched-bench : remove unused code (#9305) | 2024-09-11 10:03:54 +03:00 |
| batched.swift | llama : minor sampling refactor (2) (#9386) | 2024-09-09 17:10:46 +02:00 |
| benchmark | ggml : hide ggml_object, ggml_cgraph, ggml_hash_set (#9408) | 2024-09-12 14:23:49 +03:00 |
| convert-llama2c-to-ggml | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| cvector-generator | ggml : hide ggml_object, ggml_cgraph, ggml_hash_set (#9408) | 2024-09-12 14:23:49 +03:00 |
| deprecation-warning | examples : remove finetune and train-text-from-scratch (#8669) | 2024-07-25 10:39:04 +02:00 |
| embedding | llama : move random seed generation to the samplers (#9398) | 2024-09-10 18:04:25 +02:00 |
| eval-callback | common : move arg parser code to arg.cpp (#9388) | 2024-09-09 23:36:09 +02:00 |
| export-lora | ggml : hide ggml_object, ggml_cgraph, ggml_hash_set (#9408) | 2024-09-12 14:23:49 +03:00 |
| gbnf-validator | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00 |
| gen-docs | common : move arg parser code to arg.cpp (#9388) | 2024-09-09 23:36:09 +02:00 |
| gguf | gguf : handle null name during init (#8587) | 2024-07-20 17:15:42 +03:00 |
| gguf-hash | gguf-hash : update clib.json to point to original xxhash repo (#8491) | 2024-07-16 10:14:16 +03:00 |
| gguf-split | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| gritlm | common : move arg parser code to arg.cpp (#9388) | 2024-09-09 23:36:09 +02:00 |
| imatrix | common : move arg parser code to arg.cpp (#9388) | 2024-09-09 23:36:09 +02:00 |
| infill | llama : move random seed generation to the samplers (#9398) | 2024-09-10 18:04:25 +02:00 |
| jeopardy | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-bench | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00 |
| llama.android | llama : minor sampling refactor (2) (#9386) | 2024-09-09 17:10:46 +02:00 |
| llama.swiftui | llama : minor sampling refactor (2) (#9386) | 2024-09-09 17:10:46 +02:00 |
| llava | llava : fix the script error in MobileVLM README (#9054) | 2024-09-12 14:34:22 +03:00 |
| lookahead | common : move arg parser code to arg.cpp (#9388) | 2024-09-09 23:36:09 +02:00 |
| lookup | common : move arg parser code to arg.cpp (#9388) | 2024-09-09 23:36:09 +02:00 |
| main | llama : move random seed generation to the samplers (#9398) | 2024-09-10 18:04:25 +02:00 |
| main-cmake-pkg | Removes multiple newlines at the end of files that is breaking the editorconfig step of CI. (#8258) | 2024-07-02 12:18:10 -04:00 |
| parallel | common : move arg parser code to arg.cpp (#9388) | 2024-09-09 23:36:09 +02:00 |
| passkey | common : move arg parser code to arg.cpp (#9388) | 2024-09-09 23:36:09 +02:00 |
| perplexity | llama : move random seed generation to the samplers (#9398) | 2024-09-10 18:04:25 +02:00 |
| quantize | cmake : fixed the order of linking libraries for llama-quantize (#9450) | 2024-09-12 14:27:14 +03:00 |
| quantize-stats | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00 |
| retrieval | common : move arg parser code to arg.cpp (#9388) | 2024-09-09 23:36:09 +02:00 |
| rpc | readme : add LLMUnity to UI projects (#9381) | 2024-09-09 14:21:38 +03:00 |
| save-load-state | common : move arg parser code to arg.cpp (#9388) | 2024-09-09 23:36:09 +02:00 |
| server | server : Add option to return token pieces in /tokenize endpoint (#9108) | 2024-09-12 22:30:11 +02:00 |
| simple | common : move arg parser code to arg.cpp (#9388) | 2024-09-09 23:36:09 +02:00 |
| speculative | common : move arg parser code to arg.cpp (#9388) | 2024-09-09 23:36:09 +02:00 |
| sycl | enhance run script to be easy to change the parameters (#9448) | 2024-09-12 17:44:17 +08:00 |
| tokenize | common : remove duplicate function llama_should_add_bos_token (#8778) | 2024-08-15 10:23:23 +03:00 |
| base-translate.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00 |
| chat-13B.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat-persistent.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat-vicuna.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| CMakeLists.txt | examples : remove finetune and train-text-from-scratch (#8669) | 2024-07-25 10:39:04 +02:00 |
| convert_legacy_llama.py | convert-*.py: GGUF Naming Convention Refactor and Metadata Override Refactor (#7499) | 2024-07-18 20:40:15 +10:00 |
| json_schema_pydantic_example.py | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00 |
| json_schema_to_grammar.py | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00 |
| llama.vim | llama.vim : added api key support (#5090) | 2024-01-23 08:51:27 +02:00 |
| llm.vim | llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) | 2023-08-30 09:50:55 +03:00 |
| Miku.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| pydantic_models_to_grammar_examples.py | examples : Rewrite pydantic_models_to_grammar_examples.py (#8493) | 2024-07-20 22:09:17 -04:00 |
| pydantic_models_to_grammar.py | pydantic : replace uses of __annotations__ with get_type_hints (#8474) | 2024-07-14 19:51:21 -04:00 |
| reason-act.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| regex_to_grammar.py | py : switch to snake_case (#8305) | 2024-07-05 07:53:33 +03:00 |
| server_embd.py | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00 |
| server-llama2-13B.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| ts-type-to-grammar.sh | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00 |