llama.cpp/examples
compilade 9bc6db28d0
ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151)
* ggml-quants : 1.625 bpw ternary packing for BitNet 1.58b
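
The core packing idea, as a minimal sketch (illustrative, not the exact
q1_3 block layout): five ternary digits fit in one byte in base 3, since
3^5 = 243 <= 256, and per-block metadata brings the rate to 1.625 bpw.

    #include <stdint.h>

    // pack five trits in {-1, 0, 1} into one byte, most significant digit first
    static uint8_t pack5(const int8_t t[5]) {
        uint8_t q = 0;
        for (int i = 0; i < 5; ++i) {
            q = q*3 + (uint8_t)(t[i] + 1); // map {-1,0,1} to {0,1,2}
        }
        return q;
    }

    // unpack one byte back into five trits
    static void unpack5(uint8_t q, int8_t t[5]) {
        for (int i = 4; i >= 0; --i) {
            t[i] = (int8_t)(q % 3) - 1;
            q /= 3;
        }
    }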

* ggml-quants : faster 1.625 bpw AVX2 vec_dot

Not using a lookup table anymore makes it match q4_0 speed.

* gguf-py : fix formatting

* llama : remove spaces on empty line

* ggml-quants : subtract 1 when converting back to epi8

This makes the 1.625 bpw type go faster than q4_0. Still not the fastest.
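
A sketch of that step (hedged; the real kernel does more around it): trits
are unpacked unsigned as one {0, 1, 2} byte each, and a single byte-wise
subtract shifts them to signed {-1, 0, 1} before the multiply.

    #include <immintrin.h>

    // assumption: u holds 32 trits, one per byte, as unsigned {0, 1, 2};
    // subtracting 1 in the epi8 domain yields signed ternary {-1, 0, 1}
    static inline __m256i trits_to_signed(__m256i u) {
        return _mm256_sub_epi8(u, _mm256_set1_epi8(1));
    }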

* ggml-quants : Q2_2 now faster than Q4_K with AVX2

* ggml-quants : cleanup Q1_3 code formatting

* ggml-quants : ARM NEON vec_dot for q2_2 and q1_3

* ggml-quants : use ceiling division when quantizing q1_3
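
For reference, the usual integer ceiling-division idiom (generic; shown
here only to pin down what "ceiling division" means):

    // ceil(n / d) without floating point, for n >= 0 and d > 0
    static inline int ceil_div(int n, int d) {
        return (n + d - 1) / d;
    }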

* convert-hf : simplify BitNet pre-quantization

This still results in the exact same tensor weights and scales,
but it reveals some weirdness in the current algorithm.

* convert-hf : allow converting the weird BitNet 1.3B

Its FFN size is 5460, which is not a multiple of the quantization block sizes.
The offending tensors are kept in F16,
which makes the final model 5.01 bpw.

* bitnet : replace 1.58b with b1.58, as in the paper

* ggml-quants : fix build failure on Windows

* ggml-quants : attempt to fix Arm 32-bit support

* ggml : add some informative comments in q1_3 vec_dot

* ggml : add TQ1_0 and TQ2_0 ternary quantization types

* ggml : even faster TQ2_0

* ggml : also faster TQ1_0

Same optimization as for TQ2_0 by offsetting the sum instead of the weights.
This makes TQ1_0 almost as fast as Q8_0 on AVX2.
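
The identity behind it: with trits stored unsigned as u in {0, 1, 2} and
true weights u - 1, sum((u_i - 1) * q_i) = sum(u_i * q_i) - sum(q_i), so
the -1 is applied once to the accumulated sum instead of to every weight.
A scalar sketch (the actual kernels are vectorized):

    #include <stdint.h>

    // ternary dot product: u[i] in {0,1,2} encodes the weight u[i] - 1;
    // subtract sum(q) once instead of subtracting 1 from every weight
    static int32_t vec_dot_ternary(const uint8_t * u, const int8_t * q, int n) {
        int32_t dot = 0, qsum = 0;
        for (int i = 0; i < n; ++i) {
            dot  += u[i] * q[i];
            qsum += q[i];
        }
        return dot - qsum;
    }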

* ggml : fix build issues in certain environments

* ggml : add NEON vec_dot implementation for TQ1_0 and TQ2_0

* ggml : avoid directly using vmlal_high_s8, for 32-bit ARM compat

The compiler seems smart enough to use the same instruction
even when using vget_high_s8 instead.
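
The portable spelling, as a sketch:

    #include <arm_neon.h>

    // widening multiply-accumulate of the high halves of two int8x16_t;
    // vmlal_high_s8 is AArch64-only, but this form also builds on 32-bit ARM
    // and still compiles to SMLAL2 on AArch64
    static inline int16x8_t mla_high(int16x8_t acc, int8x16_t a, int8x16_t b) {
        return vmlal_s8(acc, vget_high_s8(a), vget_high_s8(b));
    }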

* ggml : remove q1_3 and q2_2

No more 1.625 bpw and 2.000 bpw,
now instead using 1.6875 bpw and 2.0625 bpw
with TQ1_0 and TQ2_0, respectively.
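
The rates follow directly from the QK_K = 256 super-block layouts. A sketch
of the two blocks (field names as in ggml; ggml_half is the 2-byte f16 scale):

    #include <stdint.h>

    typedef uint16_t ggml_half; // f16 scale, stored as raw bits

    // TQ1_0: 48 bytes at 5 trits/byte (240 elems) + 4 bytes at 4 trits/byte
    // (16 elems) + 2-byte scale = 54 bytes per 256 weights = 1.6875 bpw
    typedef struct {
        uint8_t   qs[48];
        uint8_t   qh[4];
        ggml_half d;
    } block_tq1_0;

    // TQ2_0: 64 bytes at 2 bits/elem + 2-byte scale
    // = 66 bytes per 256 weights = 2.0625 bpw
    typedef struct {
        uint8_t   qs[64];
        ggml_half d;
    } block_tq2_0;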

* llama : remove the separate scale tensors of BitNet b1.58

They won't be needed, since the remaining ternary quant types have
built-in scales.

* ggml-quants : rename fields of TQ1_0 and TQ2_0 structs for consistency

* ggml-quants : allow using vdotq_s32 in TQ2_0 vec_dot

Not yet tested on hardware which supports it;
it might not work or might not even compile. But it also might.
It should improve performance on recent ARM CPUs.
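
A hedged sketch of the shape of it (not the actual kernel):

    #include <arm_neon.h>

    // accumulate 16 int8 products into 4 int32 lanes; caller reduces at the end
    static inline int32x4_t dot16_acc(int32x4_t acc, int8x16_t a, int8x16_t b) {
    #if defined(__ARM_FEATURE_DOTPROD)
        return vdotq_s32(acc, a, b); // one SDOT instruction on recent CPUs
    #else
        // widening MLA fallback; safe here since one operand is ternary,
        // so the int16 partial sums cannot overflow
        int16x8_t p = vmull_s8(vget_low_s8(a), vget_low_s8(b));
        p = vmlal_s8(p, vget_high_s8(a), vget_high_s8(b));
        return vaddq_s32(acc, vpaddlq_s16(p));
    #endif
    }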

* ggml-quants : remove comment about possible format change of TQ2_0

Making it slightly more convenient for AVX512
but less convenient for everything else is not worth the trouble.

* gguf-py : Numpy (de)quantization for TQ1_0 and TQ2_0

* ggml-quants : use roundf instead of nearest_int for TQ1_0 and TQ2_0

This does not change anything for ternary models,
since their values should never end up being in halfway cases anyway.
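
For context: ggml's nearest_int rounds with the float magic-number trick,
which resolves halfway cases to even, while roundf resolves them away from
zero. A sketch of the trick:

    #include <string.h>

    // adding 2^23 + 2^22 forces float arithmetic to round away the fraction;
    // the integer is then recovered from the mantissa bits (|fval| < 2^22)
    static inline int nearest_int(float fval) {
        float val = fval + 12582912.f; // 2^23 + 2^22
        int i; memcpy(&i, &val, sizeof(int));
        return (i & 0x007fffff) - 0x00400000;
    }
    // roundf(2.5f) == 3 (ties away from zero); nearest_int(2.5f) == 2 (ties to even)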

* convert : allow direct conversion to TQ1_0 and TQ2_0

The token embeddings and output tensors are kept in F16
to allow quantizing them to Q4_K and Q6_K with llama-quantize.

* llama : handle fallback for TQ1_0 and TQ2_0 with Q4_0

Q4_0 is not completely symmetric (so not lossless for ternary models),
but it should be good enough.
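
(A rough sketch of why, assuming Q4_0's d = max/-8 scale convention: when
the largest-magnitude value in a block is +s, +s lands exactly on the grid
point -8*d = s, while -s saturates at 7*d and comes back as -7s/8.)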

* ggml-quants : allow using ARM dot product instructions for TQ1_0

* ggml-quants : deduplicate TQ1_0 and TQ2_0 __ARM_FEATURE_DOTPROD support

* ggml : remove unused ggml_mul special case

It would otherwise conflict with the more general
optimization coming with Mamba-2.

* ggml : handle TQ1_0 and TQ2_0 in dequantization-based operators

* test-backend-ops : add TQ1_0 and TQ2_0 comments for later

Not yet adding them uncommented, because some backends like SYCL and Metal
do not properly handle unknown types in supports_op for GGML_OP_MUL_MAT
(and Metal also doesn't handle it for GGML_OP_GET_ROWS).
Support for TQ1_0 and TQ2_0 on backends other than CPU
will be added in follow-up pull requests.
2024-09-05 21:48:47 -04:00
baby-llama Threadpool: take 2 (#8672) 2024-08-30 01:20:53 +02:00
batched batched: fix n_predict parameter (#8527) 2024-07-17 10:34:28 +03:00
batched-bench batched-bench : handle empty -npl (#8839) 2024-08-04 13:55:03 +03:00
batched.swift Detokenizer fixes (#8039) 2024-07-05 19:01:35 +02:00
benchmark Threadpool: take 2 (#8672) 2024-08-30 01:20:53 +02:00
convert-llama2c-to-ggml build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
cvector-generator Threadpool: take 2 (#8672) 2024-08-30 01:20:53 +02:00
deprecation-warning examples : remove finetune and train-text-from-scratch (#8669) 2024-07-25 10:39:04 +02:00
embedding Add support for encoder-only T5 models (#8900) 2024-08-10 11:43:26 +02:00
eval-callback common : remove duplicate function llama_should_add_bos_token (#8778) 2024-08-15 10:23:23 +03:00
export-lora Threadpool: take 2 (#8672) 2024-08-30 01:20:53 +02:00
gbnf-validator llama : move vocab, grammar and sampling into separate files (#8508) 2024-07-23 13:10:17 +03:00
gguf gguf : handle null name during init (#8587) 2024-07-20 17:15:42 +03:00
gguf-hash gguf-hash : update clib.json to point to original xxhash repo (#8491) 2024-07-16 10:14:16 +03:00
gguf-split build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
gritlm llama : allow pooled embeddings on any model (#7477) 2024-06-21 08:38:22 +03:00
imatrix common : remove duplicate function llama_should_add_bos_token (#8778) 2024-08-15 10:23:23 +03:00
infill common : remove duplicate function llama_should_add_bos_token (#8778) 2024-08-15 10:23:23 +03:00
jeopardy build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
llama-bench llama-bench : fix NUL terminators in CPU name (#9313) 2024-09-05 02:19:39 +02:00
llama.android examples: fix android example cannot be generated continuously (#8621) 2024-07-22 09:54:42 +03:00
llama.swiftui Threadpool: take 2 (#8672) 2024-08-30 01:20:53 +02:00
llava llava : the function "clip" should be int (#9237) 2024-08-30 07:21:57 +02:00
lookahead common : Changed tuple to struct (TODO fix) (#8823) 2024-08-05 18:14:10 +02:00
lookup common : Changed tuple to struct (TODO fix) (#8823) 2024-08-05 18:14:10 +02:00
main llama-cli : remove duplicated log message (#9275) 2024-09-02 15:36:43 +03:00
main-cmake-pkg Removes multiple newlines at the end of files that are breaking the editorconfig step of CI. (#8258) 2024-07-02 12:18:10 -04:00
parallel common : Changed tuple to struct (TODO fix) (#8823) 2024-08-05 18:14:10 +02:00
passkey passkey : add short intro to README.md [no-ci] (#8317) 2024-07-05 09:14:24 +03:00
perplexity common : remove duplicate function llama_should_add_bos_token (#8778) 2024-08-15 10:23:23 +03:00
quantize ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151) 2024-09-05 21:48:47 -04:00
quantize-stats ggml : minor naming changes (#8433) 2024-07-12 10:46:02 +03:00
retrieval retrieval : fix memory leak in retrieval query handling (#8955) 2024-08-15 10:40:12 +03:00
rpc Merge commit from fork 2024-08-09 23:03:21 +03:00
save-load-state common : Changed tuple to struct (TODO fix) (#8823) 2024-08-05 18:14:10 +02:00
server server : test script : add timeout for all requests (#9282) 2024-09-02 22:08:38 +02:00
simple simple : update name of executable to llama-simple (#8885) 2024-08-06 16:44:35 +02:00
speculative Threadpool: take 2 (#8672) 2024-08-30 01:20:53 +02:00
sycl [SYCL] Updated SYCL device filtering (#8901) 2024-08-07 11:25:36 +01:00
tokenize common : remove duplicate function llama_should_add_bos_token (#8778) 2024-08-15 10:23:23 +03:00
base-translate.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
chat-13B.bat Create chat-13B.bat (#592) 2023-03-29 20:21:09 +03:00
chat-13B.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
chat-persistent.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
chat-vicuna.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
chat.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
CMakeLists.txt examples : remove finetune and train-text-from-scratch (#8669) 2024-07-25 10:39:04 +02:00
convert_legacy_llama.py convert-*.py: GGUF Naming Convention Refactor and Metadata Override Refactor (#7499) 2024-07-18 20:40:15 +10:00
json_schema_pydantic_example.py py : type-check all Python scripts with Pyright (#8341) 2024-07-07 15:04:39 -04:00
json_schema_to_grammar.py py : type-check all Python scripts with Pyright (#8341) 2024-07-07 15:04:39 -04:00
llama.vim llama.vim : added api key support (#5090) 2024-01-23 08:51:27 +02:00
llm.vim llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) 2023-08-30 09:50:55 +03:00
Miku.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
pydantic_models_to_grammar_examples.py examples : Rewrite pydantic_models_to_grammar_examples.py (#8493) 2024-07-20 22:09:17 -04:00
pydantic_models_to_grammar.py pydantic : replace uses of __annotations__ with get_type_hints (#8474) 2024-07-14 19:51:21 -04:00
reason-act.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
regex_to_grammar.py py : switch to snake_case (#8305) 2024-07-05 07:53:33 +03:00
server_embd.py py : type-check all Python scripts with Pyright (#8341) 2024-07-07 15:04:39 -04:00
server-llama2-13B.sh build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) 2024-06-13 00:41:52 +01:00
ts-type-to-grammar.sh JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) 2024-04-12 19:43:38 +01:00