
llama.cpp/example/embedding

This example demonstrates how to generate a high-dimensional embedding vector for a given text with llama.cpp.

Quick Start

To get started right away, run the following command, making sure to use the correct path for the model you have:

Unix-based systems (Linux, macOS, etc.):

./llama-embedding -m ./path/to/model --pooling mean --log-disable -p "Hello World!" 2>/dev/null

Windows:

llama-embedding.exe -m ./path/to/model --pooling mean --log-disable -p "Hello World!" 2>$null

The above command will output space-separated float values.
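To use the result in another program, the space-separated output can be read back into a list of floats. A minimal sketch (the parsing helper below is illustrative, not part of llama.cpp):

```python
def parse_embedding(output: str) -> list[float]:
    # llama-embedding prints one space-separated float per dimension;
    # split on whitespace and convert each token.
    return [float(tok) for tok in output.split()]

# Example with a captured output string (produced by the command above):
vec = parse_embedding("0.012 -0.034 0.567")
print(len(vec))  # number of embedding dimensions -> 3
```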

extra parameters

--embd-normalize integer

integer  description           formula
  -1     none
   0     max absolute int16    (32760 * x_i) / max|x_i|
   1     taxicab               x_i / sum|x_i|
   2     euclidean (default)   x_i / sqrt(sum x_i^2)
  >2     p-norm                x_i / (sum |x_i|^p)^(1/p)
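The formulas in the table can be sketched in a few lines of Python. This is an illustration of the math only, inferred from the table above, not the actual C++ implementation:

```python
import math

def embd_normalize(x: list[float], norm: int) -> list[float]:
    # Sketch of the --embd-normalize options (assumed behavior from the formulas).
    if norm == -1:                       # none
        return list(x)
    if norm == 0:                        # max absolute int16
        m = max(abs(v) for v in x)
        return [32760.0 * v / m for v in x]
    if norm == 1:                        # taxicab (L1)
        s = sum(abs(v) for v in x)
        return [v / s for v in x]
    if norm == 2:                        # euclidean (L2, the default)
        s = math.sqrt(sum(v * v for v in x))
        return [v / s for v in x]
    # p-norm for p > 2
    s = sum(abs(v) ** norm for v in x) ** (1.0 / norm)
    return [v / s for v in x]

print(embd_normalize([3.0, 4.0], 2))  # -> [0.6, 0.8]
```

With the euclidean default, the resulting vector has unit length, which is what makes downstream cosine-similarity comparisons reduce to a dot product.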

--embd-output-format 'string'

'string'  description
''        space-separated floats, same as before (default)
'array'   single embedding: [[x_1,...,x_n]]; multiple embeddings: [[x_1,...,x_n],[x_1,...,x_n],...,[x_1,...,x_n]]
'json'    openai style
'json+'   openai style, plus a cosine similarity matrix
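The cosine similarity matrix added by 'json+' pairs every prompt with every other prompt. A sketch of that computation (assumed from the standard definition, not taken from the C++ code):

```python
import math

def cos_sim(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def cos_sim_matrix(embeddings: list[list[float]]) -> list[list[float]]:
    # One row per embedding; entry [i][j] compares embedding i with embedding j.
    return [[cos_sim(a, b) for b in embeddings] for a in embeddings]

print(cos_sim_matrix([[1.0, 0.0], [0.0, 1.0]]))  # -> [[1.0, 0.0], [0.0, 1.0]]
```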

--embd-separator "string"

"string"      description
"\n"          (default)
"<#embSep#>"  for example
"<#sep#>"     another example

examples

Unix-based systems (Linux, macOS, etc.):

./llama-embedding -p 'Castle<#sep#>Stronghold<#sep#>Dog<#sep#>Cat' --pooling mean --embd-separator '<#sep#>' --embd-normalize 2  --embd-output-format '' -m './path/to/model.gguf' --n-gpu-layers 99 --log-disable 2>/dev/null

Windows:

llama-embedding.exe -p 'Castle<#sep#>Stronghold<#sep#>Dog<#sep#>Cat' --pooling mean --embd-separator '<#sep#>' --embd-normalize 2 --embd-output-format '' -m './path/to/model.gguf' --n-gpu-layers 99 --log-disable 2>$null