mirror of
https://github.com/ggerganov/llama.cpp.git
synced 2024-12-26 03:14:35 +00:00
5b2c04f492
This commit adds the `--pooling` option to the README.md file in the `examples/embedding` directory. The motivation for adding this option is that if the model used does not specify a pooling type, the embedding example fails with the following error message:

```console
main: error: pooling type NONE not supported
```

This commit also updates the name of the executable in the examples section.
61 lines
2.2 KiB
Markdown
# llama.cpp/example/embedding
This example demonstrates how to generate a high-dimensional embedding vector for a given text with llama.cpp.
## Quick Start
To get started right away, run the following command, making sure to use the correct path for the model you have:
### Unix-based systems (Linux, macOS, etc.):
```bash
./llama-embedding -m ./path/to/model --pooling mean --log-disable -p "Hello World!" 2>/dev/null
```
### Windows:
```powershell
llama-embedding.exe -m ./path/to/model --pooling mean --log-disable -p "Hello World!" 2>$null
```
The above command will output space-separated float values.
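The space-separated output is easy to consume from other tools. Below is a hypothetical parsing helper in Python; the sample string stands in for the real command's captured stdout and is not actual model output.

```python
# Hypothetical helper for consuming llama-embedding's plain output.
# The sample string below stands in for the real command's stdout.

def parse_embedding(output: str) -> list[float]:
    """Split space-separated float values into a Python list."""
    return [float(tok) for tok in output.split()]

sample = "0.0123 -0.0456 0.0789"   # stand-in output, not real model values
vec = parse_embedding(sample)
print(len(vec))  # 3
```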
## extra parameters
### --embd-normalize $integer$
| $integer$ | description         | formula |
|-----------|---------------------|---------|
| $-1$      | none                |
| $0$       | max absolute int16  | $\Large{{32760 * x_i} \over\max \lvert x_i\rvert}$
| $1$       | taxicab             | $\Large{x_i \over\sum \lvert x_i\rvert}$
| $2$       | euclidean (default) | $\Large{x_i \over\sqrt{\sum x_i^2}}$
| $>2$      | p-norm              | $\Large{x_i \over\sqrt[p]{\sum \lvert x_i\rvert^p}}$
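The formulas above can be sketched in Python; this is a rough reference for the math, not the actual C++ implementation used by the example.

```python
def embd_normalize(x: list[float], norm: int) -> list[float]:
    """Rough sketch of the --embd-normalize variants described above."""
    if norm == -1:                      # none
        return list(x)
    if norm == 0:                       # max absolute int16
        m = max(abs(v) for v in x)
        return [32760 * v / m for v in x]
    # norm == 1: taxicab, norm == 2: euclidean (default), norm > 2: p-norm
    s = sum(abs(v) ** norm for v in x) ** (1.0 / norm)
    return [v / s for v in x]

print(embd_normalize([3.0, 4.0], 2))  # [0.6, 0.8]
```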
### --embd-output-format $'string'$
|
|
| $'string'$ | description | |
|
|
|------------|------------------------------|--|
|
|
| '' | same as before | (default)
|
|
| 'array' | single embeddings | $[[x_1,...,x_n]]$
|
|
| | multiple embeddings | $[[x_1,...,x_n],[x_1,...,x_n],...,[x_1,...,x_n]]$
|
|
| 'json' | openai style |
|
|
| 'json+' | add cosine similarity matrix |
|
|
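The 'array' format is valid JSON, so it can be loaded directly. A minimal sketch; the string below is a made-up stand-in for real output, not actual model values.

```python
import json

# Made-up stand-in for 'array' output with two tiny embeddings:
raw = "[[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]"
embeddings = json.loads(raw)
print(len(embeddings), len(embeddings[0]))  # 2 3
```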
### --embd-separator $"string"$
|
|
| $"string"$ | |
|
|
|--------------|-|
|
|
| "\n" | (default)
|
|
| "<#embSep#>" | for exemple
|
|
| "<#sep#>" | other exemple
|
|
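When embedding several texts in one call, the prompt passed to `-p` is the texts joined by the chosen separator. A small hypothetical helper for assembling such a prompt:

```python
def build_prompt(texts: list[str], sep: str = "<#sep#>") -> str:
    """Join multiple input texts with the --embd-separator string."""
    return sep.join(texts)

print(build_prompt(["Castle", "Stronghold", "Dog", "Cat"]))
# Castle<#sep#>Stronghold<#sep#>Dog<#sep#>Cat
```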
## examples
### Unix-based systems (Linux, macOS, etc.):
```bash
./llama-embedding -p 'Castle<#sep#>Stronghold<#sep#>Dog<#sep#>Cat' --pooling mean --embd-separator '<#sep#>' --embd-normalize 2 --embd-output-format '' -m './path/to/model.gguf' --n-gpu-layers 99 --log-disable 2>/dev/null
```
### Windows:
```powershell
llama-embedding.exe -p 'Castle<#sep#>Stronghold<#sep#>Dog<#sep#>Cat' --pooling mean --embd-separator '<#sep#>' --embd-normalize 2 --embd-output-format '' -m './path/to/model.gguf' --n-gpu-layers 99 --log-disable 2>$null
```
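The cosine similarity matrix that the 'json+' format adds can also be reproduced from the returned embeddings. A minimal sketch of the underlying formula; the vectors here are made up:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# With --embd-normalize 2 the vectors are already unit length, so the
# dot product alone equals the cosine similarity.
print(cosine_similarity([1.0, 0.0], [1.0, 1.0]))
```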