llama.cpp/example/embedding

This example demonstrates how to generate a high-dimensional embedding vector for a given text with llama.cpp.

Quick Start

To get started right away, run the following command, making sure to use the correct path for the model you have:

Unix-based systems (Linux, macOS, etc.):

./embedding -m ./path/to/model --log-disable -p "Hello World!" 2>/dev/null

Windows:

embedding.exe -m ./path/to/model --log-disable -p "Hello World!" 2>$null

The above command will output space-separated float values.
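The space-separated floats can be parsed with standard tools; for instance, counting the values gives the embedding dimension (which depends on the model). A minimal sketch, using a placeholder string in place of real output:

```shell
# Capture the command's output, e.g.:
#   embd=$(./embedding -m ./path/to/model --log-disable -p "Hello World!" 2>/dev/null)
# Here a short stand-in value is used; real output has hundreds or thousands of values.
embd="0.0123 -0.4567 0.8910"

# Count the space-separated floats to get the vector's dimension
dims=$(echo "$embd" | wc -w)
echo "$dims"
```

For a real model the printed count matches the model's hidden size (e.g. 4096 for many 7B models).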