diff --git a/examples/server/README.md b/examples/server/README.md
index 898e32bb6..bd369444b 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -26,8 +26,8 @@ Command line options:
 - `--embedding`: Enable embedding extraction, Default: disabled.
 - `-np N`, `--parallel N`: Set the number of slots for process requests (default: 1)
 - `-cb`, `--cont-batching`: enable continuous batching (a.k.a dynamic batching) (default: disabled)
 - `-spf FNAME`, `--system-prompt-file FNAME` Set a file to load a system prompt (initial prompt of all slots), this is useful for chat applications. [See more](#change-system-prompt-on-runtime)
--
+- `--mmproj MMPROJ_FILE`: Path to a multimodal projector file for LLaVA.

 ## Build

@@ -162,6 +162,8 @@ node index.js

     `n_probs`: If greater than 0, the response also contains the probabilities of top N tokens for each generated token (default: 0)

+    `image_data`: An array of objects holding base64-encoded image `data` and an `id` to be referenced in `prompt`. You can control where an image appears in the prompt with a tag of the form `[img-N]`, e.g. `USER:[img-12]Describe the image in detail.\nASSISTANT:`. Here `[img-12]` is replaced by the embeddings of the image with id 12 in the accompanying `image_data` array: `{..., "image_data": [{"data": "<BASE64_STRING>", "id": 12}]}`. Use `image_data` only with multimodal models, e.g., LLaVA.
+
 *Result JSON:*

 Note: When using streaming mode (`stream`) only `content` and `stop` will be returned until end of completion.
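
For context (not part of the diff above), here is a rough sketch of how the new `--mmproj` flag and the `image_data` field could be exercised together, assuming the server's default port (8080) and placeholder model paths and image file:

```sh
# Sketch only: model paths, image file, and prompt are placeholders.
# Start the server with a LLaVA model plus its multimodal projector.
./server -m models/llava/ggml-model-q4_k.gguf --mmproj models/llava/mmproj-model-f16.gguf

# Base64-encode the image (GNU coreutils; on macOS use `base64 -i my_image.jpg`)
# and reference it in the prompt via the matching [img-12] tag.
IMG_B64=$(base64 -w 0 my_image.jpg)

curl --request POST \
    --url http://localhost:8080/completion \
    --header "Content-Type: application/json" \
    --data '{
        "prompt": "USER:[img-12]Describe the image in detail.\nASSISTANT:",
        "n_predict": 128,
        "image_data": [{"data": "'"$IMG_B64"'", "id": 12}]
    }'
```

The `id` in each `image_data` entry must match the number inside the corresponding `[img-N]` tag so the server knows which image embeddings to substitute at that position in the prompt.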