update llama-rpc-server bin name + doc

Author: Olivier Chafik
Date: 2024-06-11 14:42:03 +01:00
parent ee3a086fdf
commit e474ef1df4
3 changed files with 20 additions and 23 deletions

Makefile

@@ -106,7 +106,7 @@ ifeq ($(UNAME_S),Darwin)
 endif
 
 ifdef LLAMA_RPC
-BUILD_TARGETS += rpc-server
+BUILD_TARGETS += llama-rpc-server
 endif
 
 default: $(BUILD_TARGETS)

@@ -699,7 +699,7 @@ ggml-rpc.o: ggml-rpc.cpp ggml-rpc.h
 rpc-server.o: examples/rpc/rpc-server.cpp ggml-rpc.h
 	$(CXX) $(CXXFLAGS) -c $< -o $@
 
-rpc-server: rpc-server.o ggml.o llama.o $(COMMON_DEPS) $(OBJS)
+llama-rpc-server: rpc-server.o ggml.o llama.o $(COMMON_DEPS) $(OBJS)
 	$(CXX) $(CXXFLAGS) $^ -o $@ $(LDFLAGS)
 endif # LLAMA_RPC
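For anyone consuming this change via the Makefile, the build invocation changes along with the target name. A minimal sketch, assuming the usual llama.cpp convention that setting `LLAMA_RPC=1` on the `make` command line satisfies the `ifdef LLAMA_RPC` guard shown above:

```bash
# Sketch: build the renamed RPC server through the Makefile.
# Assumption: LLAMA_RPC=1 is what enables the `ifdef LLAMA_RPC` block above.
make -j LLAMA_RPC=1 llama-rpc-server
```

After this commit the old `rpc-server` target no longer exists, so any script still invoking `make rpc-server` needs the new name.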

examples/rpc/CMakeLists.txt

@@ -1,2 +1,3 @@
-add_executable(rpc-server rpc-server.cpp)
-target_link_libraries(rpc-server PRIVATE ggml llama)
+set(TARGET llama-rpc-server)
+add_executable(${TARGET} rpc-server.cpp)
+target_link_libraries(${TARGET} PRIVATE ggml llama)
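On the CMake side the executable name changes the same way, so the target can be built in isolation with `--target`. A minimal sketch, assuming `-DLLAMA_RPC=ON` (the flag documented in the README below) is what pulls `examples/rpc` into the build; the build directory name here is arbitrary:

```bash
# Sketch: configure with RPC support, then build only the renamed target.
# Assumption: -DLLAMA_RPC=ON gates the examples/rpc subdirectory (per README).
cmake -B build -DLLAMA_RPC=ON
cmake --build build --config Release --target llama-rpc-server
```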

examples/rpc/README.md

@@ -1,7 +1,7 @@
 ## Overview
 
-The `rpc-server` allows running `ggml` backend on a remote host.
-The RPC backend communicates with one or several instances of `rpc-server` and offloads computations to them.
+`llama-rpc-server` allows running `ggml` backend on a remote host.
+The RPC backend communicates with one or several instances of `llama-rpc-server` and offloads computations to them.
 This can be used for distributed LLM inference with `llama.cpp` in the following way:
 
 ```mermaid
@@ -10,13 +10,13 @@ flowchart TD
 rpcb---|TCP|srvb
 rpcb-.-|TCP|srvn
 subgraph hostn[Host N]
-srvn[rpc-server]-.-backend3["Backend (CUDA,Metal,etc.)"]
+srvn[llama-rpc-server]-.-backend3["Backend (CUDA,Metal,etc.)"]
 end
 subgraph hostb[Host B]
-srvb[rpc-server]---backend2["Backend (CUDA,Metal,etc.)"]
+srvb[llama-rpc-server]---backend2["Backend (CUDA,Metal,etc.)"]
 end
 subgraph hosta[Host A]
-srva[rpc-server]---backend["Backend (CUDA,Metal,etc.)"]
+srva[llama-rpc-server]---backend["Backend (CUDA,Metal,etc.)"]
 end
 subgraph host[Main Host]
 ggml[llama.cpp]---rpcb[RPC backend]
@@ -25,7 +25,7 @@ flowchart TD
 ```
 
 Each host can run a different backend, e.g. one with CUDA and another with Metal.
-You can also run multiple `rpc-server` instances on the same host, each with a different backend.
+You can also run multiple `llama-rpc-server` instances on the same host, each with a different backend.
 
 ## Usage
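The "multiple instances on the same host" point above is easiest to see with concrete commands. A minimal sketch with hypothetical build directories and ports, assuming one tree was configured with `-DLLAMA_CUDA=ON` and another left on the default CPU backend; `-p` is the port flag used throughout this README:

```bash
# Sketch: two llama-rpc-server instances on one host, each from a build
# configured for a different backend (CUDA vs. default CPU).
# Directory names and ports are hypothetical examples.
./build-rpc-cuda/bin/llama-rpc-server -p 50052 &
./build-rpc-cpu/bin/llama-rpc-server -p 50053 &
```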
@@ -33,16 +33,14 @@ On each host, build the corresponding backend with `cmake` and add `-DLLAMA_RPC=
 For example, to build the CUDA backend with RPC support:
 
 ```bash
-mkdir build-rpc-cuda
-cd build-rpc-cuda
-cmake .. -DLLAMA_CUDA=ON -DLLAMA_RPC=ON
-cmake --build . --config Release
+cmake -B build-rpc-cuda -DLLAMA_CUDA=ON -DLLAMA_RPC=ON
+cmake --build build-rpc-cuda --config Release
 ```
 
-Then, start the `rpc-server` with the backend:
+Then, start `llama-rpc-server` with the backend:
 
 ```bash
-$ bin/rpc-server -p 50052
+$ bin/llama-rpc-server -p 50052
 create_backend: using CUDA backend
 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
 ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
@@ -53,21 +51,19 @@ Starting RPC server on 0.0.0.0:50052
 When using the CUDA backend, you can specify the device with the `CUDA_VISIBLE_DEVICES` environment variable, e.g.:
 
 ```bash
-$ CUDA_VISIBLE_DEVICES=0 bin/rpc-server -p 50052
+$ CUDA_VISIBLE_DEVICES=0 bin/llama-rpc-server -p 50052
 ```
 
-This way you can run multiple `rpc-server` instances on the same host, each with a different CUDA device.
+This way you can run multiple `llama-rpc-server` instances on the same host, each with a different CUDA device.
 
 On the main host build `llama.cpp` only with `-DLLAMA_RPC=ON`:
 
 ```bash
-mkdir build-rpc
-cd build-rpc
-cmake .. -DLLAMA_RPC=ON
-cmake --build . --config Release
+cmake -B build-rpc -DLLAMA_RPC=ON
+cmake --build build-rpc --config Release -t -j
 ```
 
-Finally, use the `--rpc` option to specify the host and port of each `rpc-server`:
+Finally, use the `--rpc` option to specify the host and port of each `llama-rpc-server`:
 
 ```bash
 $ bin/llama-cli -m ../models/tinyllama-1b/ggml-model-f16.gguf -p "Hello, my name is" --repeat-penalty 1.0 -n 64 --rpc 192.168.88.10:50052,192.168.88.11:50052 -ngl 99