Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-11-11 13:30:35 +00:00
5e31828d3e

ggml : add RPC backend

The RPC backend proxies all operations to a remote server which runs a regular backend (CPU, CUDA, Metal, etc.).

* set TCP_NODELAY
* add CI workflows
* Address review comments
* fix warning
* implement llama_max_devices() for RPC
* Address review comments
* Address review comments
* wrap sockfd into a struct
* implement get_alignment and get_max_size
* add get_device_memory
* fix warning
* win32 support
* add README
* readme : trim trailing whitespace
* Address review comments
* win32 fix
* Address review comments
* fix compile warnings on macos
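As a rough illustration of the proxying idea described in the commit message, below is a minimal, hypothetical C sketch of a client connecting to an RPC server and querying its device memory. It assumes the ggml-rpc header added by this commit exposes ggml_backend_rpc_init() and ggml_backend_rpc_get_device_memory() roughly as the bullet points suggest; the endpoint address is invented, and the actual header in the repository is the authoritative reference.

```c
// Hypothetical sketch, not taken from this repo page: connect to a remote
// rpc-server (which forwards ops to its local CPU/CUDA/Metal backend) and
// print how much device memory it reports.
#include <stdio.h>
#include "ggml-backend.h"
#include "ggml-rpc.h"

int main(void) {
    const char * endpoint = "192.168.1.10:50052";   // invented server address

    // Initialize the RPC backend; all ggml operations on this backend are
    // proxied over the socket to the remote server.
    ggml_backend_t backend = ggml_backend_rpc_init(endpoint);
    if (backend == NULL) {
        fprintf(stderr, "failed to connect to %s\n", endpoint);
        return 1;
    }

    // Query free/total memory of the remote device (see "add get_device_memory").
    size_t free_mem = 0, total_mem = 0;
    ggml_backend_rpc_get_device_memory(endpoint, &free_mem, &total_mem);
    printf("remote device memory: %zu free / %zu total bytes\n", free_mem, total_mem);

    ggml_backend_free(backend);
    return 0;
}
```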
bench.yml
build.yml
close-issue.yml
code-coverage.yml
docker.yml
editorconfig.yml
gguf-publish.yml
nix-ci-aarch64.yml
nix-ci.yml
nix-flake-update.yml
nix-publish-flake.yml
python-check-requirements.yml
python-lint.yml
server.yml
zig-build.yml