Add llmaz as another platform to run llama.cpp on Kubernetes

Signed-off-by: kerthcet <kerthcet@gmail.com>
This commit is contained in:
kerthcet 2024-08-20 10:43:41 +08:00
parent cfac111e2b
commit 7323304092


@ -191,6 +191,7 @@ Unless otherwise noted these projects are open-source with permissive licensing:
**Infrastructure:**
- [llmaz](https://github.com/InftyAI/llmaz) - ☸️ Effortlessly serve state-of-the-art LLMs on Kubernetes; see the [llama.cpp example](https://github.com/InftyAI/llmaz/tree/main/docs/examples/llamacpp).
- [Paddler](https://github.com/distantmagic/paddler) - Stateful load balancer custom-tailored for llama.cpp
- [GPUStack](https://github.com/gpustack/gpustack) - Manage GPU clusters for running LLMs