Mirror of https://github.com/ggerganov/llama.cpp.git
Synced 2024-11-13 14:29:52 +00:00
Commit 9e359a4f47
* server: #5655 - continue to update other slots on embedding concurrent request.
* server: tests: add multi users embeddings as fixed
* server: tests: adding OAI compatible embedding concurrent endpoint
* server: tests: adding OAI compatible embedding with multiple inputs
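The OAI-compatible embedding tests named above exercise an endpoint that, following the OpenAI embeddings API, accepts either a single string or a list of strings as `input`. A minimal sketch of building such a multi-input request body (the helper name and default model string are hypothetical; field names are assumed from the OpenAI API shape the tests reference):

```python
import json

def build_embedding_request(inputs, model="local-model"):
    # OpenAI-style embeddings payload: "input" may be a single string
    # or a list of strings (the multi-input case the tests cover).
    # Normalize to a list so the server always sees the batched form.
    if isinstance(inputs, str):
        inputs = [inputs]
    return {"model": model, "input": inputs}

payload = build_embedding_request(["first document", "second document"])
print(json.dumps(payload))
```

This only constructs the JSON body; the actual tests drive a running server with concurrent requests, which is what the slot-update fix in #5655 addresses.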
steps/
environment.py
issues.feature
parallel.feature
security.feature
server.feature
wrong_usages.feature