llama.cpp/.github
Pierrick Hymbert 75cd4c7729
ci: bench: support SSE and fix prompt processing time / server: add tokens usage in stream OAI response (#6495)
* ci: bench: support SSE and fix prompt processing time;
  server: add tokens usage in stream mode (see the client-side sketch below)

* ci: bench: README.md EOL

* ci: bench: remove total pp and tg, as they are not accurate

* ci: bench: fix the case where no tokens are generated

* ci: bench: switch to the 95th percentile for pp and tg, as it is closer to what the server exports in its metrics (see the aggregation sketch below)

* ci: bench: fix finish reason rate
2024-04-06 05:40:47 +02:00
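
The commit notes are terse, so here is a minimal client-side sketch of what the SSE and usage changes might look like from a consumer's point of view: reading the server's streamed OpenAI-compatible response and picking up the `usage` token counts it now carries. The endpoint path, port, payload, and the use of the `requests` library are assumptions based on the standard OpenAI chat-completions wire format, not details taken from the PR diff.

```python
# Minimal sketch: consume an SSE stream from a locally running llama.cpp
# server and read the `usage` object from streamed chunks. Endpoint, port,
# and payload are assumptions; the wire format follows the OpenAI
# chat-completions convention ("data: {...}" lines, "[DONE]" sentinel).
import json
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # assumed local server
    json={
        "model": "default",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
    },
    stream=True,
)

usage = None
for raw in resp.iter_lines():
    if not raw:
        continue  # SSE events are separated by blank lines
    line = raw.decode("utf-8")
    if not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload.strip() == "[DONE]":
        break
    chunk = json.loads(payload)
    # The PR adds token usage to streamed responses; in the OpenAI
    # convention it appears on a late (typically final) chunk.
    if chunk.get("usage"):
        usage = chunk["usage"]

if usage:
    print("prompt_tokens:", usage.get("prompt_tokens"))
    print("completion_tokens:", usage.get("completion_tokens"))
```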
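One bullet replaces the bench's pp/tg totals with 95th percentiles, since a high percentile tracks the server's own exported metrics more closely than a sum, and another bullet handles requests that generate no tokens. Below is a minimal aggregation sketch under those assumptions; the sample data, variable names, and tuple layout are invented and may not match the real bench script.

```python
# Minimal sketch: aggregate per-request prompt-processing (pp) and
# token-generation (tg) rates with a 95th percentile instead of totals.
# Sample values are invented; requests that generated no tokens are
# skipped so they cannot produce a bogus 0 tokens/s entry.
import statistics

# (prompt_tokens, generated_tokens, pp_seconds, tg_seconds) per request
samples = [
    (512, 128, 0.41, 1.90),
    (512, 96, 0.39, 1.41),
    (512, 0, 0.44, 0.00),   # no token generated: excluded from tg stats
    (512, 140, 0.47, 2.10),
]

pp_rates = [p / s for p, _, s, _ in samples if s > 0]
tg_rates = [g / s for _, g, _, s in samples if g > 0 and s > 0]

# statistics.quantiles with n=20 yields the 5th..95th percentile cut
# points; index 18 is the 95th percentile.
pp_p95 = statistics.quantiles(pp_rates, n=20)[18]
tg_p95 = statistics.quantiles(tg_rates, n=20)[18]
print(f"pp p95: {pp_p95:.1f} tokens/s, tg p95: {tg_p95:.1f} tokens/s")
```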
ISSUE_TEMPLATE | server: init functional tests (#5566) | 2024-02-24 12:28:55 +01:00
workflows | ci: bench: support SSE and fix prompt processing time / server: add tokens usage in stream OAI response (#6495) | 2024-04-06 05:40:47 +02:00