Commit Graph

19 Commits

Author | SHA1 | Message | Date
Georgi Gerganov | b2debf65f2 | parallel : add disabled experimental batch chunking in powers of two | 2023-09-20 20:14:05 +03:00
Georgi Gerganov | ded9b43cad | parallel : fix cases where the input prompts can overflow the batch | 2023-09-20 19:09:25 +03:00
Georgi Gerganov | ee1d670cc6 | parallel : fix bug (extra BOS) + smaller token_prev array | 2023-09-20 17:32:21 +03:00
Georgi Gerganov | b377bf2266 | simple : add parallel decoding support | 2023-09-20 13:06:34 +03:00
Georgi Gerganov | addae65fd4 | llama : improve llama_batch API + simplify parallel example | 2023-09-20 11:03:18 +03:00
Georgi Gerganov | a1327c71c6 | parallel : rename hot-plug to continuous-batching | 2023-09-20 09:24:41 +03:00
Georgi Gerganov | 7b7472ee26 | parallel : minor | 2023-09-20 00:35:10 +03:00
Georgi Gerganov | 6028879f56 | parallel : print misses on each request | 2023-09-19 23:50:05 +03:00
Georgi Gerganov | eed3fd4234 | parallel : count cache misses | 2023-09-19 23:47:47 +03:00
Georgi Gerganov | 8a9aca37c1 | parallel : remove question with short answers | 2023-09-19 23:34:30 +03:00
Georgi Gerganov | 4b5f3cd6bf | parallel : process system prompt once + configurable paramters + llama API | 2023-09-19 17:00:42 +03:00
Georgi Gerganov | 82e20e9ba0 | parallel : remove new line from prompt | 2023-09-19 13:54:41 +03:00
Georgi Gerganov | 16090a5dde | parallel : fix sequence termination criteria | 2023-09-19 13:29:29 +03:00
Georgi Gerganov | 806d397c1a | parallel : try smaller batches when the KV cache is fragmented | 2023-09-19 13:21:36 +03:00
Georgi Gerganov | 36714e16d0 | parallel : various improvements | 2023-09-19 12:29:37 +03:00
Georgi Gerganov | fa0e677820 | llama : extend batch API to select which logits to output | 2023-09-19 00:24:13 +03:00
Georgi Gerganov | 897caccdf4 | fixes : speculative KV cache + llama worst-case graph | 2023-09-18 22:32:28 +03:00
Georgi Gerganov | 466b513851 | parallel : disable hot-plug to avoid cache fragmentation | 2023-09-18 21:34:20 +03:00
Georgi Gerganov | 0161372b9a | parallel : example for serving multiple users in parallel | 2023-09-18 20:37:28 +03:00
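The history above tracks the `parallel` example (introduced in 0161372b9a, with its "hot-plug" scheduling renamed to continuous batching in a1327c71c6). The idea: keep a fixed set of sequence slots busy by admitting a new request the moment a slot frees up, instead of waiting for a whole batch to drain. The toy Python sketch below is not llama.cpp code; the functions, request lengths, and slot count are illustrative assumptions used only to contrast continuous batching with a static-batch baseline.

```python
from collections import deque

def run_continuous_batching(request_lengths, n_slots):
    """Toy model: each request needs `length` decode steps.
    A freed slot is refilled immediately (continuous batching),
    so the batch never drains fully between requests."""
    pending = deque(request_lengths)
    slots = []                      # remaining steps per active sequence
    steps = 0
    while pending or slots:
        # admit waiting requests into any free slots right away
        while pending and len(slots) < n_slots:
            slots.append(pending.popleft())
        # one parallel decode step advances every active sequence
        slots = [r - 1 for r in slots]
        slots = [r for r in slots if r > 0]
        steps += 1
    return steps

def run_static_batching(request_lengths, n_slots):
    """Baseline: a batch of up to n_slots requests runs to completion
    before the next batch is admitted."""
    pending = deque(request_lengths)
    steps = 0
    while pending:
        batch = [pending.popleft() for _ in range(min(n_slots, len(pending)))]
        steps += max(batch)         # batch finishes with its longest request
    return steps
```

With two slots and requests of 8, 2, 2, and 2 decode steps, the short requests slip into the slot freed by each finished sequence, so continuous batching finishes in 8 steps while the static baseline needs 10. Real continuous batching in llama.cpp additionally has to manage KV-cache slots, which is where the fragmentation commits above (806d397c1a, 466b513851) come in.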