Miwa / Ensan
5c9f90cba1
swift : fix prompt tokenization logic ( #4321 )
2023-12-04 15:43:45 +02:00
Miwa / Ensan
b220222a64
swift : fix token_to_piece implementation ( #4278 )
* Fix token_to_piece implementation in Swift
* Fix errors
2023-12-01 20:19:45 +02:00
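The usual calling pattern for llama_token_to_piece from Swift is roughly the following; a minimal sketch assuming the four-argument C signature of this period (model, token, buffer, length) and that a negative return value means the buffer was too small, with its magnitude giving the required size. Function and variable names are illustrative, not the code from the PR above.

```swift
import llama  // assumption: the module name exposed by llama.cpp's Swift package

// Minimal sketch of the common llama_token_to_piece calling pattern from Swift:
// try a small buffer first; a negative return is taken as "buffer too small",
// with its magnitude giving the required size. Names are illustrative.
func tokenToPiece(_ token: llama_token, model: OpaquePointer) -> String {
    var buf = [CChar](repeating: 0, count: 8)
    var n = llama_token_to_piece(model, token, &buf, Int32(buf.count))
    if n < 0 {
        buf = [CChar](repeating: 0, count: Int(-n))
        n = llama_token_to_piece(model, token, &buf, Int32(buf.count))
    }
    guard n > 0 else { return "" }
    let bytes = buf[0..<Int(n)].map { UInt8(bitPattern: $0) }
    return String(decoding: bytes, as: UTF8.self)
}
```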
eastriver
2568a4bf54
main.swift : fix eos checking ( #4197 )
llama_token_eos(const struct llama_model *) was being passed a variable of type struct llama_context as its argument.
2023-11-24 11:25:10 +02:00
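Per the commit message, llama_token_eos() takes a llama_model pointer, while the Swift example had been passing the llama_context. A minimal sketch of the corrected check, with illustrative names:

```swift
import llama  // assumption: the module name exposed by llama.cpp's Swift package

// Minimal sketch of the corrected end-of-generation check: llama_token_eos()
// takes the llama_model pointer, not the llama_context. Names are illustrative.
func isEndOfGeneration(_ token: llama_token, model: OpaquePointer) -> Bool {
    return token == llama_token_eos(model)
}
```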
Georgi Gerganov
0e89203b51
speculative : add tree-based sampling example ( #3624 )
* sampling : one sequence per sampling context
ggml-ci
* speculative : add tree-based sampling support
ggml-ci
* speculative : reuse the n_parallel CLI param
* speculative : refactor sampling
* examples : fix build after sampling refactoring
ggml-ci
* batched : fix n_seq_id
* sampling : fix malloc
ggml-ci
* swift : fix build
ggml-ci
* swift : try to fix build
ggml-ci
* prompts : add assistant.txt
* common : add llama_batch_add() and llama_batch_clear() helpers (a Swift sketch of the same pattern follows this entry)
* speculative : minor refactor
ggml-ci
* minor : comments + rename
ggml-ci
* speculative : fix off-by-one for n_drafted
* speculative : fix the n_drafted fix + p constants
2023-10-18 16:21:57 +03:00
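The llama_batch_add()/llama_batch_clear() helpers mentioned above live in the C++ common library; a Swift port of the same pattern, writing directly into the C llama_batch struct, might look like the sketch below. Field access and the force-unwrap on seq_id assume the usual Swift import of the C header; names are illustrative.

```swift
import llama  // assumption: the module name exposed by llama.cpp's Swift package

// Sketch of the llama_batch_clear()/llama_batch_add() pattern ported to Swift,
// writing directly into the C llama_batch struct.
func batchClear(_ batch: inout llama_batch) {
    batch.n_tokens = 0
}

func batchAdd(_ batch: inout llama_batch, _ id: llama_token, _ pos: llama_pos,
              _ seqIds: [llama_seq_id], _ logits: Bool) {
    let i = Int(batch.n_tokens)
    batch.token   [i] = id
    batch.pos     [i] = pos
    batch.n_seq_id[i] = Int32(seqIds.count)
    for (j, s) in seqIds.enumerated() {
        batch.seq_id[i]![j] = s            // per-token list of sequence ids
    }
    batch.logits  [i] = logits ? 1 : 0     // 1 = request logits for this position
    batch.n_tokens += 1
}
```

Such a batch would typically be allocated with llama_batch_init() and submitted with llama_decode().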
staviq
1a159553f9
tokenizer : special token handling ( #3538 )
* Rewrite special token handling from #1931
* shorten param name, add st verification by type
* use offsets instead of copy by substr
* formatting, remove copying iterator on delete
* llama : normalize code-style
* swift fix
* print prefix/suffix when verbose; main: split prefix / input / suffix
* don't add a space when using special tokens
* minor : comment + spacing
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-10-17 18:11:01 +03:00
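With special token handling in place, tokenization can be asked to recognize special tokens in the input rather than splitting them into plain-text pieces. A minimal sketch, assuming the llama_tokenize signature of this period (model, text, text length, token buffer, buffer size, add_bos flag, special flag); the flag order, the UTF-8-based buffer sizing, and all names are assumptions for illustration.

```swift
import llama  // assumption: the module name exposed by llama.cpp's Swift package

// Minimal sketch of tokenizing with special-token parsing enabled: the final
// flag asks the tokenizer to recognize special tokens in the text instead of
// tokenizing them as plain characters.
func tokenizeWithSpecials(_ text: String, model: OpaquePointer, addBos: Bool) -> [llama_token] {
    let utf8Count = text.utf8.count                  // size by UTF-8 bytes, not Character count
    let maxTokens = utf8Count + (addBos ? 1 : 0) + 1
    var tokens = [llama_token](repeating: 0, count: maxTokens)
    let n = llama_tokenize(model, text, Int32(utf8Count), &tokens, Int32(maxTokens),
                           addBos, /* special */ true)
    return n >= 0 ? Array(tokens.prefix(Int(n))) : []
}
```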
Zane Shannon
24ba3d829e
examples : add batched.swift + improve CI for swift ( #3562 )
2023-10-11 06:14:05 -05:00
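A batched Swift example drives decoding through the C llama_batch API; a minimal sketch of the allocate/fill/decode/free cycle it revolves around. Only llama_batch_init, llama_decode, and llama_batch_free are taken from the C API; the wrapper and its names are illustrative.

```swift
import llama  // assumption: the module name exposed by llama.cpp's Swift package

// Sketch of the allocate/fill/decode/free cycle behind a batched example.
func withBatch(_ context: OpaquePointer, capacity: Int32,
               fill: (inout llama_batch) -> Void) -> Bool {
    var batch = llama_batch_init(capacity, 0, 1)   // token batch, no embeddings, 1 seq id per token
    defer { llama_batch_free(batch) }
    fill(&batch)                                   // caller adds tokens, positions, sequence ids
    return llama_decode(context, batch) == 0       // non-zero return signals failure
}
```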