# llama.cpp/examples/batched.swift

This is a Swift clone of `examples/batched`.

```bash
$ make
$ ./batched_swift MODEL_PATH [PROMPT] [PARALLEL]
```
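A concrete invocation might look like the sketch below. The model path and prompt are placeholder values, and `PARALLEL` is assumed to be the number of sequences decoded in parallel (mirroring the C++ `examples/batched` program this example clones):

```shell
# Build the Swift example from this directory.
make

# Hypothetical usage: generate 4 parallel continuations of the prompt
# from a local GGUF model (path is a placeholder, not shipped with the repo).
./batched_swift ./models/model.gguf "Hello my name is" 4
```

Omitting `PROMPT` and `PARALLEL` falls back to the example's built-in defaults, as suggested by the square brackets marking them optional.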