llama.cpp/examples/llama.swiftui

Local inference of llama.cpp on an iPhone. This is a sample app that can be used as a starting point for more advanced projects.

For usage instructions and performance stats, check the following discussion: https://github.com/ggerganov/llama.cpp/discussions/4508

Video demonstration:

https://github.com/bachittle/llama.cpp/assets/39804642/e290827a-4edb-4093-9642-2a5e399ec545
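
The app drives inference through llama.cpp's C API. As a rough illustration only (not code taken from the app's sources), the sketch below shows what loading a GGUF model and creating a context can look like from Swift, assuming the C API is exposed to Swift as a `llama` module; the function and path names, error codes, and parameter values are placeholders.

```swift
import Foundation
import llama

// Minimal sketch: load a GGUF model and create an inference context via the
// llama.cpp C API. Parameter values below are illustrative, not tuned.
func loadModel(at path: String) throws -> (model: OpaquePointer, context: OpaquePointer) {
    llama_backend_init()

    var modelParams = llama_model_default_params()
    modelParams.n_gpu_layers = 0 // CPU-only for simplicity; raise to offload layers to the GPU

    guard let model = llama_load_model_from_file(path, modelParams) else {
        throw NSError(domain: "llama", code: 1) // file missing or not a valid GGUF model
    }

    var ctxParams = llama_context_default_params()
    ctxParams.n_ctx = 2048 // context window size; tune for device memory

    guard let context = llama_new_context_with_model(model, ctxParams) else {
        llama_free_model(model)
        throw NSError(domain: "llama", code: 2) // context creation failed
    }

    return (model, context)
}
```

The app's actual loading, tokenization, and sampling logic lives in the `llama.cpp.swift` wrapper and the SwiftUI sources in this directory.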