Zay
e790eef21c
llama.swiftui : update models layout (#4826)
* Updated Models Layout
- Added a models drawer
- Added downloading directly from Hugging Face
- Load custom models from local folder
- Delete models by swiping left
* trimmed trailing white space
* Updated Models Layout
2024-01-12 14:48:00 +02:00
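The "delete models by swiping left" item above is the standard SwiftUI swipe-to-delete pattern. A minimal sketch, assuming a hypothetical `ModelFile` type and in-memory `models` array (the app's actual types differ):

```swift
import SwiftUI

// Hypothetical model entry; the app's real type carries more state.
struct ModelFile: Identifiable {
    let id = UUID()
    let name: String
}

struct ModelsDrawer: View {
    @State private var models: [ModelFile] = []

    var body: some View {
        List {
            ForEach(models) { model in
                Text(model.name)
            }
            // Swiping left on a row reveals the system Delete action.
            .onDelete { offsets in
                models.remove(atOffsets: offsets)
            }
        }
    }
}
```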
Georgi Gerganov
42ea63c5a3
llama.swiftui : update readme
2024-01-08 15:57:36 +02:00
Alex Azarov
72d8407b36
llama.swiftui : use llama.cpp as SPM package (#4804)
2024-01-07 10:20:50 +02:00
Alex Azarov
3418c03ecc
llama.swiftui : add visionOS target (#4805)
2024-01-07 09:46:55 +02:00
Daniel Illescas Romero
c75ca5d96f
llama.swiftui : use correct pointer for llama_token_eos (#4797)
2024-01-06 17:12:59 +02:00
Georgi Gerganov
91d38876df
metal : switch back to default.metallib (ggml/681)
ggml-ci
2024-01-05 18:02:06 +02:00
singularity
3c0b585561
llama.swiftui : support loading custom model from file picker (#4767)
* swiftui: support load model from file picker
* swiftui: remove trailing whitespace
2024-01-04 10:22:38 +02:00
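Loading a model via the system file picker, as this commit does, maps naturally onto SwiftUI's `.fileImporter` modifier. A sketch under that assumption; `loadModel(at:)` is a hypothetical stand-in for the app's loader, and access to files outside the sandbox needs security scoping:

```swift
import SwiftUI
import UniformTypeIdentifiers

struct LoadCustomModelButton: View {
    @State private var showPicker = false

    var body: some View {
        Button("Load Custom Model") { showPicker = true }
            .fileImporter(isPresented: $showPicker,
                          allowedContentTypes: [.data]) { result in
                if case .success(let url) = result {
                    // Files picked from outside the sandbox require
                    // security-scoped access before they can be read.
                    if url.startAccessingSecurityScopedResource() {
                        defer { url.stopAccessingSecurityScopedResource() }
                        loadModel(at: url)
                    }
                }
            }
    }

    // Hypothetical: hand the file path to the llama.cpp bindings.
    func loadModel(at url: URL) { }
}
```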
singularity
46cea79e1f
llama.swiftui : fix build of ggml.metallib (#4754)
* metal: fix metal backend init failure in swiftui
* metal: build ggml.metallib instead of copy src
* llama.swift : remove debug flags from metallib build
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-01-04 09:58:16 +02:00
Peter Sugihara
afd997ab60
llama.swiftui : fix infinite loop, output timings, buff UI (#4674)
* fix infinite loop
* slight UI simplification, clearer UX
* clearer UI text, add timings to completion log
2023-12-29 15:58:56 +02:00
Georgi Gerganov
0e18b2e7d0
llama.swiftui : add tinyllama 1.1B F16
2023-12-18 20:17:43 +02:00
Georgi Gerganov
6ff39b129d
llama.swiftui : add more models
2023-12-18 20:05:12 +02:00
Georgi Gerganov
800a489e4a
llama.swiftui : add bench functionality (#4483)
* llama.swiftui : add bench button
* llama.swiftui : initial bench functionality
* force to use n_gpu_layers on simulator
* add download buttons & expose llamaState.loadModel
* update project.pbxproj
* comment #Preview & fix editorconfig check
* gitignore : xcode stuff
* llama.swiftui : UX improvements
* llama.swiftui : avoid data copy via "downloadTask"
* llama.swiftui : remove model from project
* llama : remove "mostly" from model infos
* llama.swiftui : improve bench
---------
Co-authored-by: jhen <developer@jhen.me>
2023-12-17 19:38:41 +02:00
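The "avoid data copy via downloadTask" item above refers to `URLSession`'s download task, which streams the response to a temporary file instead of accumulating the bytes in memory, important for multi-gigabyte model files. A minimal sketch; the URL and destination handling are illustrative, not the app's actual code:

```swift
import Foundation

// Download a large file with downloadTask so the payload never has to
// fit in memory; URLSession writes it to a temporary file on disk.
func downloadModel(from url: URL, to destination: URL,
                   completion: @escaping (Result<URL, Error>) -> Void) {
    let task = URLSession.shared.downloadTask(with: url) { tempURL, _, error in
        if let error = error {
            completion(.failure(error))
            return
        }
        guard let tempURL = tempURL else { return }
        do {
            // Move the temp file into place before the system reclaims it.
            try FileManager.default.moveItem(at: tempURL, to: destination)
            completion(.success(destination))
        } catch {
            completion(.failure(error))
        }
    }
    task.resume()
}
```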
Miwa / Ensan
d208995c6d
swift : fix concatenation method to avoid invalid UTF8 stringification (#4325)
2023-12-04 18:03:49 +02:00
Miwa / Ensan
5c9f90cba1
swift : fix prompt tokenization logic (#4321)
2023-12-04 15:43:45 +02:00
Miwa / Ensan
b220222a64
swift : fix token_to_piece implementation (#4278)
* Fix token_to_piece implementation in Swift
* Fix errors
2023-12-01 20:19:45 +02:00
Bailey Chittle
bb03290c17
examples : iOS example with swift ui (#4159)
* copy to llama.cpp as subdir
* attempt enabling metal, fails
* ggml metal compiles!
* Update README.md
* initial conversion to new format, utf8 errors?
* bug fixes, but now has an invalid memory access :(
* added O3, now has insufficient memory access
* begin sync with master
* update to match latest code, new errors
* fixed it!
* fix for loop conditionals, increase result size
* fix current workflow errors
* attempt a llama.swiftui workflow
* Update .github/workflows/build.yml
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-11-27 16:56:52 +02:00