root synced commits to gg/ggml-hide-structs at root/llama.cpp from mirror
2024-09-11 15:56:16 +00:00
ee154457dd
ggml : fix compiler warnings
root synced commits to gg/ggml-rework-cgraph at root/llama.cpp from mirror
2024-09-11 15:56:16 +00:00
root synced new reference gg/ggml-rework-cgraph to root/llama.cpp from mirror
2024-09-11 15:56:16 +00:00
5449c1720d
cont : update all examples except server
d206f87698
cont : llama-cli + common [no ci]
c1845a9512
cont : fixes + add test [no ci]
26816380fd
common : reimplement the logger (wip) [no ci]
5bb2c5dbd2
files : remove accidentally added lora_test submodule (#9430)
1b28061400
llama : skip token bounds check when evaluating embeddings (#9437)
8db003a19d
py : support converting local models (#7547)
0996c5597f
llava : correct args for minicpmv-cli (#9429)
67155ab7f5
feat: Implements retrying logic for downloading models using --model-url flag (#9255)
98038bdc8a
shutil added to imports
810dc7d034
Merge conflict solved
5af118efda
CUDA: fix --split-mode row race condition (#9413)