root/llama.cpp
Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-09-22 21:16:20 +00:00)
Actions
Workflows:
build.yml
close-issue.yml
docker.yml
editorconfig.yml
gguf-publish.yml
labeler.yml
nix-ci-aarch64.yml
nix-ci.yml
nix-flake-update.yml
nix-publish-flake.yml
python-check-requirements.yml
python-lint.yml
python-type-check.yml
server.yml
Recent runs (all scheduled on branch master; each listed with a duration of 0s):

Run   Date                        Commit
#188  2024-09-22 00:42:17 +00:00  quantize : improve type name parsing (#9570)
#175  2024-09-21 00:42:17 +00:00  server : clean-up completed tasks from waiting list (#9531)
#165  2024-09-20 00:42:17 +00:00  ggml : fix n_threads_cur initialization with one thread (#9538)
#143  2024-09-19 00:42:17 +00:00  arg : add env variable for parallel (#9513)
#123  2024-09-18 00:42:17 +00:00  ggml : move common CPU backend impl to new header (#9509)
#93   2024-09-17 00:42:14 +00:00  common : reimplement logging (#9418)
#79   2024-09-16 00:42:14 +00:00  ggml : ggml_type_name return "NONE" for invalid values (#9458)
#62   2024-09-15 00:42:14 +00:00  server : add loading html page while model is loading (#9468)
#45   2024-09-14 00:42:14 +00:00  server : Add option to return token pieces in /tokenize endpoint (#9108)
#17   2024-09-13 00:42:14 +00:00  llama : skip token bounds check when evaluating embeddings (#9437)
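One of the runs above covers the commit "server : Add option to return token pieces in /tokenize endpoint (#9108)". As a rough sketch of what that option means for a client, the snippet below builds a request payload and parses a response of the shape the PR title suggests. The `with_pieces` flag and the `id`/`piece` field names are assumptions based on the commit message, and the token ids in the sample response are made up for illustration; check the server's own documentation for the actual schema.

```python
import json

# Hypothetical request body for the llama.cpp server's /tokenize endpoint.
# With "with_pieces" set, the server is expected to return, per token, the
# text piece alongside the numeric id instead of a bare id list.
payload = json.dumps({"content": "Hello world", "with_pieces": True})

# Illustrative response in the assumed shape (ids are invented, not real):
sample_response = (
    '{"tokens": [{"id": 101, "piece": "Hello"}, {"id": 102, "piece": " world"}]}'
)
tokens = json.loads(sample_response)["tokens"]

# Concatenating the pieces should reconstruct the original input text.
pieces = [t["piece"] for t in tokens]
print("".join(pieces))
```

The payload would be POSTed to the running server's `/tokenize` endpoint; the parsing logic is what a client would apply to the JSON it gets back.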