root / llama.cpp
Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-09-22 21:16:20 +00:00
Actions / All Workflows:
build.yml
close-issue.yml
docker.yml
editorconfig.yml
gguf-publish.yml
labeler.yml
nix-ci-aarch64.yml
nix-ci.yml
nix-flake-update.yml
nix-publish-flake.yml
python-check-requirements.yml
python-lint.yml
python-type-check.yml
server.yml
Run  | Commit                                                               | Trigger                          | Branch | Date                       | Duration
#195 | Update CUDA graph on scale change plus clear nodes/params (#9550)    | Scheduled                        | master | 2024-09-22 12:26:17 +00:00 | 0s
#182 | examples : flush log upon ctrl+c (#9559)                             | Scheduled                        | master | 2024-09-21 12:26:17 +00:00 | 0s
#173 | server : clean-up completed tasks from waiting list (#9531)          | Scheduled                        | master | 2024-09-20 12:26:17 +00:00 | 0s
#157 | server : match OAI structured output response (#9527)                | Scheduled                        | master | 2024-09-19 12:26:17 +00:00 | 0s
#137 | llama : fix n_vocab init for 'no_vocab' case (#9511)                 | Scheduled                        | master | 2024-09-18 12:26:17 +00:00 | 0s
#108 | convert : identify missing model files (#9397)                       | Scheduled                        | master | 2024-09-17 12:26:14 +00:00 | 0s
#102 | convert : identify missing model files (#9397)                       | Commit d54c21df7e pushed by root | master | 2024-09-17 12:26:14 +00:00 | 0s
#87  | py : add "LLaMAForCausalLM" conversion support (#9485)               | Scheduled                        | master | 2024-09-16 12:26:14 +00:00 | 0s
#69  | cmake : use list(APPEND ...) instead of set() + dedup linker (#9463) | Scheduled                        | master | 2024-09-15 12:26:14 +00:00 | 0s
#53  | llama : llama_perf + option to disable timings during decode (#9355) | Scheduled                        | master | 2024-09-14 12:26:14 +00:00 | 0s
#27  | cann: Fix error when running a non-exist op (#9424)                  | Scheduled                        | master | 2024-09-13 12:26:14 +00:00 | 0s
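Nearly all of the runs above are marked "Scheduled", i.e. fired by a cron timer at roughly 12:26 UTC each day, while run #102 was triggered by a commit push. As a hedged sketch only (this is not the actual contents of any workflow file listed above, whose real triggers may differ), a workflow trigger matching that cadence could look like:

```yaml
# Hypothetical sketch of a trigger block; the real configuration lives in
# the repository's workflow files (build.yml, server.yml, etc.).
on:
  schedule:
    # Cron fields are minute hour day-of-month month day-of-week, in UTC.
    - cron: "26 12 * * *"   # once a day at 12:26 UTC, like the runs above
  push:
    branches: [master]      # commit-triggered runs, like run #102 above
```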