root / llama.cpp
Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-11-11 13:30:35 +00:00
Actions
Workflows:
build.yml
close-issue.yml
docker.yml
editorconfig.yml
gguf-publish.yml
labeler.yml
nix-ci-aarch64.yml
nix-ci.yml
nix-flake-update.yml
nix-publish-flake.yml
python-check-requirements.yml
python-lint.yml
python-type-check.yml
server.yml
Run    Commit message                                        Commit      Pushed by  Ref                     Time                        Duration
#1017  metal : more precise Q*K in FA vec kernel (#10247)    b0cefea58a  root       b4066                   2024-11-11 13:30:37 +00:00  0s
#1016  server : enable KV cache defrag by default (#10233)   b141e5f6ef  root       b4065                   2024-11-11 13:30:37 +00:00  0s
#1015  metal : more precise Q*K in FA vec kernel (#10247)    b0cefea58a  root       master                  2024-11-11 13:30:36 +00:00  0s
#1011  wip                                                   ec9f74a2c2  root       sl/dl-backend           2024-11-11 05:20:35 +00:00  0s
#1006  ggml : add ggml-metal-impl.h                          ab6a3b7c36  root       gg/metal-refactor-args  2024-11-10 21:10:37 +00:00  0s
#1005  metal : more precise Q*K in FA vec kernel             0f6f1c789c  root       gg/metal-fa-f32-dot     2024-11-10 21:10:37 +00:00  0s