root / llama.cpp
Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2025-01-05 16:24:34 +00:00
391 Commits · 365 Branches · 2,887 Tags · 326 MiB
Branch: master-e0305ea
Commit Graph
1 Commit

Author    SHA1          Message                                                           Date
slaren    02d6988121    Improve cuBLAS performance by dequantizing on the GPU (#1065)     2023-04-20 03:14:14 +02:00
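
For context, the commit title describes moving dequantization of the quantized weight blocks onto the GPU, so that only the compact quantized data is transferred before the cuBLAS matrix multiply. The following is a minimal, self-contained sketch of that general idea; the block layout (32 weights per block with one float scale, modeled loosely on the Q4_0 format), the kernel, and all names are illustrative assumptions, not the code from commit 02d6988121.

```cpp
// dequant_gemm_sketch.cu
// Sketch only: copy quantized blocks to the GPU, expand them to floats in a
// kernel, then run the matrix multiply with cuBLAS on the dequantized weights.
#include <cuda_runtime.h>
#include <cublas_v2.h>
#include <cstdio>
#include <vector>

#define QK 32  // weights per quantization block (assumed, Q4_0-style)

struct block_q4 {
    float d;                    // per-block scale
    unsigned char qs[QK / 2];   // 32 x 4-bit quants, two per byte
};

// One thread per quantized block: unpack nibbles and apply the scale.
__global__ void dequantize_q4(const block_q4 *x, float *y, int nblocks) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nblocks) return;
    const block_q4 b = x[i];
    for (int j = 0; j < QK / 2; ++j) {
        const int q0 = (b.qs[j] & 0x0F) - 8;
        const int q1 = (b.qs[j] >> 4)   - 8;
        y[i * QK + j * 2 + 0] = q0 * b.d;
        y[i * QK + j * 2 + 1] = q1 * b.d;
    }
}

int main() {
    const int m = 64, n = 64, k = 64;       // toy sizes
    const int nblocks = m * k / QK;

    // Host-side quantized weights (zero-filled here just to keep the sketch runnable).
    std::vector<block_q4> h_w(nblocks);
    std::vector<float> h_x(k * n, 1.0f), h_y(m * n);

    block_q4 *d_w; float *d_wf, *d_x, *d_y;
    cudaMalloc(&d_w,  nblocks * sizeof(block_q4));
    cudaMalloc(&d_wf, m * k * sizeof(float));
    cudaMalloc(&d_x,  k * n * sizeof(float));
    cudaMalloc(&d_y,  m * n * sizeof(float));
    cudaMemcpy(d_w, h_w.data(), nblocks * sizeof(block_q4), cudaMemcpyHostToDevice);
    cudaMemcpy(d_x, h_x.data(), k * n * sizeof(float),      cudaMemcpyHostToDevice);

    // Dequantize on the GPU: only the compact quantized blocks crossed the bus,
    // not a full fp32 weight matrix.
    dequantize_q4<<<(nblocks + 255) / 256, 256>>>(d_w, d_wf, nblocks);

    // Run the GEMM with cuBLAS on the dequantized (column-major) weights.
    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                &alpha, d_wf, m, d_x, k, &beta, d_y, m);

    cudaMemcpy(h_y.data(), d_y, m * n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", h_y[0]);

    cublasDestroy(handle);
    cudaFree(d_w); cudaFree(d_wf); cudaFree(d_x); cudaFree(d_y);
    return 0;
}
```

Building with something like `nvcc dequant_gemm_sketch.cu -lcublas` and running it is only meant to show the shape of the approach: transfer quantized blocks, expand them in a kernel, and hand the resulting float matrix to cublasSgemm.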