llama.cpp
Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2025-01-06 16:51:45 +00:00
1,283 commits · 362 branches · 2,897 tags · 328 MiB
Commit Graph at ba15dfd0be (101 commits)
Author | SHA1       | Message                                                       | Date
slaren | 02d6988121 | Improve cuBLAS performance by dequantizing on the GPU (#1065) | 2023-04-20 03:14:14 +02:00
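
The commit above moves dequantization of the quantized weight blocks onto the GPU before the cuBLAS matrix multiply. As a rough illustration only (not the code from #1065), the sketch below shows a minimal CUDA kernel that unpacks 4-bit blocks into floats on the device and then calls cublasSgemm; the 32-value block layout, the offset of 8, and all function names here are assumptions made for the example.

// Hypothetical sketch: dequantize 4-bit blocks on the GPU, then run cublasSgemm.
// Block layout assumed here: 32 values per block, one float scale, 4-bit values
// stored with an offset of 8 (similar in spirit to llama.cpp's Q4_0, but this is
// not the project's actual code).
#include <cuda_runtime.h>
#include <cublas_v2.h>

#define QK 32  // quantized values per block (assumption)

struct block_q4 {
    float d;                   // per-block scale
    unsigned char qs[QK / 2];  // 32 x 4-bit values packed two per byte
};

// One thread per block of 32 values: unpack both nibbles of each byte and scale.
__global__ void dequantize_q4(const block_q4 *x, float *y, int nblocks) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nblocks) return;
    const float d = x[i].d;
    for (int j = 0; j < QK / 2; ++j) {
        const unsigned char b = x[i].qs[j];
        y[i * QK + 2 * j + 0] = ((b & 0x0F) - 8) * d;  // low nibble
        y[i * QK + 2 * j + 1] = ((b >> 4)   - 8) * d;  // high nibble
    }
}

// Dequantize the weight matrix on the device, then multiply with cuBLAS.
// A_q: quantized weights (k x m after dequantization, column-major)
// B:   dense activations (k x n), C: output (m x n)
// A_f: scratch buffer for m*k dequantized floats; all pointers are device memory.
void matmul_dequant(cublasHandle_t handle,
                    const block_q4 *A_q, const float *B, float *C,
                    int m, int n, int k, float *A_f) {
    const int nblocks = (m * k) / QK;
    const int threads = 256;
    dequantize_q4<<<(nblocks + threads - 1) / threads, threads>>>(A_q, A_f, nblocks);

    const float alpha = 1.0f, beta = 0.0f;
    // C = A^T * B, with A stored as k x m: a standard SGEMM on the dequantized copy.
    cublasSgemm(handle, CUBLAS_OP_T, CUBLAS_OP_N,
                m, n, k,
                &alpha, A_f, k, B, k,
                &beta,  C, m);
}

The benefit of doing this on the device is that only the compact quantized blocks need to be transferred or kept resident; the full-precision copy exists only as a temporary buffer for the GEMM.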