Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2025-01-13 12:10:18 +00:00.
Commit `46c69e0e75`: ci : faster CUDA toolkit installation method and use ccache

* remove fetch-depth
* only pack CUDA runtime on master
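As a rough illustration of the kind of workflow change this commit describes (not the repository's actual CI configuration), the sketch below installs the CUDA toolkit from NVIDIA's network apt repository instead of a full local installer and caches compiler output with ccache. The action versions, CUDA package names, and build options here are assumptions, not taken from llama.cpp.

```yaml
# Hypothetical GitHub Actions job (sketch only): CUDA toolkit from the
# NVIDIA network apt repository plus ccache. Not this repository's workflow.
name: cuda-build-sketch

on: [push, pull_request]

jobs:
  build-cuda:
    runs-on: ubuntu-22.04
    steps:
      # shallow clone is the default (fetch-depth: 1); no override needed
      - uses: actions/checkout@v4

      - name: Install CUDA toolkit via network repo
        run: |
          wget -q https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
          sudo dpkg -i cuda-keyring_1.1-1_all.deb
          sudo apt-get update
          # install only the pieces needed to compile (assumed package names)
          sudo apt-get install -y cuda-nvcc-12-4 cuda-cudart-dev-12-4

      - name: Set up ccache
        uses: hendrikmuhs/ccache-action@v1.2
        with:
          key: cuda-build

      - name: Build with CMake
        run: |
          # the project's CUDA enable option is omitted; its name varies by version
          cmake -B build \
                -DCMAKE_C_COMPILER_LAUNCHER=ccache \
                -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
          cmake --build build -j
```

Pulling individual packages from the network repository avoids downloading the multi-gigabyte local installer, and ccache makes repeat CI builds largely incremental.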
Directory contents:

| Name |
| --- |
| ISSUE_TEMPLATE |
| workflows |
| labeler.yml |
| pull_request_template.md |