Mirror of https://github.com/ggerganov/llama.cpp.git
Synced 2024-12-26 19:34:35 +00:00
Commit 46c69e0e75

* ci : faster CUDA toolkit installation method and use ccache
* remove fetch-depth
* only pack CUDA runtime on master
- bench.yml.disabled
- build.yml
- close-issue.yml
- docker.yml
- editorconfig.yml
- gguf-publish.yml
- labeler.yml
- python-check-requirements.yml
- python-lint.yml
- python-type-check.yml
- server.yml