llama.cpp/.github/workflows

Latest commit: 46c69e0e75 by Diego Devesa, 2024-11-27 11:03:25 +01:00
ci : faster CUDA toolkit installation method and use ccache (#10537)

* ci : faster CUDA toolkit installation method and use ccache
* remove fetch-depth
* only pack CUDA runtime on master
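The commit above describes enabling ccache in the CI workflows. As a rough illustration only, a compiler-cache step in a GitHub Actions job typically looks like the sketch below; the action name, version, and cache key here are assumptions for illustration, not copied from `build.yml`:

```yaml
# Hypothetical sketch of a ccache step in a GitHub Actions job.
# The action (hendrikmuhs/ccache-action) and the key value are assumptions.
- name: ccache
  uses: hendrikmuhs/ccache-action@v1.2
  with:
    # Keying the cache per job keeps caches from different build
    # configurations (e.g. CUDA vs. CPU-only) separate.
    key: ${{ github.job }}
```

With the cache action in place, CMake builds pick up ccache via `-DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache`, so repeat CI runs recompile only changed translation units.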
| File | Last commit | Date |
| --- | --- | --- |
| bench.yml.disabled | ggml-backend : add device and backend reg interfaces (#9707) | 2024-10-03 01:49:47 +02:00 |
| build.yml | ci : faster CUDA toolkit installation method and use ccache (#10537) | 2024-11-27 11:03:25 +01:00 |
| close-issue.yml | ci : fine-grant permission (#9710) | 2024-10-04 11:47:19 +02:00 |
| docker.yml | ci : publish the docker images created during scheduled runs (#10515) | 2024-11-26 13:05:20 +01:00 |
| editorconfig.yml | ci: exempt master branch workflows from getting cancelled (#6486) | 2024-04-04 18:30:53 +02:00 |
| gguf-publish.yml | ci : update checkout, setup-python and upload-artifact to latest (#6456) | 2024-04-03 21:01:13 +03:00 |
| labeler.yml | labeler.yml: Use settings from ggerganov/llama.cpp [no ci] (#7363) | 2024-05-19 20:51:03 +10:00 |
| python-check-requirements.yml | py : fix requirements check '==' -> '~=' (#8982) | 2024-08-12 11:02:01 +03:00 |
| python-lint.yml | ci : add ubuntu cuda build, build with one arch on windows (#10456) | 2024-11-26 13:05:07 +01:00 |
| python-type-check.yml | ci : reduce severity of unused Pyright ignore comments (#9697) | 2024-09-30 14:13:16 -04:00 |
| server.yml | server : replace behave with pytest (#10416) | 2024-11-26 16:20:18 +01:00 |