# CI

In addition to [GitHub Actions](https://github.com/ggerganov/llama.cpp/actions), `llama.cpp` uses a custom CI framework:

https://github.com/ggml-org/ci

It monitors the `master` branch for new commits and runs the
[ci/run.sh](https://github.com/ggerganov/llama.cpp/blob/master/ci/run.sh) script on dedicated cloud instances. This allows us
to execute heavier workloads compared to just using GitHub Actions. Over time, the cloud instances will be scaled
to cover various hardware architectures, including GPU and Apple Silicon instances.

Collaborators can optionally trigger the CI run by adding the `ggml-ci` keyword to their commit message.
Only the branches of this repo are monitored for this keyword.

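As a sketch of how this works in practice, the snippet below uses a hypothetical commit message (the message text is made up; only the `ggml-ci` keyword is significant) and checks that the keyword would be picked up:

```shell
# Hypothetical commit message; only the `ggml-ci` keyword matters,
# and it can appear anywhere in the message.
msg="llama : fix tokenizer edge case (ggml-ci)"

# The CI framework scans new commits on monitored branches for the
# keyword; a quick local check that this message would trigger a run:
echo "$msg" | grep -q "ggml-ci" && echo "CI run would be triggered"
```

A real commit would then be made with `git commit -m "$msg"` and pushed to a branch of this repo.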
It is good practice, before publishing changes, to execute the full CI locally on your machine:

```bash
mkdir tmp
bash ./ci/run.sh ./tmp/results ./tmp/mnt
```