llama.cpp/.github/workflows
Latest commit f66f582927 by Georgi Gerganov: llama : refactor src/llama.cpp (#10902)
* llama : scatter llama.cpp into multiple modules (wip)
* llama : control-vector -> adapter
* llama : arch
* llama : mmap (ggml-ci)
* ci : remove BUILD_SHARED_LIBS=OFF
* llama : arch (cont)
* llama : chat
* llama : model
* llama : hparams
* llama : adapter
* examples : fix
* rebase
* minor
* llama : kv cache
* llama : impl
* llama : batch
* cont
* llama : context
* minor
* llama : context (cont)
* llama : model loader
* common : update lora
* llama : quant
* llama : quant (cont)
* minor [no ci]

Committed 2025-01-03 10:18:53 +02:00
| File | Last commit | Date |
| --- | --- | --- |
| bench.yml.disabled | ggml-backend : add device and backend reg interfaces (#9707) | 2024-10-03 01:49:47 +02:00 |
| build.yml | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| close-issue.yml | ci : fine-grant permission (#9710) | 2024-10-04 11:47:19 +02:00 |
| docker.yml | devops : add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00 |
| editorconfig.yml | ci: exempt master branch workflows from getting cancelled (#6486) | 2024-04-04 18:30:53 +02:00 |
| gguf-publish.yml | ci : update checkout, setup-python and upload-artifact to latest (#6456) | 2024-04-03 21:01:13 +03:00 |
| labeler.yml | labeler.yml: Use settings from ggerganov/llama.cpp [no ci] (#7363) | 2024-05-19 20:51:03 +10:00 |
| python-check-requirements.yml | py : fix requirements check '==' -> '~=' (#8982) | 2024-08-12 11:02:01 +03:00 |
| python-lint.yml | ci : add ubuntu cuda build, build with one arch on windows (#10456) | 2024-11-26 13:05:07 +01:00 |
| python-type-check.yml | ci : reduce severity of unused Pyright ignore comments (#9697) | 2024-09-30 14:13:16 -04:00 |
| server.yml | ci : pin nodejs to 22.11.0 (#10779) | 2024-12-11 14:59:41 +01:00 |
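The python-check-requirements.yml change above moved the requirements check from exact `==` pins to `~=` specifiers. As a sketch of the difference under PEP 440 (the package name and versions here are illustrative, not taken from the repo's actual requirements files):

```text
# == pins one exact version; anything else fails resolution
numpy==1.26.4

# ~= is the PEP 440 "compatible release" operator:
# numpy~=1.26.4 is equivalent to numpy>=1.26.4, numpy==1.26.*
# so patch releases (1.26.5, 1.26.6, ...) are accepted,
# but 1.27.0 is not
numpy~=1.26.4
```

The practical effect is that CI no longer breaks on every upstream patch release while still guarding against minor-version API changes.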