Mirror of https://github.com/ggerganov/llama.cpp.git
Synced 2024-11-11 21:39:52 +00:00
Commit 4760e7cc0b
* sync : ggml (backend v2) (wip)
* sync : migrate examples and llama.cpp to dynamic graphs (wip)
* sync : update tests + fix max op params to 64 ggml-ci
* sync : ggml-cuda ggml-ci
* llama : fix save/load state context size ggml-ci
* sync : try to fix build on tvOS
* sync : pass custom graph sizes in training examples
* sync : update graph copies to new ggml API
* sync : update sync-ggml.sh with new files
* scripts : fix header in sync script
* train : fix context size calculations
* llama : increase inference graph size up to 4096 nodes
* train : allocate grads for backward graphs
* train : allocate grads for gb_tmp
* build-info.cmake
* build-info.sh
* convert-gg.sh
* get-wikitext-2.sh
* LlamaConfig.cmake.in
* qnt-all.sh
* run-all-perf.sh
* run-all-ppl.sh
* server-llm.sh
* sync-ggml.sh
* verify-checksum-models.py