Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-26 03:14:35 +00:00)

Commit 3b169441df
ggml-alloc : v3 (ggml/727)

* ggml-alloc v3
* fix ci
* whisper : check for backend buffer allocation failures (see the sketch after this list)
* whisper : avoid leaks when initialization fails
* cleanup
* style fixes
* sync : ggml
* update llama.cpp, clip.cpp, export-lora.cpp
* update finetune.cpp, train-text-from-scratch.cpp
* ggml-backend : reduce alignment to 32 to match gguf and fix mmap

Co-authored-by: slaren <slarengh@gmail.com>
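The practical upshot of the whisper-related items is that a backend buffer allocation can now fail gracefully, so callers must check the result and release what they already created instead of leaking. Below is a minimal sketch of that pattern, not the actual whisper.cpp code: it uses the public ggml helpers `ggml_backend_alloc_ctx_tensors` (which returns NULL on failure) and the CPU backend for simplicity; the stand-in "model" tensor and the error path are illustrative assumptions.

```c
#include <stdio.h>
#include "ggml.h"
#include "ggml-alloc.h"
#include "ggml-backend.h"

int main(void) {
    struct ggml_init_params params = {
        /*.mem_size   =*/ ggml_tensor_overhead() * 8,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ true, // tensor data will live in a backend buffer
    };
    struct ggml_context * ctx = ggml_init(params);

    // a stand-in "model": one large weight tensor (illustrative only)
    ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 4096, 4096);

    ggml_backend_t backend = ggml_backend_cpu_init();

    // allocate every tensor in ctx into one backend buffer;
    // a failed allocation comes back as NULL and must be checked
    ggml_backend_buffer_t buf = ggml_backend_alloc_ctx_tensors(ctx, backend);
    if (buf == NULL) {
        fprintf(stderr, "failed to allocate backend buffer\n");
        // free what was already initialized instead of leaking
        ggml_backend_free(backend);
        ggml_free(ctx);
        return 1;
    }

    // ... load weights, build graphs, run inference ...

    ggml_backend_buffer_free(buf);
    ggml_backend_free(backend);
    ggml_free(ctx);
    return 0;
}
```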
Files:

build-info.cmake
build-info.sh
check-requirements.sh
ci-run.sh
compare-llama-bench.py
convert-gg.sh
gen-build-info-cpp.cmake
get-flags.mk
get-hellaswag.sh
get-pg.sh
get-wikitext-2.sh
get-winogrande.sh
install-oneapi.bat
LlamaConfig.cmake.in
qnt-all.sh
run-all-perf.sh
run-all-ppl.sh
run-with-preset.py
server-llm.sh
sync-ggml-am.sh
sync-ggml.last
sync-ggml.sh
verify-checksum-models.py