Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-26 03:14:35 +00:00)
fe680e3d10
* sync : ggml (part 1)
* sync : ggml (part 2, CUDA)
* sync : ggml (part 3, Metal)
* ggml : build fixes
ggml-ci
* cuda : restore lost changes
* cuda : restore lost changes (StableLM rope)
* cmake : enable separable compilation for CUDA (see the sketch after this list)
ggml-ci
* ggml-cuda : remove device side dequantize
* Revert "cmake : enable separable compilation for CUDA"
This reverts commit
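For context on the separable-compilation bullet above: it refers to a CUDA/CMake build setting that lets device code in one translation unit call device functions defined in another, at the cost of an extra device-link step. The sketch below is only an illustration of enabling that property, not the commit's actual diff (which, per the last bullet, was reverted); the target name `ggml-cuda` and its source file are assumed for the example.

```cmake
# Minimal sketch, assuming a hypothetical CUDA static library target
# named "ggml-cuda" built from a single ggml-cuda.cu source file.
cmake_minimum_required(VERSION 3.18)
project(ggml-cuda-sketch LANGUAGES CXX CUDA)

add_library(ggml-cuda STATIC ggml-cuda.cu)

# Enable relocatable device code so __device__ functions defined in one
# .cu file can be called from kernels in another; requires a device-link step.
set_target_properties(ggml-cuda PROPERTIES
    CUDA_SEPARABLE_COMPILATION ON)
```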
| File |
| --- |
| build-info.cmake |
| build-info.sh |
| convert-gg.sh |
| gen-build-info-cpp.cmake |
| get-wikitext-2.sh |
| LlamaConfig.cmake.in |
| qnt-all.sh |
| run-all-perf.sh |
| run-all-ppl.sh |
| server-llm.sh |
| sync-ggml.sh |
| verify-checksum-models.py |