llama.cpp/scripts (directory listing, latest commit 2024-04-07 16:08:12 +03:00)
| File | Last commit message | Date |
| --- | --- | --- |
| `build-info.cmake` | cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970) | 2023-11-27 21:25:42 +02:00 |
| `build-info.sh` | build : link against build info instead of compiling against it (#3879) | 2023-11-02 08:50:16 +02:00 |
| `check-requirements.sh` | python : add check-requirements.sh and GitHub workflow (#4585) | 2023-12-29 16:50:29 +02:00 |
| `ci-run.sh` | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00 |
| `compare-commits.sh` | cuda : rename build flag to LLAMA_CUDA (#6299) | 2024-03-26 01:16:01 +01:00 |
| `compare-llama-bench.py` | compare-llama-bench.py: fix long hexsha args (#6424) | 2024-04-01 13:30:43 +02:00 |
| `convert-gg.sh` | scripts : helper convert script | 2023-08-27 15:24:58 +03:00 |
| `gen-build-info-cpp.cmake` | cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970) | 2023-11-27 21:25:42 +02:00 |
| `get-flags.mk` | build : pass all warning flags to nvcc via -Xcompiler (#5570) | 2024-02-18 16:21:52 -05:00 |
| `get-hellaswag.sh` | scripts : add get-winogrande.sh | 2024-01-18 20:45:39 +02:00 |
| `get-pg.sh` | scripts : improve get-pg.sh (#4838) | 2024-01-09 19:21:13 +02:00 |
| `get-wikitext-2.sh` | ci : fix wikitext url + compile warnings (#5569) | 2024-02-18 22:39:30 +02:00 |
| `get-wikitext-103.sh` | lookup: complement data from context with general text statistics (#5479) | 2024-03-23 01:24:36 +01:00 |
| `get-winogrande.sh` | scripts : add get-winogrande.sh | 2024-01-18 20:45:39 +02:00 |
| `hf.sh` | scripts : add hf.sh helper script (#5501) | 2024-02-15 15:41:15 +02:00 |
| `install-oneapi.bat` | support SYCL backend windows build (#5208) | 2024-01-31 08:08:07 +05:30 |
| `LlamaConfig.cmake.in` | cuda : rename build flag to LLAMA_CUDA (#6299) | 2024-03-26 01:16:01 +01:00 |
| `pod-llama.sh` | cuda : rename build flag to LLAMA_CUDA (#6299) | 2024-03-26 01:16:01 +01:00 |
| `qnt-all.sh` | scripts : add pipefail | 2023-08-29 10:50:30 +03:00 |
| `run-all-perf.sh` | scripts : add pipefail | 2023-08-29 10:50:30 +03:00 |
| `run-all-ppl.sh` | scripts : add pipefail | 2023-08-29 10:50:30 +03:00 |
| `run-with-preset.py` | scripts : move run-with-preset.py from root to scripts folder | 2024-01-26 17:09:44 +02:00 |
| `server-llm.sh` | cuda : rename build flag to LLAMA_CUDA (#6299) | 2024-03-26 01:16:01 +01:00 |
| `sync-ggml-am.sh` | scripts : sync ggml-cuda folder | 2024-04-07 16:08:12 +03:00 |
| `sync-ggml.last` | sync : ggml | 2024-04-06 18:27:46 +03:00 |
| `sync-ggml.sh` | sync : ggml (#6351) | 2024-03-29 17:45:46 +02:00 |
| `verify-checksum-models.py` | scripts : use /usr/bin/env in shebang (#3313) | 2023-09-22 23:52:23 -04:00 |