llama.cpp/scripts
| File | Last commit | Date |
| --- | --- | --- |
| build-info.cmake | cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970) | 2023-11-27 21:25:42 +02:00 |
| build-info.sh | build : link against build info instead of compiling against it (#3879) | 2023-11-02 08:50:16 +02:00 |
| check-requirements.sh | python : add check-requirements.sh and GitHub workflow (#4585) | 2023-12-29 16:50:29 +02:00 |
| compare-llama-bench.py | compare-llama-bench: tweak output format (#4910) | 2024-01-13 15:52:53 +01:00 |
| convert-gg.sh | scripts : helper convert script | 2023-08-27 15:24:58 +03:00 |
| gen-build-info-cpp.cmake | cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970) | 2023-11-27 21:25:42 +02:00 |
| get-flags.mk | build : detect host compiler and cuda compiler separately (#4414) | 2023-12-13 12:10:10 -05:00 |
| get-hellaswag.sh | scritps : add helper script to get hellaswag data in txt format | 2024-01-18 11:44:49 +02:00 |
| get-pg.sh | scripts : improve get-pg.sh (#4838) | 2024-01-09 19:21:13 +02:00 |
| get-wikitext-2.sh | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00 |
| LlamaConfig.cmake.in | cmake : fix transient definitions in find pkg (#3411) | 2023-10-02 12:51:49 +03:00 |
| qnt-all.sh | scripts : add pipefail | 2023-08-29 10:50:30 +03:00 |
| run-all-perf.sh | scripts : add pipefail | 2023-08-29 10:50:30 +03:00 |
| run-all-ppl.sh | scripts : add pipefail | 2023-08-29 10:50:30 +03:00 |
| server-llm.sh | scripts : add server-llm.sh (#3868) | 2023-11-01 11:29:07 +02:00 |
| sync-ggml-am.sh | scripts : sync-ggml-am.sh option to skip commits | 2024-01-14 11:08:41 +02:00 |
| sync-ggml.last | sync : ggml | 2024-01-17 20:54:50 +02:00 |
| sync-ggml.sh | sync : ggml (new ops, tests, backend, etc.) (#4359) | 2023-12-07 22:26:54 +02:00 |
| verify-checksum-models.py | scripts : use /usr/bin/env in shebang (#3313) | 2023-09-22 23:52:23 -04:00 |