llama.cpp/scripts
Latest commit f3f65429c4 by Georgi Gerganov, 2024-06-26 18:33:02 +03:00:
llama : reorganize source code + improve CMake (#8006)

* scripts : update sync [no ci]
* files : relocate [no ci]
* ci : disable kompute build [no ci]
* cmake : fixes [no ci]
* server : fix mingw build (ggml-ci)
* cmake : minor [no ci]
* cmake : link math library [no ci]
* cmake : build normal ggml library (not object library) [no ci]
* cmake : fix kompute build (ggml-ci)
* make,cmake : fix LLAMA_CUDA + replace GGML_CDEF_PRIVATE (ggml-ci)
* move public backend headers to the public include directory (#8122)
* nix test
* spm : fix metal header
* scripts : fix sync paths [no ci]
* scripts : sync ggml-blas.h [no ci]

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: slaren <slarengh@gmail.com>
File | Last commit | Date
build-info.sh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00
check-requirements.sh | Move convert.py to examples/convert-legacy-llama.py (#7430) | 2024-05-30 21:40:00 +10:00
ci-run.sh | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00
compare-commits.sh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00
compare-llama-bench.py | ggml : remove OpenCL (#7735) | 2024-06-04 21:23:20 +03:00
convert-gg.sh | Move convert.py to examples/convert-legacy-llama.py (#7430) | 2024-05-30 21:40:00 +10:00
debug-test.sh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00
gen-authors.sh | license : update copyright notice + add AUTHORS (#6405) | 2024-04-09 09:23:19 +03:00
gen-unicode-data.py | tokenizer : BPE fixes (#7530) | 2024-06-18 18:40:52 +02:00
get-flags.mk | build : pass all warning flags to nvcc via -Xcompiler (#5570) | 2024-02-18 16:21:52 -05:00
get-hellaswag.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
get-pg.sh | scripts : improve get-pg.sh (#4838) | 2024-01-09 19:21:13 +02:00
get-wikitext-2.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
get-wikitext-103.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
get-winogrande.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
hf.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
install-oneapi.bat | support SYCL backend windows build (#5208) | 2024-01-31 08:08:07 +05:30
pod-llama.sh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00
qnt-all.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
run-all-perf.sh | scripts : add pipefail | 2023-08-29 10:50:30 +03:00
run-all-ppl.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
run-with-preset.py | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
server-llm.sh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00
sync-ggml-am.sh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00
sync-ggml.last | ggml : sync | 2024-06-18 09:50:45 +03:00
sync-ggml.sh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00
verify-checksum-models.py | convert.py : add python logging instead of print() (#6511) | 2024-05-03 22:36:41 +03:00
xxd.cmake | build: generate hex dump of server assets during build (#6661) | 2024-04-21 18:48:53 +01:00
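
The xxd.cmake entry above refers to embedding the server's web assets into the binary as C byte arrays at build time. As a rough illustration only, and not the actual xxd.cmake logic, the following Python sketch performs an `xxd -i` style conversion; the input and output file names are hypothetical.

```python
# Minimal sketch of an "xxd -i" style hex dump, illustrating the kind of
# asset embedding a step like xxd.cmake performs at build time. Not the
# actual implementation; file names below are hypothetical examples.
from pathlib import Path


def dump_as_c_array(src: Path, symbol: str) -> str:
    data = src.read_bytes()
    # Emit 12 bytes per line as 0x.. literals, matching xxd -i's general shape.
    lines = []
    for i in range(0, len(data), 12):
        chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        lines.append(f"  {chunk}")
    body = ",\n".join(lines)
    return (
        f"unsigned char {symbol}[] = {{\n{body}\n}};\n"
        f"unsigned int {symbol}_len = {len(data)};\n"
    )


if __name__ == "__main__":
    # Hypothetical asset path; in the real build the inputs are the server's
    # web assets, converted to headers during the CMake build.
    out = dump_as_c_array(Path("index.html"), "index_html")
    Path("index_html.hpp").write_text(out)
```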