llama.cpp/tests
Latest commit: a46f50747b by Xuan Son Nguyen, 2024-02-22 10:33:24 +02:00

server : fallback to chatml, add AlphaMonarch chat template (#5628)

* server: fallback to chatml
* add new chat template
* server: add AlphaMonarch to test chat template
* server: only check model template if there is no custom tmpl
* remove TODO
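
In short, the server now probes the model's built-in chat template at startup and switches to ChatML when the template is not recognized, and it skips that probe entirely when the user supplies a custom template. A minimal sketch of that logic, assuming llama_chat_apply_template() (the real API added in #5538) returns a negative value for unrecognized templates; the helper and parameter names here are illustrative, not the server's actual code:

    #include <cstdio>
    #include <string>
    #include "llama.h"

    // Hypothetical helper (name illustrative): probe the model's built-in
    // chat template by formatting a dummy message with it.
    static bool model_template_supported(const llama_model * model) {
        const llama_chat_message chat[] = {{"user", "test"}};
        char buf[1024];
        // tmpl == nullptr selects the template stored in the model's GGUF
        // metadata; a negative result signals an unrecognized template.
        const int32_t res = llama_chat_apply_template(
            model, nullptr, chat, 1, true, buf, (int32_t) sizeof(buf));
        return res >= 0;
    }

    // Only inspect the model template when no custom template was supplied;
    // fall back to chatml when the probe fails.
    static std::string pick_chat_template(const llama_model * model, const std::string & custom_tmpl) {
        if (!custom_tmpl.empty()) {
            return custom_tmpl; // user override wins, no model check needed
        }
        if (!model_template_supported(model)) {
            fprintf(stderr, "warn: unknown model template, falling back to chatml\n");
            return "chatml";
        }
        return ""; // empty: keep using the model's own template
    }

This mirrors the bullets above: a custom template short-circuits the model check, and only an unrecognized model template triggers the ChatML fallback.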
File                        | Last commit                                                                  | Date
.gitignore                  | tests : .gitignore obj files                                                 | 2024-02-08 09:46:47 +02:00
CMakeLists.txt              | llama : add llama_chat_apply_template() (#5538)                              | 2024-02-19 10:23:37 +02:00
get-model.cpp               | ci : add model tests + script wrapper (#4586)                                | 2024-01-26 14:18:00 +02:00
get-model.h                 | ci : add model tests + script wrapper (#4586)                                | 2024-01-26 14:18:00 +02:00
test-autorelease.cpp        | ggml : add numa options (#5377)                                              | 2024-02-16 11:31:07 +02:00
test-backend-ops.cpp        | IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590)                    | 2024-02-21 11:39:52 +02:00
test-c.c                    | Nomic Vulkan backend (#4456)                                                 | 2024-01-29 15:50:50 -05:00
test-chat-template.cpp      | server : fallback to chatml, add AlphaMonarch chat template (#5628)          | 2024-02-22 10:33:24 +02:00
test-double-float.cpp       | ggml : move FP16 <-> FP32 code to ggml-impl.h (#3861)                        | 2023-10-30 19:19:15 +02:00
test-grad0.cpp              | cuda : improve cuda pool efficiency using virtual memory (#4606)             | 2023-12-24 14:34:22 +01:00
test-grammar-parser.cpp     | ggml, common, examples, tests : fixed type arguments in printf (#5528)       | 2024-02-18 18:20:12 +02:00
test-llama-grammar.cpp      | ggml, common, examples, tests : fixed type arguments in printf (#5528)       | 2024-02-18 18:20:12 +02:00
test-model-load-cancel.cpp  | ggml : add numa options (#5377)                                              | 2024-02-16 11:31:07 +02:00
test-opt.cpp                | sync : ggml (backend v2) (#3912)                                             | 2023-11-13 14:16:23 +02:00
test-quantize-fns.cpp       | ggml : add mmla kernels for quantized GEMM (#4966)                           | 2024-02-11 15:22:33 +02:00
test-quantize-perf.cpp      | ggml : add mmla kernels for quantized GEMM (#4966)                           | 2024-02-11 15:22:33 +02:00
test-rope.cpp               | llama : custom attention mask + parallel decoding + no context swaps (#3228) | 2023-09-28 19:04:36 +03:00
test-sampling.cpp           | sampling: fix top_k <= 0 (#5388)                                             | 2024-02-08 09:46:30 +01:00
test-tokenizer-0-falcon.cpp | ggml : add numa options (#5377)                                              | 2024-02-16 11:31:07 +02:00
test-tokenizer-0-falcon.py  | ci : add flake8 to github actions (python linting) (#4129)                   | 2023-11-20 11:35:47 +01:00
test-tokenizer-0-llama.cpp  | ggml : add numa options (#5377)                                              | 2024-02-16 11:31:07 +02:00
test-tokenizer-0-llama.py   | ci : add flake8 to github actions (python linting) (#4129)                   | 2023-11-20 11:35:47 +01:00
test-tokenizer-1-bpe.cpp    | ggml : add numa options (#5377)                                              | 2024-02-16 11:31:07 +02:00
test-tokenizer-1-llama.cpp  | ggml : add numa options (#5377)                                              | 2024-02-16 11:31:07 +02:00
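
Of the files above, test-chat-template.cpp (with llama_chat_apply_template() from #5538 wired up through CMakeLists.txt) drives the template engine directly from Jinja source strings, with AlphaMonarch's template now among the cases. A standalone sketch of that usage, assuming llama.cpp recognizes templates by substring matching on the source rather than by evaluating Jinja; the conversation content here is illustrative:

    #include <cassert>
    #include <cstdio>
    #include <vector>
    #include "llama.h"

    int main() {
        // A small conversation to format.
        const llama_chat_message conversation[] = {
            {"system",    "You are a helpful assistant"},
            {"user",      "Hello"},
            {"assistant", "Hi there"},
            {"user",      "Who are you"},
        };
        const size_t n_msg = sizeof(conversation) / sizeof(conversation[0]);

        // ChatML's Jinja source; passing model == nullptr applies this template
        // string directly instead of reading one from the model's metadata.
        const char * chatml = "{% for message in messages %}"
            "{{'<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>' + '\\n'}}"
            "{% endfor %}";

        std::vector<char> buf(1024);
        const int32_t res = llama_chat_apply_template(nullptr, chatml, conversation, n_msg,
                                                      true, buf.data(), (int32_t) buf.size());
        // res is the formatted length; negative on error, and it may exceed
        // the buffer size if the buffer was too small.
        assert(res >= 0 && res <= (int32_t) buf.size());
        printf("%.*s", res, buf.data());
        return 0;
    }

Because recognition works on the template source (e.g. the <|im_start|> marker for ChatML), one such call with a dummy message is also all the server's fallback probe needs.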