llama.cpp/tests
Latest commit 007489e895 by Tristan Druyen: Fix phi3 chat template confusion with zephyr (#7449)
* Fix phi3 template matching vs zephyr

* Add regression test for new phi3 chat template

* Implement review suggestions

* Fix phi3 jinja test templates & match by <|end|>

* Apply suggestion

Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>

* Add all phi3 template variants in tests

* Remove unneeded message trimming

Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>

* Fix tests to not expect trimmed messages

---------

Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
2024-05-23 16:15:15 +02:00
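The confusion fixed in the commit above arose because phi3 and zephyr chat templates both use `<|user|>`/`<|assistant|>` role tags; the fix instead matches phi3 by its distinctive `<|end|>` turn terminator. A minimal sketch of that idea (the function name and template strings are illustrative assumptions, not llama.cpp's actual `llama_chat_apply_template` code):

```python
def detect_chat_template(jinja_template: str) -> str:
    """Guess the template family from a model's Jinja chat template string."""
    if "<|end|>" in jinja_template:
        # phi3 terminates every turn with <|end|>
        return "phi3"
    if "<|user|>" in jinja_template:
        # zephyr also uses <|user|>/<|assistant|> role tags,
        # but terminates turns with </s> rather than <|end|>
        return "zephyr"
    return "unknown"

# Simplified example templates (assumed shapes, for illustration only)
phi3_tmpl = "<|user|>\n{{ content }}<|end|>\n<|assistant|>\n"
zephyr_tmpl = "<|user|>\n{{ content }}</s>\n<|assistant|>\n"
print(detect_chat_template(phi3_tmpl))    # phi3
print(detect_chat_template(zephyr_tmpl))  # zephyr
```

Matching on the terminator token rather than the role tags is what lets the two families be told apart, which is the regression covered in test-chat-template.cpp below.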
| File | Last commit | Date |
|------|-------------|------|
| .gitignore | tests : gitignore ggml-common.h | 2024-03-09 14:17:11 +02:00 |
| CMakeLists.txt | llama : lookup word in vocab before doing BPE merges (#7193) | 2024-05-11 11:12:06 +03:00 |
| get-model.cpp | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00 |
| get-model.h | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00 |
| run-json-schema-to-grammar.mjs | json-schema-to-grammar improvements (+ added to server) (#5978) | 2024-03-21 11:50:43 +00:00 |
| test-autorelease.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| test-backend-ops.cpp | cuda : fix rope + add tests (#7452) | 2024-05-22 11:01:35 +03:00 |
| test-c.c | Nomic Vulkan backend (#4456) | 2024-01-29 15:50:50 -05:00 |
| test-chat-template.cpp | Fix phi3 chat template confusion with zephyr (#7449) | 2024-05-23 16:15:15 +02:00 |
| test-double-float.cpp | ggml : move FP16 <-> FP32 code to ggml-impl.h (#3861) | 2023-10-30 19:19:15 +02:00 |
| test-grad0.cpp | ggml : remove ggml_flash_attn and ggml_flash_ff (#7463) | 2024-05-23 10:00:44 +03:00 |
| test-grammar-integration.cpp | Add left recursion check: quit early instead of going into an infinite loop (#7083) | 2024-05-14 15:25:56 +10:00 |
| test-grammar-parser.cpp | ggml, common, examples, tests : fixed type arguments in printf (#5528) | 2024-02-18 18:20:12 +02:00 |
| test-json-schema-to-grammar.cpp | JSON: [key] -> .at(key), assert() -> GGML_ASSERT (#7143) | 2024-05-08 21:53:08 +02:00 |
| test-llama-grammar.cpp | ggml, common, examples, tests : fixed type arguments in printf (#5528) | 2024-02-18 18:20:12 +02:00 |
| test-model-load-cancel.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| test-opt.cpp | code : normalize enum names (#5697) | 2024-02-25 12:09:09 +02:00 |
| test-quantize-fns.cpp | tests : include IQ2_XXS and IQ2_XS in test-quantize-fns (#6303) | 2024-03-25 19:33:15 +02:00 |
| test-quantize-perf.cpp | ggml : add mmla kernels for quantized GEMM (#4966) | 2024-02-11 15:22:33 +02:00 |
| test-rope.cpp | llama : custom attention mask + parallel decoding + no context swaps (#3228) | 2023-09-28 19:04:36 +03:00 |
| test-sampling.cpp | sampling: fix top_k <= 0 (#5388) | 2024-02-08 09:46:30 +01:00 |
| test-tokenizer-0.cpp | tests : add test-tokenizer-0.sh + fix some tokenizers (#7036) | 2024-05-04 08:32:32 +03:00 |
| test-tokenizer-0.py | py : logging and flake8 suppression refactoring (#7081) | 2024-05-05 08:07:48 +03:00 |
| test-tokenizer-0.sh | tests : test-tokenizer-0.sh print more info (#7402) | 2024-05-21 19:53:48 +03:00 |
| test-tokenizer-1-bpe.cpp | llama : lookup word in vocab before doing BPE merges (#7193) | 2024-05-11 11:12:06 +03:00 |
| test-tokenizer-1-spm.cpp | llama : fix BPE pre-tokenization (#6920) | 2024-04-29 16:58:41 +03:00 |
| test-tokenizer-random.py | Tokenizer SPM fixes for phi-3 and llama-spm (bugfix) (#7425) | 2024-05-21 14:39:48 +02:00 |