| File | Last commit | Date |
| --- | --- | --- |
| .gitignore | tests : gitignore ggml-common.h | 2024-03-09 14:17:11 +02:00 |
| CMakeLists.txt | ggml : fix loongson compile warnings (#7537) | 2024-05-31 14:17:10 +03:00 |
| get-model.cpp | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00 |
| get-model.h | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00 |
| run-json-schema-to-grammar.mjs | json-schema-to-grammar improvements (+ added to server) (#5978) | 2024-03-21 11:50:43 +00:00 |
| test-autorelease.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| test-backend-ops.cpp | Fix FlashAttention debug test, FP32 assert (#7684) | 2024-06-01 23:26:10 +02:00 |
| test-c.c | Nomic Vulkan backend (#4456) | 2024-01-29 15:50:50 -05:00 |
| test-chat-template.cpp | Fix phi3 chat template confusion with zephyr (#7449) | 2024-05-23 16:15:15 +02:00 |
| test-double-float.cpp | ggml : move FP16 <-> FP32 code to ggml-impl.h (#3861) | 2023-10-30 19:19:15 +02:00 |
| test-grad0.cpp | ggml : remove ggml_flash_attn and ggml_flash_ff (#7463) | 2024-05-23 10:00:44 +03:00 |
| test-grammar-integration.cpp | Add left recursion check: quit early instead of going into an infinite loop (#7083) | 2024-05-14 15:25:56 +10:00 |
| test-grammar-parser.cpp | ggml, common, examples, tests : fixed type arguments in printf (#5528) | 2024-02-18 18:20:12 +02:00 |
| test-json-schema-to-grammar.cpp | JSON: [key] -> .at(key), assert() -> GGML_ASSERT (#7143) | 2024-05-08 21:53:08 +02:00 |
| test-llama-grammar.cpp | ggml, common, examples, tests : fixed type arguments in printf (#5528) | 2024-02-18 18:20:12 +02:00 |
| test-model-load-cancel.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| test-opt.cpp | code : normalize enum names (#5697) | 2024-02-25 12:09:09 +02:00 |
| test-quantize-fns.cpp | tests : include IQ2_XXS and IQ2_XS in test-quantize-fns (#6303) | 2024-03-25 19:33:15 +02:00 |
| test-quantize-perf.cpp | ggml : add mmla kernels for quantized GEMM (#4966) | 2024-02-11 15:22:33 +02:00 |
| test-rope.cpp | llama : custom attention mask + parallel decoding + no context swaps (#3228) | 2023-09-28 19:04:36 +03:00 |
| test-sampling.cpp | sampling: fix top_k <= 0 (#5388) | 2024-02-08 09:46:30 +01:00 |
| test-tokenizer-0.cpp | tests : add test-tokenizer-0.sh + fix some tokenizers (#7036) | 2024-05-04 08:32:32 +03:00 |
| test-tokenizer-0.py | py : logging and flake8 suppression refactoring (#7081) | 2024-05-05 08:07:48 +03:00 |
| test-tokenizer-0.sh | tests : fix test-tokenizer-0.sh | 2024-05-28 15:04:09 +03:00 |
| test-tokenizer-1-bpe.cpp | llama : lookup word in vocab before doing BPE merges (#7193) | 2024-05-11 11:12:06 +03:00 |
| test-tokenizer-1-spm.cpp | llama : fix BPE pre-tokenization (#6920) | 2024-04-29 16:58:41 +03:00 |
| test-tokenizer-random.py | Tokenizer WPM fixes (#7500) | 2024-05-28 21:46:34 +02:00 |