| File | Last commit message | Last commit date |
| --- | --- | --- |
| CMakeLists.txt | llama : Add test for model load cancellation | 2023-12-14 04:47:54 -05:00 |
| test-backend-ops.cpp | sync : ggml (SD ops, tests, kernels) (#4444) | 2023-12-13 21:54:54 +02:00 |
| test-c.c | tests : add a C compliance test (#2848) | 2023-08-30 09:20:26 +03:00 |
| test-double-float.cpp | ggml : move FP16 <-> FP32 code to ggml-impl.h (#3861) | 2023-10-30 19:19:15 +02:00 |
| test-grad0.cpp | english : use typos to fix comments and logs (#4354) | 2023-12-12 11:53:36 +02:00 |
| test-grammar-parser.cpp | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| test-llama-grammar.cpp | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| test-model-load-cancel.cpp | Fix bool return in llama_model_load, remove std::ignore use | 2023-12-14 16:29:05 -05:00 |
| test-opt.cpp | sync : ggml (backend v2) (#3912) | 2023-11-13 14:16:23 +02:00 |
| test-quantize-fns.cpp | ggml : move FP16 <-> FP32 code to ggml-impl.h (#3861) | 2023-10-30 19:19:15 +02:00 |
| test-quantize-perf.cpp | english : use typos to fix comments and logs (#4354) | 2023-12-12 11:53:36 +02:00 |
| test-rope.cpp | llama : custom attention mask + parallel decoding + no context swaps (#3228) | 2023-09-28 19:04:36 +03:00 |
| test-sampling.cpp | sampling : refactor init to use llama_sampling_params (#3696) | 2023-10-20 21:07:23 +03:00 |
| test-tokenizer-0-falcon.cpp | Minor improvements in GPT2 tokenizer (#3567) | 2023-10-10 18:59:52 +02:00 |
| test-tokenizer-0-falcon.py | ci : add flake8 to github actions (python linting) (#4129) | 2023-11-20 11:35:47 +01:00 |
| test-tokenizer-0-llama.cpp | Minor improvements in GPT2 tokenizer (#3567) | 2023-10-10 18:59:52 +02:00 |
| test-tokenizer-0-llama.py | ci : add flake8 to github actions (python linting) (#4129) | 2023-11-20 11:35:47 +01:00 |
| test-tokenizer-1-bpe.cpp | Add more tokenizer tests (#3742) | 2023-10-24 09:17:17 +02:00 |
| test-tokenizer-1-llama.cpp | Work on the BPE tokenizer (#3252) | 2023-10-03 09:16:26 +02:00 |