| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| .gitignore | tests : gitignore ggml-common.h | 2024-03-09 14:17:11 +02:00 |
| CMakeLists.txt | common : refactor arg parser (#9308) | 2024-09-07 20:43:51 +02:00 |
| get-model.cpp | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00 |
| get-model.h | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00 |
| run-json-schema-to-grammar.mjs | json-schema-to-grammar improvements (+ added to server) (#5978) | 2024-03-21 11:50:43 +00:00 |
| test-arg-parser.cpp | common : refactor arg parser (#9308) | 2024-09-07 20:43:51 +02:00 |
| test-autorelease.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| test-backend-ops.cpp | tests: add gradient tests for all backends (ggml/932) | 2024-09-08 11:05:55 +03:00 |
| test-c.c | Nomic Vulkan backend (#4456) | 2024-01-29 15:50:50 -05:00 |
| test-chat-template.cpp | tests : fix printfs (#8068) | 2024-07-25 18:58:04 +03:00 |
| test-double-float.cpp | ggml : minor naming changes (#8433) | 2024-07-12 10:46:02 +03:00 |
| test-grad0.cpp | sync : ggml | 2024-08-27 22:41:27 +03:00 |
| test-grammar-integration.cpp | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00 |
| test-grammar-parser.cpp | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00 |
| test-json-schema-to-grammar.cpp | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00 |
| test-llama-grammar.cpp | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00 |
| test-lora-conversion-inference.sh | lora : fix llama conversion script with ROPE_FREQS (#9117) | 2024-08-23 12:58:53 +02:00 |
| test-model-load-cancel.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| test-opt.cpp | code : normalize enum names (#5697) | 2024-02-25 12:09:09 +02:00 |
| test-quantize-fns.cpp | ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151) | 2024-09-05 21:48:47 -04:00 |
| test-quantize-perf.cpp | ggml : minor naming changes (#8433) | 2024-07-12 10:46:02 +03:00 |
| test-rope.cpp | Threadpool: take 2 (#8672) | 2024-08-30 01:20:53 +02:00 |
| test-sampling.cpp | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00 |
| test-tokenizer-0.cpp | llama : fix pre-tokenization of non-special added tokens (#8228) | 2024-07-13 23:35:10 -04:00 |
| test-tokenizer-0.py | py : logging and flake8 suppression refactoring (#7081) | 2024-05-05 08:07:48 +03:00 |
| test-tokenizer-0.sh | tests : fix test-tokenizer-0.sh | 2024-05-28 15:04:09 +03:00 |
| test-tokenizer-1-bpe.cpp | Detokenizer fixes (#8039) | 2024-07-05 19:01:35 +02:00 |
| test-tokenizer-1-spm.cpp | Detokenizer fixes (#8039) | 2024-07-05 19:01:35 +02:00 |
| test-tokenizer-random.py | llama : fix pre-tokenization of non-special added tokens (#8228) | 2024-07-13 23:35:10 -04:00 |