llama.cpp/tests

Latest commit: e0429d38e4 by Georgi Gerganov, 2023-08-17 17:19:52 +03:00
convert-new.py : output gguf (#2635)

* convert-new.py : output gguf (WIP)
* convert-new.py : add gguf key-value pairs
* llama : add hparams.ctx_train + no longer print ftype
* convert-new.py : minor fixes
* convert-new.py : vocab-only option should work now
* llama : fix tokenizer to use llama_char_to_byte
* tests : add new ggml-vocab-llama.gguf
* convert-new.py : tensor name mapping
* convert-new.py : add map for skipping tensor serialization
* convert-new.py : convert script now works
* gguf.py : pick some of the refactoring from #2644
* convert-new.py : minor fixes
File                     Last commit                                           Date
CMakeLists.txt           convert-new.py : output gguf (#2635)                  2023-08-17 17:19:52 +03:00
test-double-float.cpp    tests : Fix compilation warnings (Linux/GCC) (#2451)  2023-08-02 11:06:19 +03:00
test-grad0.cpp           tests : Fix compilation warnings (Linux/GCC) (#2451)  2023-08-02 11:06:19 +03:00
test-grammar-parser.cpp  test : add simple grammar parsing tests (#2594)       2023-08-13 17:00:48 +03:00
test-opt.cpp             tests : Fix compilation warnings (Linux/GCC) (#2451)  2023-08-02 11:06:19 +03:00
test-quantize-fns.cpp    ggml : generalize quantize_fns for simpler FP16 handling (#1237)  2023-07-05 19:13:06 +03:00
test-quantize-perf.cpp   ggml : generalize quantize_fns for simpler FP16 handling (#1237)  2023-07-05 19:13:06 +03:00
test-sampling.cpp        ci : integrate with ggml-org/ci (#2250)               2023-07-18 14:24:43 +03:00
test-tokenizer-0.cpp     convert-new.py : output gguf (#2635)                  2023-08-17 17:19:52 +03:00
test-tokenizer-1.cpp     llama : sync gguf-llama with llama (#2613)            2023-08-14 21:33:33 +03:00