llama.cpp/tests
TheNotary 0e41441fa1 moves ggml-vocab.bin into test folder where it's used.
It appears this file is only used during tests at the moment.
Removing it from the models folder gives users more flexibility
in how they load their model data into the project
(e.g. Docker bind mounts, symlinks, or downloading models
directly into the folder; a bind-mount sketch follows this
commit entry).

By moving this file, the instructions for getting started can be
safely simplified to:

$  rm models/.gitkeep
$  rm -r models
$  ln -s /mnt/c/ai/models/LLaMA $(pwd)/models

I think it's a good idea because the model files are quite large and
useful across multiple projects, so symlinks shine in this use case
without creating too much confusion for the onboardee.
2023-04-26 16:20:42 -05:00
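For the Docker route mentioned above, a bind mount achieves the same result as the symlink. This is only a sketch under assumed names: the image tag llama-cpp, the in-container path /llama.cpp, and the model file models/7B/ggml-model-q4_0.bin are placeholders, not anything this repo ships.

# Mount the host model folder over the container's models/ directory,
# then run inference against a model that only exists on the host.
$  docker run --rm -it \
     -v /mnt/c/ai/models/LLaMA:/llama.cpp/models \
     llama-cpp ./main -m models/7B/ggml-model-q4_0.bin -p "Hello"

Either approach keeps large model files out of the working tree, which is the point of dropping ggml-vocab.bin from the models folder.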
CMakeLists.txt moves ggml-vocab.bin into test folder where it's used. 2023-04-26 16:20:42 -05:00
ggml-vocab.bin moves ggml-vocab.bin into test folder where it's used. 2023-04-26 16:20:42 -05:00
test-double-float.c all : be more strict about converting float to double (#458) 2023-03-28 19:48:20 +03:00
test-quantize-fns.cpp ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179) 2023-04-25 23:40:51 +03:00
test-quantize-perf.cpp Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122) 2023-04-22 10:54:13 +00:00
test-tokenizer-0.cpp llama : well-defined static initialization of complex objects (#927) 2023-04-17 17:41:53 +03:00