llama.cpp/tests
Daniel Bevenius e6bf007744
llama : return nullptr from llama_grammar_init (#8093)
* llama : return nullptr from llama_grammar_init

This commit updates llama_grammar_init to return nullptr instead of
throwing an exception.

The motivation for this is that the function is declared inside an
extern "C" block and may be called from C code, which cannot handle
C++ exceptions; letting one propagate across that boundary results in
undefined behavior.

On Windows, building with MSVC currently generates the following warning:
```console
C:\llama.cpp\llama.cpp(13998,1): warning C4297: 'llama_grammar_init':
function assumed not to throw an exception but does
C:\llama.cpp\llama.cpp(13998,1): message :
__declspec(nothrow), throw(), noexcept(true), or noexcept was specified
on the function
```

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>

* squash! llama : return nullptr from llama_grammar_init

Add checks for nullptr when calling llama_grammar_init.

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>

---------

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
Co-authored-by: Clint Herron <hanclinto@gmail.com>
2024-06-25 15:07:28 -04:00
.gitignore tests : gitignore ggml-common.h 2024-03-09 14:17:11 +02:00
CMakeLists.txt ggml : fix loongson compile warnings (#7537) 2024-05-31 14:17:10 +03:00
get-model.cpp ci : add model tests + script wrapper (#4586) 2024-01-26 14:18:00 +02:00
get-model.h ci : add model tests + script wrapper (#4586) 2024-01-26 14:18:00 +02:00
run-json-schema-to-grammar.mjs json-schema-to-grammar improvements (+ added to server) (#5978) 2024-03-21 11:50:43 +00:00
test-autorelease.cpp ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
test-backend-ops.cpp fix CI failures (#8066) 2024-06-23 13:14:45 +02:00
test-c.c Nomic Vulkan backend (#4456) 2024-01-29 15:50:50 -05:00
test-chat-template.cpp Add chat template support for llama-cli (#8068) 2024-06-25 21:56:49 +10:00
test-double-float.cpp ggml : move FP16 <-> FP32 code to ggml-impl.h (#3861) 2023-10-30 19:19:15 +02:00
test-grad0.cpp ggml : refactor rope norm/neox (#7634) 2024-06-05 11:29:20 +03:00
test-grammar-integration.cpp llama : return nullptr from llama_grammar_init (#8093) 2024-06-25 15:07:28 -04:00
test-grammar-parser.cpp grammars: x{min,max} repetition operator (#6640) 2024-06-06 10:07:06 +01:00
test-json-schema-to-grammar.cpp json: support integer minimum, maximum, exclusiveMinimum, exclusiveMaximum (#7797) 2024-06-25 20:06:20 +01:00
test-llama-grammar.cpp llama : return nullptr from llama_grammar_init (#8093) 2024-06-25 15:07:28 -04:00
test-model-load-cancel.cpp ggml : add numa options (#5377) 2024-02-16 11:31:07 +02:00
test-opt.cpp code : normalize enum names (#5697) 2024-02-25 12:09:09 +02:00
test-quantize-fns.cpp tests : include IQ2_XXS and IQ2_XS in test-quantize-fns (#6303) 2024-03-25 19:33:15 +02:00
test-quantize-perf.cpp ggml : add mmla kernels for quantized GEMM (#4966) 2024-02-11 15:22:33 +02:00
test-rope.cpp ggml : refactor rope norm/neox (#7634) 2024-06-05 11:29:20 +03:00
test-sampling.cpp sampling: fix top_k <= 0 (#5388) 2024-02-08 09:46:30 +01:00
test-tokenizer-0.cpp tests : add test-tokenizer-0.sh + fix some tokenizers (#7036) 2024-05-04 08:32:32 +03:00
test-tokenizer-0.py py : logging and flake8 suppression refactoring (#7081) 2024-05-05 08:07:48 +03:00
test-tokenizer-0.sh tests : fix test-tokenizer-0.sh 2024-05-28 15:04:09 +03:00
test-tokenizer-1-bpe.cpp llama : lookup word in vocab before doing BPE merges (#7193) 2024-05-11 11:12:06 +03:00
test-tokenizer-1-spm.cpp llama : fix BPE pre-tokenization (#6920) 2024-04-29 16:58:41 +03:00
test-tokenizer-random.py tokenizer : BPE fixes (#7530) 2024-06-18 18:40:52 +02:00