llama.cpp/src
Latest commit: d6fe7abf04 by bandoti, 2024-10-03 17:39:03 +02:00
ggml: unify backend logging mechanism (#9709)

* Add scaffolding for ggml logging macros
* Metal backend now uses GGML logging
* CUDA backend now uses GGML logging
* CANN backend now uses GGML logging
* Add enum tag to parameters
* Use C memory allocation funcs
* Fix compile error
* Use GGML_LOG instead of GGML_PRINT
* Rename llama_state to llama_logger_state
* Prevent null format string
* Fix whitespace
* Remove log callbacks from ggml backends
* Remove CUDA log statement
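The bullets above describe a single logging layer that all backends share: one callback registration point, level-tagged macros instead of raw `GGML_PRINT`, and a guard against a null format string. The sketch below illustrates how such a layer can be structured. The names mirror ggml's public API (`ggml_log_set`, `GGML_LOG_INFO`, `ggml_log_level`), but the body is a minimal illustrative sketch, not the actual implementation; the buffer size and fallback behavior are assumptions.

```c
// Minimal sketch of a unified backend logging layer in the style of #9709.
#include <stdarg.h>
#include <stdio.h>

enum ggml_log_level {
    GGML_LOG_LEVEL_NONE  = 0,
    GGML_LOG_LEVEL_DEBUG = 1,
    GGML_LOG_LEVEL_INFO  = 2,
    GGML_LOG_LEVEL_WARN  = 3,
    GGML_LOG_LEVEL_ERROR = 4,
    GGML_LOG_LEVEL_CONT  = 5, // continue the previous log entry (cf. #9610)
};

typedef void (*ggml_log_callback)(enum ggml_log_level level, const char * text, void * user_data);

// default sink: write everything to stderr
static void ggml_log_callback_default(enum ggml_log_level level, const char * text, void * user_data) {
    (void) level; (void) user_data;
    fputs(text, stderr);
    fflush(stderr);
}

// single logger state shared by all backends (hypothetical internal layout)
static struct {
    ggml_log_callback callback;
    void *            user_data;
} g_logger = { ggml_log_callback_default, NULL };

// one registration point instead of per-backend log callbacks
void ggml_log_set(ggml_log_callback callback, void * user_data) {
    g_logger.callback  = callback ? callback : ggml_log_callback_default;
    g_logger.user_data = user_data;
}

static void ggml_log_internal(enum ggml_log_level level, const char * fmt, ...) {
    if (fmt == NULL) { // "Prevent null format string"
        return;
    }
    char buf[1024];
    va_list args;
    va_start(args, fmt);
    vsnprintf(buf, sizeof(buf), fmt, args);
    va_end(args);
    g_logger.callback(level, buf, g_logger.user_data);
}

// the macros backends use instead of GGML_PRINT
#define GGML_LOG_DEBUG(...) ggml_log_internal(GGML_LOG_LEVEL_DEBUG, __VA_ARGS__)
#define GGML_LOG_INFO(...)  ggml_log_internal(GGML_LOG_LEVEL_INFO , __VA_ARGS__)
#define GGML_LOG_WARN(...)  ggml_log_internal(GGML_LOG_LEVEL_WARN , __VA_ARGS__)
#define GGML_LOG_ERROR(...) ggml_log_internal(GGML_LOG_LEVEL_ERROR, __VA_ARGS__)
```

With this shape, an application routes Metal, CUDA, and CANN output through one sink by calling `ggml_log_set(my_sink, my_state)` once, rather than wiring a callback into each backend separately.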
| File | Last commit | Date |
|------|-------------|------|
| CMakeLists.txt | llama : move vocab, grammar and sampling into separate files (#8508) | 2024-07-23 13:10:17 +03:00 |
| llama-grammar.cpp | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00 |
| llama-grammar.h | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00 |
| llama-impl.h | log : add CONT level for continuing previous log entry (#9610) | 2024-09-24 10:15:35 +03:00 |
| llama-sampling.cpp | sampling : avoid expensive softmax during greedy sampling (#9605) | 2024-09-24 09:03:17 +03:00 |
| llama-sampling.h | llama : refactor samplers internal implementation (#9370) | 2024-09-08 15:52:07 +02:00 |
| llama-vocab.cpp | llama : add reranking support (#9510) | 2024-09-28 17:42:03 +03:00 |
| llama-vocab.h | vocab : refactor tokenizer to reduce init overhead (#9449) | 2024-09-28 15:10:58 +03:00 |
| llama.cpp | ggml: unify backend logging mechanism (#9709) | 2024-10-03 17:39:03 +02:00 |
| unicode-data.cpp | llama : reduce compile time and binary size (#9712) | 2024-10-02 15:49:55 +02:00 |
| unicode-data.h | llama : reduce compile time and binary size (#9712) | 2024-10-02 15:49:55 +02:00 |
| unicode.cpp | llama : reduce compile time and binary size (#9712) | 2024-10-02 15:49:55 +02:00 |
| unicode.h | llama : move vocab, grammar and sampling into separate files (#8508) | 2024-07-23 13:10:17 +03:00 |