Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-11-14 23:09:53 +00:00)
Commit d6fe7abf04

* Add scaffolding for ggml logging macros
* Metal backend now uses GGML logging
* Cuda backend now uses GGML logging
* Cann backend now uses GGML logging
* Add enum tag to parameters
* Use C memory allocation funcs
* Fix compile error
* Use GGML_LOG instead of GGML_PRINT
* Rename llama_state to llama_logger_state
* Prevent null format string
* Fix whitespace
* Remove log callbacks from ggml backends
* Remove cuda log statement
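The commit message above outlines the shape of the new logging layer: variadic GGML_LOG_* macros that format a message and hand it, together with an enum level tag, to a user-settable callback, so the Metal/CUDA/CANN backends no longer print directly or keep their own per-backend callbacks. The sketch below illustrates that pattern only; the enum values, the internal helper name `ggml_log_internal`, and the fixed-size buffer are assumptions for illustration, not the exact code in ggml-impl.h.

```c
#include <stdarg.h>
#include <stddef.h>
#include <stdio.h>

// log severity, passed to the callback as an enum tag (values assumed for this sketch)
enum ggml_log_level {
    GGML_LOG_LEVEL_DEBUG,
    GGML_LOG_LEVEL_INFO,
    GGML_LOG_LEVEL_WARN,
    GGML_LOG_LEVEL_ERROR,
};

// user-installable callback: receives the level, the formatted text, and opaque user data
typedef void (*ggml_log_callback)(enum ggml_log_level level, const char * text, void * user_data);

// default sink: print everything to stderr
static void ggml_log_callback_default(enum ggml_log_level level, const char * text, void * user_data) {
    (void) level; (void) user_data;
    fputs(text, stderr);
    fflush(stderr);
}

static ggml_log_callback g_log_callback  = ggml_log_callback_default;
static void *            g_log_user_data = NULL;

// hypothetical internal helper: format into a buffer, then forward to the callback
static void ggml_log_internal(enum ggml_log_level level, const char * fmt, ...) {
    if (fmt == NULL) {
        fmt = ""; // guard against a null format string, as the commit message mentions
    }
    char buf[1024];
    va_list args;
    va_start(args, fmt);
    vsnprintf(buf, sizeof(buf), fmt, args);
    va_end(args);
    g_log_callback(level, buf, g_log_user_data);
}

// the macro scaffolding used in place of direct GGML_PRINT / fprintf calls
#define GGML_LOG_DEBUG(...) ggml_log_internal(GGML_LOG_LEVEL_DEBUG, __VA_ARGS__)
#define GGML_LOG_INFO(...)  ggml_log_internal(GGML_LOG_LEVEL_INFO,  __VA_ARGS__)
#define GGML_LOG_WARN(...)  ggml_log_internal(GGML_LOG_LEVEL_WARN,  __VA_ARGS__)
#define GGML_LOG_ERROR(...) ggml_log_internal(GGML_LOG_LEVEL_ERROR, __VA_ARGS__)

int main(void) {
    // a backend would call these macros instead of printing directly
    GGML_LOG_INFO("%s: initializing backend\n", __func__);
    GGML_LOG_ERROR("%s: failed to allocate %d bytes\n", __func__, 1024);
    return 0;
}
```

On the llama.cpp side, per the commit message, the callback/user-data pair formerly held in `llama_state` now lives in a struct renamed to `llama_logger_state`; applications would install their own callback through the public `llama_log_set` API, with the details above being a sketch rather than the repository's exact implementation.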
Files in this tree:

CMakeLists.txt
llama-grammar.cpp
llama-grammar.h
llama-impl.h
llama-sampling.cpp
llama-sampling.h
llama-vocab.cpp
llama-vocab.h
llama.cpp
unicode-data.cpp
unicode-data.h
unicode.cpp
unicode.h