llama.cpp/src
Latest commit: 30caac3a68 by Georgi Gerganov, "llama : the WPM vocabs use the CLS token as BOS (#10930)" (2024-12-24 09:44:20 +02:00)
File                  Last commit                                                       Date
CMakeLists.txt        remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797)                  2024-12-12 19:02:49 +01:00
llama-grammar.cpp     llama : minor grammar refactor (#10897)                           2024-12-19 17:42:13 +02:00
llama-grammar.h       llama : minor grammar refactor (#10897)                           2024-12-19 17:42:13 +02:00
llama-impl.h          log : add CONT level for continuing previous log entry (#9610)    2024-09-24 10:15:35 +03:00
llama-sampling.cpp    sampling : refactor + optimize penalties sampler (#10803)         2024-12-16 12:31:14 +02:00
llama-sampling.h      llama : add DRY sampler (#9702)                                   2024-10-25 19:07:34 +03:00
llama-vocab.cpp       llama : the WPM vocabs use the CLS token as BOS (#10930)          2024-12-24 09:44:20 +02:00
llama-vocab.h         llama : the WPM vocabs use the CLS token as BOS (#10930)          2024-12-24 09:44:20 +02:00
llama.cpp             llama : support InfiniAI Megrez 3b (#10893)                       2024-12-23 01:35:44 +01:00
unicode-data.cpp      server : better security control for public deployments (#9776)   2024-10-08 13:27:04 +02:00
unicode-data.h        llama : reduce compile time and binary size (#9712)               2024-10-02 15:49:55 +02:00
unicode.cpp           unicode : improve naming style (#10838)                           2024-12-16 12:31:45 +02:00
unicode.h             unicode : improve naming style (#10838)                           2024-12-16 12:31:45 +02:00