llama.cpp/include
Latest commit: 642330ac7c by Xuan Son Nguyen, 2024-12-02 22:10:19 +01:00
llama : add enum for built-in chat templates (#10623)

* llama : add enum for supported chat templates
* use "built-in" instead of "supported"
* arg: print list of built-in templates
* fix test
* update server README
llama-cpp.h   Introduce llama-run (#10291)                            2024-11-25 22:56:24 +01:00
llama.h       llama : add enum for built-in chat templates (#10623)   2024-12-02 22:10:19 +01:00