Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-28 12:24:35 +00:00)
Commit 5cb04dbc16

* llama : remove LLAMA_MAX_DEVICES from llama.h (ggml-ci)
* Update llama.cpp (Co-authored-by: slaren <slarengh@gmail.com>)
* server : remove LLAMA_MAX_DEVICES (ggml-ci)
* llama : remove LLAMA_SUPPORTS_GPU_OFFLOAD (ggml-ci)
* train : remove LLAMA_SUPPORTS_GPU_OFFLOAD
* readme : add deprecation notice
* readme : change deprecation notice to "remove" and fix url
* llama : remove gpu includes from llama.h (ggml-ci)

Co-authored-by: slaren <slarengh@gmail.com>
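The bullets above remove the compile-time `LLAMA_MAX_DEVICES` and `LLAMA_SUPPORTS_GPU_OFFLOAD` macros from `llama.h`. A minimal sketch of how a caller might obtain the same information afterwards, assuming the runtime helpers `llama_max_devices()` and `llama_supports_gpu_offload()` exposed by `llama.h` are the replacement:

```cpp
// Sketch only: query device limits at runtime instead of relying on the
// removed compile-time macros. Assumes llama_max_devices() and
// llama_supports_gpu_offload() from llama.h.
#include <cstdio>
#include "llama.h"

int main() {
    // Previously compile-time constants: LLAMA_MAX_DEVICES,
    // LLAMA_SUPPORTS_GPU_OFFLOAD. Now queried from the library at runtime.
    const size_t max_devices = llama_max_devices();
    const bool   gpu_offload = llama_supports_gpu_offload();

    std::printf("max devices: %zu, GPU offload supported: %s\n",
                max_devices, gpu_offload ? "yes" : "no");
    return 0;
}
```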
Files:

base64.hpp
build-info.cpp.in
CMakeLists.txt
common.cpp
common.h
console.cpp
console.h
grammar-parser.cpp
grammar-parser.h
log.h
sampling.cpp
sampling.h
stb_image.h
train.cpp
train.h