llama.cpp/ggml
Latest commit 10bce0450f by Diego Devesa (2024-11-25 19:30:06 +01:00):

llama : accept a list of devices to use to offload a model (#10497)

* llama : accept a list of devices to use to offload a model
* accept `--dev none` to completely disable offloading
* fix dev list with dl backends
* rename env parameter to LLAMA_ARG_DEVICE for consistency
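Per the commit above, offload targets are now given as a comma-separated device list (with `--dev none` disabling offloading). A minimal sketch of how such a list splits into individual device names; the device names below are illustrative, not actual backend identifiers:

```shell
# Comma-separated device list, as the new --dev flag accepts
# (device names here are placeholders):
devices="CUDA0,CUDA1,CPU"

# Split on commas into an array of individual offload targets:
IFS=',' read -r -a dev_array <<< "$devices"

for d in "${dev_array[@]}"; do
  echo "offload target: $d"
done
```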
include         ggml : add support for dynamic loading of backends (#10469)                     2024-11-25 15:13:39 +01:00
src             llama : accept a list of devices to use to offload a model (#10497)             2024-11-25 19:30:06 +01:00
.gitignore      vulkan : cmake integration (#8119)                                              2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : add support for dynamic loading of backends (#10469)                     2024-11-25 15:13:39 +01:00