llama.cpp/ggml/include
File            Last commit                                                                        Date
ggml-alloc.h    Threadpool: take 2 (#8672)                                                         2024-08-30 01:20:53 +02:00
ggml-backend.h  add device props/caps, fully support async upload for all compatible backends      2024-10-02 01:32:35 +02:00
ggml-blas.h     ggml-backend : add device and backend reg interfaces                               2024-10-01 17:24:28 +02:00
ggml-cann.h     ggml-backend : add device and backend reg interfaces                               2024-10-01 17:24:28 +02:00
ggml-cuda.h     ggml-backend : add device and backend reg interfaces                               2024-10-01 17:24:28 +02:00
ggml-kompute.h  llama : reorganize source code + improve CMake (#8006)                             2024-06-26 18:33:02 +03:00
ggml-metal.h    ggml-backend : add device and backend reg interfaces                               2024-10-01 17:24:28 +02:00
ggml-rpc.h      ggml-backend : add device and backend reg interfaces                               2024-10-01 17:24:28 +02:00
ggml-sycl.h     ggml-backend : add device and backend reg interfaces                               2024-10-01 17:24:28 +02:00
ggml-vulkan.h   ggml-backend : add device and backend reg interfaces                               2024-10-01 17:24:28 +02:00
ggml.h          ggml-backend : add device and backend reg interfaces                               2024-10-01 17:24:28 +02:00