llama.cpp/ggml/include
File             Last commit message                                            Date
ggml-alloc.h     Threadpool: take 2 (#8672)                                     2024-08-30 01:20:53 +02:00
ggml-backend.h   ggml: refactor cross entropy loss CPU impl. (ggml/976)         2024-10-03 21:17:26 +03:00
ggml-blas.h      ggml-backend : add device and backend reg interfaces (#9707)   2024-10-03 01:49:47 +02:00
ggml-cann.h      ggml: unify backend logging mechanism (#9709)                  2024-10-03 17:39:03 +02:00
ggml-cuda.h      ggml: unify backend logging mechanism (#9709)                  2024-10-03 17:39:03 +02:00
ggml-kompute.h   llama : reorganize source code + improve CMake (#8006)         2024-06-26 18:33:02 +03:00
ggml-metal.h     ggml: unify backend logging mechanism (#9709)                  2024-10-03 17:39:03 +02:00
ggml-rpc.h       ggml-backend : add device and backend reg interfaces (#9707)   2024-10-03 01:49:47 +02:00
ggml-sycl.h      ggml-backend : add device and backend reg interfaces (#9707)   2024-10-03 01:49:47 +02:00
ggml-vulkan.h    ggml-backend : add device and backend reg interfaces (#9707)   2024-10-03 01:49:47 +02:00
ggml.h           ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980)   2024-10-03 21:17:26 +03:00