llama.cpp/ggml
Latest commit: bfb4c74981 (wangshuai09) 2024-07-27 16:36:44 +08:00
cann: Fix Multi-NPU execution error (#8710)
* cann: fix multi-npu exec error
* cann: update comment for ggml_backend_cann_supports_buft
Name            Last commit                                               Date
cmake           llama : reorganize source code + improve CMake (#8006)   2024-06-26 18:33:02 +03:00
include         ggml : reduce hash table reset cost (#8698)               2024-07-27 04:41:55 +02:00
src             cann: Fix Multi-NPU execution error (#8710)               2024-07-27 16:36:44 +08:00
.gitignore      vulkan : cmake integration (#8119)                        2024-07-13 18:12:39 +02:00
CMakeLists.txt  cmake : install all ggml public headers (#8480)           2024-07-18 17:47:12 +03:00