llama.cpp/ggml
matteo afbb4c1322
ggml-cuda: Adding support for unified memory (#8035)
* Added support for unified memory

* Re-added the documentation about unified memory

* refactoring: Moved the unified memory code to the correct location.

* Fixed compilation error when using hipblas

* Cleaned up the documentation

* Updated the documentation

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Added one more case where the feature should not be enabled

---------

Co-authored-by: matteo serva <matteo.serva@gmail.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2024-08-01 23:28:28 +02:00
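
For context, a minimal sketch (in plain CUDA C) of the kind of allocation switch this change describes: when the documented GGML_CUDA_ENABLE_UNIFIED_MEMORY environment variable is set, buffers come from cudaMallocManaged instead of cudaMalloc, letting the driver page them between host and device. The helper name ggml_cuda_alloc and the GGML_USE_HIPBLAS guard are illustrative assumptions inferred from the commit bullets, not the exact code from the PR.

```c
// Sketch only: opt into unified (managed) memory via an environment variable.
// ggml_cuda_alloc is a hypothetical helper name; the GGML_USE_HIPBLAS guard
// mirrors the "Fixed compilation error when using hipblas" bullet above.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

static cudaError_t ggml_cuda_alloc(void ** ptr, size_t size) {
#if !defined(GGML_USE_HIPBLAS)
    // Managed memory can be paged between host and device, so allocations
    // larger than free VRAM can still succeed (at the cost of paging).
    if (getenv("GGML_CUDA_ENABLE_UNIFIED_MEMORY") != NULL) {
        return cudaMallocManaged(ptr, size, cudaMemAttachGlobal);
    }
#endif
    // Default path: plain device memory, which fails once VRAM is exhausted.
    return cudaMalloc(ptr, size);
}

int main(void) {
    void * buf = NULL;
    if (ggml_cuda_alloc(&buf, 64 * 1024 * 1024) != cudaSuccess) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    cudaFree(buf);
    return 0;
}
```

Run with GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 to take the managed path; unset, it behaves like an ordinary cudaMalloc. The trade-off is that oversubscribed allocations succeed but may page, exchanging out-of-memory failures for slower access.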
Name            Last commit                                             Date
cmake           llama : reorganize source code + improve CMake (#8006)  2024-06-26 18:33:02 +03:00
include         feat: Support Moore Threads GPU (#8383)                 2024-07-28 01:41:25 +02:00
src             ggml-cuda: Adding support for unified memory (#8035)    2024-08-01 23:28:28 +02:00
.gitignore      vulkan : cmake integration (#8119)                      2024-07-13 18:12:39 +02:00
CMakeLists.txt  cann: update cmake (#8765)                              2024-07-30 12:37:35 +02:00