llama.cpp/docs
Latest commit: afbb4c1322 by matteo, 2024-08-01 23:28:28 +02:00

ggml-cuda: Adding support for unified memory (#8035)

* Adding support for unified memory
* Re-adding the documentation about unified memory
* Refactoring: moved the unified memory code to the correct location
* Fixed a compilation error when using hipblas
* Cleaning up the documentation
* Updating the documentation
* Adding one more case where the PR should not be enabled

Co-authored-by: matteo serva <matteo.serva@gmail.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
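
The commit above adds an opt-in path that allocates CUDA buffers as unified (managed) memory, which lets allocations spill beyond dedicated VRAM at the cost of on-demand page migration. The sketch below is not the code from PR #8035; it is a minimal illustration of the general technique, with a hypothetical `alloc_buffer` helper and a hard-coded opt-in flag standing in for whatever switch the build documentation describes.

```cuda
// Minimal sketch (not the actual ggml-cuda code): allocate a buffer as CUDA
// unified (managed) memory so it can exceed dedicated VRAM, falling back to
// a plain device allocation if the managed allocation fails.
#include <cstdio>
#include <cuda_runtime.h>

static void *alloc_buffer(size_t size, bool prefer_unified) {
    void *ptr = nullptr;
    if (prefer_unified) {
        // cudaMallocManaged returns memory accessible from both host and device;
        // the driver migrates pages on demand, allowing oversubscription of VRAM.
        cudaError_t err = cudaMallocManaged(&ptr, size);
        if (err == cudaSuccess) {
            return ptr;
        }
        fprintf(stderr, "managed allocation failed (%s), falling back to device memory\n",
                cudaGetErrorString(err));
    }
    // Regular device allocation: fails outright once VRAM is exhausted.
    if (cudaMalloc(&ptr, size) != cudaSuccess) {
        return nullptr;
    }
    return ptr;
}

int main() {
    // Request 1 GiB, preferring unified memory. In practice the preference would
    // come from an opt-in switch (e.g. an environment variable); the exact
    // mechanism is an assumption here, not taken from the PR.
    void *buf = alloc_buffer(1ull << 30, /*prefer_unified=*/true);
    printf("buffer: %p\n", buf);
    if (buf) cudaFree(buf); // cudaFree releases both managed and device allocations
    return 0;
}
```

Whether managed memory actually helps depends on the platform and driver: oversubscription relies on page migration, which can be much slower than keeping data resident in VRAM, which is why the feature is opt-in rather than the default.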
| Name | Last commit | Date |
| --- | --- | --- |
| backend/ | [SYCL] fix multi-gpu issue on sycl (#8554) | 2024-07-25 19:45:18 +08:00 |
| development/ | docs: fix links in development docs [no ci] (#8481) | 2024-07-15 14:46:39 +03:00 |
| android.md | Reorganize documentation pages (#8325) | 2024-07-05 18:08:32 +02:00 |
| build.md | ggml-cuda: Adding support for unified memory (#8035) | 2024-08-01 23:28:28 +02:00 |
| docker.md | Reorganize documentation pages (#8325) | 2024-07-05 18:08:32 +02:00 |
| install.md | Reorganize documentation pages (#8325) | 2024-07-05 18:08:32 +02:00 |