mirror of https://github.com/ggerganov/llama.cpp.git
Add build command for CUDA with path example
commit de1bb5a4ad
parent ce8784bdb1
@@ -127,6 +127,12 @@ This provides GPU acceleration using an NVIDIA GPU. Make sure to have the CUDA t
   cmake --build build --config Release
   ```
 
+- Using `CMake` with an explicit path:
+
+  ```bash
+  rm -rf build && /usr/local/bin/cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc
+  /usr/local/bin/cmake --build build --config Release -j
+  ```
 The environment variable [`CUDA_VISIBLE_DEVICES`](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars) can be used to specify which GPU(s) will be used.
 
 The environment variable `GGML_CUDA_ENABLE_UNIFIED_MEMORY=1` can be used to enable unified memory in Linux. This allows swapping to system RAM instead of crashing when the GPU VRAM is exhausted. In Windows this setting is available in the NVIDIA control panel as `System Memory Fallback`.
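Not part of the commit itself, but a quick way to sanity-check the toolkit that the explicit path points at, plus an alternative way to hand CMake the compiler, is sketched below. The paths are reused from the diff; `CUDACXX` is standard CMake behaviour rather than anything this change adds:

```bash
# Confirm which CUDA toolkit the explicit path refers to before configuring.
/usr/local/cuda/bin/nvcc --version

# CMake can also pick up the CUDA compiler from the CUDACXX environment
# variable, equivalent to passing -DCMAKE_CUDA_COMPILER on the command line.
CUDACXX=/usr/local/cuda/bin/nvcc /usr/local/bin/cmake -B build -DGGML_CUDA=ON
/usr/local/bin/cmake --build build --config Release -j
```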
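For the `CUDA_VISIBLE_DEVICES` note in the hunk above, a minimal usage sketch follows; the `llama-cli` binary name, its location under `build/bin/`, and the model path are placeholders rather than anything stated in this diff:

```bash
# Expose only GPU 0 to llama.cpp (binary and model paths are placeholders).
CUDA_VISIBLE_DEVICES=0 ./build/bin/llama-cli -m model.gguf -p "Hello"

# Expose GPUs 0 and 1 to the process.
CUDA_VISIBLE_DEVICES=0,1 ./build/bin/llama-cli -m model.gguf -p "Hello"
```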
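Likewise, the unified-memory fallback described in the last context line is just an environment variable set for the run; a minimal sketch on Linux, with the same placeholder binary and model:

```bash
# Linux: allow spilling into system RAM instead of aborting when VRAM is exhausted.
GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 ./build/bin/llama-cli -m model.gguf -p "Hello"
```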