# llama.cpp/example/sycl
This example program provides the tools for llama.cpp for SYCL on Intel GPUs.
## Tool

| Tool Name | Function | Status |
|---|---|---|
| ls-sycl-device | List all SYCL devices with ID, compute capability, max work group size, etc. | Supported |
### ls-sycl-device

List all SYCL devices with their ID, compute capability, max work group size, etc.
- Build llama.cpp for SYCL for all targets (see the build sketch after these steps).

- Enable the oneAPI running environment:

  ```sh
  source /opt/intel/oneapi/setvars.sh
  ```

- Execute:

  ```sh
  ./build/bin/ls-sycl-device
  ```
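The first step is covered in detail by the top-level SYCL guide and by `build.sh` in this directory. The commands below are only a minimal sketch of that build; the CMake option names (`LLAMA_SYCL`, the icx/icpx compiler selection) are assumptions based on the SYCL guide of this era, so treat `build.sh` as the authoritative reference.

```sh
# Minimal SYCL build sketch; option names are assumptions, see build.sh for the exact commands.
source /opt/intel/oneapi/setvars.sh      # load the oneAPI compilers and runtime
mkdir -p build && cd build
cmake .. -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
cmake --build . --config Release
```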
Check the device IDs in the startup log, for example:

```
found 4 SYCL devices:
  Device 0: Intel(R) Arc(TM) A770 Graphics, compute capability 1.3,
    max compute_units 512, max work group size 1024, max sub group size 32, global mem size 16225243136
  Device 1: Intel(R) FPGA Emulation Device, compute capability 1.2,
    max compute_units 24, max work group size 67108864, max sub group size 64, global mem size 67065057280
  Device 2: 13th Gen Intel(R) Core(TM) i7-13700K, compute capability 3.0,
    max compute_units 24, max work group size 8192, max sub group size 64, global mem size 67065057280
  Device 3: Intel(R) Arc(TM) A770 Graphics, compute capability 3.0,
    max compute_units 512, max work group size 1024, max sub group size 32, global mem size 16225243136
```
| Attribute | Note |
|---|---|
| compute capability 1.3 | Level Zero runtime; recommended |
| compute capability 3.0 | OpenCL runtime; slower than Level Zero in most cases |
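For reference, a listing like the one above can be reproduced with the standard SYCL 2020 device-query API. The sketch below is not the actual `ls-sycl-device.cpp` source; it is a self-contained example of the queries involved, and the file name and output formatting are illustrative.

```cpp
// Build with the oneAPI DPC++ compiler, e.g.: icpx -fsycl list-devices.cpp -o list-devices
#include <sycl/sycl.hpp>

#include <algorithm>
#include <cstdio>

int main() {
    // Enumerate every SYCL device exposed by the installed backends
    // (Level Zero, OpenCL, ...), similar in spirit to ls-sycl-device.
    const auto devices = sycl::device::get_devices();
    std::printf("found %zu SYCL devices:\n", devices.size());

    int id = 0;
    for (const auto & dev : devices) {
        // "compute capability" in the log above corresponds to the device version string.
        std::printf("  Device %d: %s, compute capability %s,\n",
                    id++,
                    dev.get_info<sycl::info::device::name>().c_str(),
                    dev.get_info<sycl::info::device::version>().c_str());

        // Devices report a set of supported sub-group sizes; print the largest.
        const auto sub_group_sizes = dev.get_info<sycl::info::device::sub_group_sizes>();
        const size_t max_sub_group = sub_group_sizes.empty()
            ? 0
            : *std::max_element(sub_group_sizes.begin(), sub_group_sizes.end());

        std::printf("    max compute_units %u, max work group size %zu, "
                    "max sub group size %zu, global mem size %llu\n",
                    dev.get_info<sycl::info::device::max_compute_units>(),
                    dev.get_info<sycl::info::device::max_work_group_size>(),
                    max_sub_group,
                    (unsigned long long) dev.get_info<sycl::info::device::global_mem_size>());
    }
    return 0;
}
```

The device IDs follow the enumeration order of `sycl::device::get_devices()`, which is why the same physical Arc A770 appears twice in the log above: once through the Level Zero backend (compute capability 1.3) and once through OpenCL (compute capability 3.0).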