# llama.cpp/examples/sycl

This example provides tools for running llama.cpp with the SYCL backend on Intel GPUs.

## Tool

| Tool Name | Function | Status |
|-|-|-|
| llama-ls-sycl-device | List all SYCL devices with ID, compute capability, max work group size, etc. | Supported |

### llama-ls-sycl-device

Lists all SYCL devices with their ID, compute capability, max work group size, etc.

1. Build llama.cpp for SYCL for all targets.

2. Enable the oneAPI running environment:

```sh
source /opt/intel/oneapi/setvars.sh
```

3. Execute:

```sh
./build/bin/llama-ls-sycl-device
```

Check the device IDs in the startup log, for example:

```
found 4 SYCL devices:
  Device 0: Intel(R) Arc(TM) A770 Graphics,	compute capability 1.3,
    max compute_units 512,	max work group size 1024,	max sub group size 32,	global mem size 16225243136
  Device 1: Intel(R) FPGA Emulation Device,	compute capability 1.2,
    max compute_units 24,	max work group size 67108864,	max sub group size 64,	global mem size 67065057280
  Device 2: 13th Gen Intel(R) Core(TM) i7-13700K,	compute capability 3.0,
    max compute_units 24,	max work group size 8192,	max sub group size 64,	global mem size 67065057280
  Device 3: Intel(R) Arc(TM) A770 Graphics,	compute capability 3.0,
    max compute_units 512,	max work group size 1024,	max sub group size 32,	global mem size 16225243136
```

| Attribute | Note |
|-|-|
| compute capability 1.3 | Level Zero runtime, recommended |
| compute capability 3.0 | OpenCL runtime, slower than Level Zero in most cases |
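For reference, the attributes in the listing above map onto standard SYCL 2020 device-info queries. The following is a minimal standalone sketch, not the actual `ls-sycl-device.cpp`; it enumerates all visible devices and prints the same fields, and it assumes the device `version` string is what the "compute capability" column reports.

```cpp
// ls_devices.cpp — minimal sketch of SYCL device enumeration (not the real tool).
#include <sycl/sycl.hpp>
#include <algorithm>
#include <cstdio>

int main() {
    // All SYCL devices visible to the runtime, across every platform/backend.
    const auto devices = sycl::device::get_devices();
    std::printf("found %zu SYCL devices:\n", devices.size());

    for (size_t i = 0; i < devices.size(); ++i) {
        const sycl::device & dev = devices[i];

        // A device may support several sub-group sizes; report the largest.
        size_t max_sub_group = 0;
        for (size_t s : dev.get_info<sycl::info::device::sub_group_sizes>()) {
            max_sub_group = std::max(max_sub_group, s);
        }

        // Assumption: the listing's "compute capability" is the device version string.
        std::printf("  Device %zu: %s, version %s,\n", i,
                    dev.get_info<sycl::info::device::name>().c_str(),
                    dev.get_info<sycl::info::device::version>().c_str());
        std::printf("    max compute_units %u, max work group size %zu, "
                    "max sub group size %zu, global mem size %llu\n",
                    dev.get_info<sycl::info::device::max_compute_units>(),
                    dev.get_info<sycl::info::device::max_work_group_size>(),
                    max_sub_group,
                    (unsigned long long) dev.get_info<sycl::info::device::global_mem_size>());
    }
    return 0;
}
```

After sourcing `setvars.sh` as above, this can be compiled with the oneAPI compiler, e.g. `icpx -fsycl ls_devices.cpp -o ls_devices`.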