# MIT license
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: MIT

mkdir -p build
cd build
source /opt/intel/oneapi/setvars.sh
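# Note: setvars.sh loads the oneAPI environment (the icx/icpx compilers and
# the SYCL runtime) into the current shell; the cmake calls below rely on it.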

# for FP16
#cmake .. -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DLLAMA_SYCL_F16=ON # faster for long-prompt inference
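# To build with FP16 kernels instead, uncomment the configure line above and
# comment out the FP32 configure line below.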

# for FP32 (the default)
cmake .. -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx

# build example/main only
#cmake --build . --config Release --target main

# build example/llama-bench only
#cmake --build . --config Release --target llama-bench
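# Building a single target is quicker when iterating on one example; the
# default below builds every binary in the project.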

# build all binaries
cmake --build . --config Release -j -v
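
# A minimal usage sketch once the build completes (assumptions: a quantized
# GGUF model at models/llama-2-7b.Q4_0.gguf and GPU 0 as the target device;
# adjust the model path and device id for your setup):
#./bin/ls-sycl-device    # list the SYCL devices visible to the runtime
#GGML_SYCL_DEVICE=0 ./bin/main -m models/llama-2-7b.Q4_0.gguf -p "Hello" -n 32 -e -ngl 33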