Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-11-11 21:39:52 +00:00
01684139c3
* support SYCL backend windows build
* add windows build in CI
* add for win build CI
* correct install oneMKL
* fix install issue
* fix ci
* fix install cmd (several iterations)
* fix win build (several iterations)
* restore other CI part
* restore as base
* rm no new line
* fix no new line issue, add -j
* fix grammer issue
* allow to trigger manually, fix format issue
* fix format (several iterations)
* add newline
* fix format issuse
---------
Co-authored-by: Abhilash Majumder <30946547+abhilash1910@users.noreply.github.com>
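For context, a rough sketch of the kind of Windows SYCL build this change enables, assuming the oneAPI toolkit at its default install path, the icx compiler, and the LLAMA_SYCL CMake option; the exact options and paths are illustrative assumptions, not taken verbatim from the commit or its CI scripts:

:: Sketch only: configure and build llama.cpp with the SYCL backend on Windows.
:: The oneAPI install path below is the default location (an assumption, adjust as needed).
@call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64 --force

mkdir build
cd build
:: LLAMA_SYCL and the icx compiler selection are the assumed configuration knobs here.
cmake .. -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icx -DCMAKE_BUILD_TYPE=Release
cmake --build . --config Release -j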
14 lines
372 B
Batchfile
:: MIT license
:: Copyright (C) 2024 Intel Corporation
:: SPDX-License-Identifier: MIT

set INPUT2="Building a website can be done in 10 simple steps:\nStep 1:"

@call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64 --force

set GGML_SYCL_DEVICE=0
rem set GGML_SYCL_DEBUG=1

.\build\bin\main.exe -m models\llama-2-7b.Q4_0.gguf -p %INPUT2% -n 400 -e -ngl 33 -s 0
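As a usage note, a minimal variation of the script above that enables the SYCL debug output left commented out in the original and targets a different device; the device index 1 is a hypothetical value for illustration, and the model path and prompt are unchanged:

@call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64 --force

rem Select a different SYCL device (index 1 is an assumption; check your device list first)
set GGML_SYCL_DEVICE=1
rem Enable the debug output that the original script keeps disabled
set GGML_SYCL_DEBUG=1

set INPUT2="Building a website can be done in 10 simple steps:\nStep 1:"
.\build\bin\main.exe -m models\llama-2-7b.Q4_0.gguf -p %INPUT2% -n 400 -e -ngl 33 -s 0

In both invocations, -n 400 sets the number of tokens to generate, -e processes escape sequences such as \n in the prompt, -ngl 33 offloads 33 layers to the GPU, and -s 0 fixes the random seed.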