mirror of
https://github.com/ggerganov/llama.cpp.git
synced 2024-12-25 10:54:36 +00:00
01684139c3
* support SYCL backend windows build
* add windows build in CI
* add for win build CI
* correct install oneMKL
* fix install issue
* fix ci
* fix install cmd
* fix win build
* restore other CI part
* restore as base
* rm no new line
* fix no new line issue, add -j
* fix grammer issue
* allow to trigger manually, fix format issue
* fix format
* add newline
* fix format issuse

Co-authored-by: Abhilash Majumder <30946547+abhilash1910@users.noreply.github.com>
93 lines
887 B
Plaintext
*.o
*.a
*.so
*.gguf
*.bin
*.exe
*.dll
*.log
*.gcov
*.gcno
*.gcda
*.dot
*.bat
*.metallib
.DS_Store
.build/
.cache/
.ccls-cache/
.direnv/
.envrc
.swiftpm
.venv
.clang-tidy
.vs/
.vscode/

lcov-report/
gcovr-report/

build*
out/
tmp/

models/*
models-mnt

/Pipfile
/baby-llama
/beam-search
/benchmark-matmult
/convert-llama2c-to-ggml
/embd-input-test
/embedding
/gguf
/gguf-llama-simple
/imatrix
/infill
/libllama.so
/llama-bench
/llava-cli
/lookahead
/lookup
/main
/metal
/passkey
/perplexity
/q8dot
/quantize
/quantize-stats
/result
/save-load-state
/server
/simple
/batched
/batched-bench
/export-lora
/finetune
/speculative
/parallel
/train-text-from-scratch
/tokenize
/vdot
/common/build-info.cpp
arm_neon.h
compile_commands.json
CMakeSettings.json

__pycache__
dist

zig-out/
zig-cache/

ppl-*.txt
qnt-*.txt
perf-*.txt

examples/jeopardy/results.txt

poetry.lock
poetry.toml
nppBackup