Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-12-25 02:44:36 +00:00

Commit b8a7a5a90f
* readme: cmake . -B build && cmake --build build
* build: fix typo

  Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
* build: drop implicit . from cmake config command
* build: remove another superfluous .
* build: update MinGW cmake commands
* Update README-sycl.md

  Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
* build: reinstate --config Release as not the default w/ some generators + document how to build Debug
* build: revert more --config Release
* build: nit / remove -H from cmake example
* build: reword debug instructions around single/multi config split

---------

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Co-authored-by: Neo Zhang Jianyu <jianyu.zhang@intel.com>
Files at this commit:

- nix
- cloud-v-pipeline
- full-cuda.Dockerfile
- full-rocm.Dockerfile
- full.Dockerfile
- llama-cpp-clblast.srpm.spec
- llama-cpp-cuda.srpm.spec
- llama-cpp.srpm.spec
- main-cuda.Dockerfile
- main-intel.Dockerfile
- main-rocm.Dockerfile
- main-vulkan.Dockerfile
- main.Dockerfile
- server-cuda.Dockerfile
- server-intel.Dockerfile
- server-rocm.Dockerfile
- server-vulkan.Dockerfile
- server.Dockerfile
- tools.sh
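The commit message above revises the documented CMake invocation (dropping the implicit `.` source argument and reinstating `--config Release` for multi-config generators). A minimal sketch of the resulting build flow, assuming a local checkout of llama.cpp:

```shell
# Configure into a build/ directory; the explicit "." source argument
# was dropped from the documented command (CMake infers the source dir).
cmake -B build

# Build. --config Release only matters for multi-config generators
# (e.g. Visual Studio); single-config generators such as Makefiles
# ignore it and use CMAKE_BUILD_TYPE set at configure time instead.
cmake --build build --config Release
```

Per the commit's single/multi-config split: with a single-config generator, a Debug build is selected at configure time (`cmake -B build -DCMAKE_BUILD_TYPE=Debug`), whereas a multi-config generator selects it at build time (`cmake --build build --config Debug`).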