Name | Last commit | Last commit date
nix | nix: cuda: rely on propagatedBuildInputs (#8772) | 2024-07-30 13:35:30 -07:00
cloud-v-pipeline | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
full-cuda.Dockerfile | build : Fix docker build warnings (#8535) (#8537) | 2024-07-17 20:21:55 +02:00
full-rocm.Dockerfile | build : Fix docker build warnings (#8535) (#8537) | 2024-07-17 20:21:55 +02:00
full.Dockerfile | build : Fix docker build warnings (#8535) (#8537) | 2024-07-17 20:21:55 +02:00
llama-cli-cuda.Dockerfile | build : Fix docker build warnings (#8535) (#8537) | 2024-07-17 20:21:55 +02:00
llama-cli-intel.Dockerfile | Build Llama SYCL Intel with static libs (#8668) | 2024-07-24 14:36:00 +01:00
llama-cli-rocm.Dockerfile | build : Fix docker build warnings (#8535) (#8537) | 2024-07-17 20:21:55 +02:00
llama-cli-vulkan.Dockerfile | build : Fix docker build warnings (#8535) (#8537) | 2024-07-17 20:21:55 +02:00
llama-cli.Dockerfile | build : Fix docker build warnings (#8535) (#8537) | 2024-07-17 20:21:55 +02:00
llama-cpp-cuda.srpm.spec | devops : remove clblast + LLAMA_CUDA -> GGML_CUDA (#8139) | 2024-06-26 19:32:07 +03:00
llama-cpp.srpm.spec | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
llama-server-cuda.Dockerfile | build : Fix docker build warnings (#8535) (#8537) | 2024-07-17 20:21:55 +02:00
llama-server-intel.Dockerfile | Build Llama SYCL Intel with static libs (#8668) | 2024-07-24 14:36:00 +01:00
llama-server-rocm.Dockerfile | build : Fix docker build warnings (#8535) (#8537) | 2024-07-17 20:21:55 +02:00
llama-server-vulkan.Dockerfile | build : Fix docker build warnings (#8535) (#8537) | 2024-07-17 20:21:55 +02:00
llama-server.Dockerfile | Install curl in runtime layer (#8693) | 2024-08-04 20:17:16 +02:00
tools.sh | examples : remove finetune and train-text-from-scratch (#8669) | 2024-07-25 10:39:04 +02:00