# llama.cpp/ggml/src

Latest commit: 4a8ccb37ad by Johannes Gäßler
CUDA: no -sm row for very small matrices (#10185)
2024-11-14 13:00:15 +01:00
| Name | Latest commit message | Commit date |
|---|---|---|
| ggml-amx | add amx kernel for gemm (#8998) | 2024-10-18 13:34:36 +08:00 |
| ggml-cann | cann: fix crash when llama-bench is running on multiple cann devices (#9627) | 2024-09-25 11:30:38 +08:00 |
| ggml-cuda | ggml: fix zero division in `dne` calculation in CUDA COUNT_EQUAL operator when `ne` is small (#10213) | 2024-11-09 08:35:46 +01:00 |
| ggml-sycl | sycl : Fixes to broken builds and test-backend-ops (#10257) | 2024-11-13 09:40:57 +00:00 |
| kompute@4565194ed7 | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| kompute-shaders | kompute: add mul_mat_q4_k shader (#10097) | 2024-10-31 11:09:52 +02:00 |
| llamafile | ggml : optimize llamafile cpu matrix multiplication for ppc64le (#10156) | 2024-11-09 09:17:50 +02:00 |
| vulkan-shaders | vulkan: Optimize binary ops (#10270) | 2024-11-14 06:22:55 +01:00 |
| CMakeLists.txt | ggml : optimize llamafile cpu matrix multiplication for ppc64le (#10156) | 2024-11-09 09:17:50 +02:00 |
| ggml-aarch64.c | ggml : move CPU backend to a separate file (#10144) | 2024-11-03 19:34:08 +01:00 |
| ggml-aarch64.h | ggml : minor naming changes (#8433) | 2024-07-12 10:46:02 +03:00 |
| ggml-alloc.c | ggml-alloc : remove buffer_id from leaf_alloc (ggml/987) | 2024-10-16 11:28:01 +03:00 |
| ggml-amx.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-backend-impl.h | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-backend.cpp | ggml : move CPU backend to a separate file (#10144) | 2024-11-03 19:34:08 +01:00 |
| ggml-blas.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-cann.cpp | CANN: adjust backend registry refactor. (#10158) | 2024-11-04 19:08:22 +08:00 |
| ggml-common.h | ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151) | 2024-09-05 21:48:47 -04:00 |
| ggml-cpu-impl.h | ggml : move common CPU backend impl to new header (#9509) | 2024-09-16 16:22:07 +02:00 |
| ggml-cpu.c | fix q4_0_8_8 format for corrupted tokens issue (#10198) | 2024-11-07 09:02:08 +01:00 |
| ggml-cuda.cu | CUDA: no -sm row for very small matrices (#10185) | 2024-11-14 13:00:15 +01:00 |
| ggml-impl.h | ggml : move CPU backend to a separate file (#10144) | 2024-11-03 19:34:08 +01:00 |
| ggml-kompute.cpp | kompute: add mul_mat_q4_k shader (#10097) | 2024-10-31 11:09:52 +02:00 |
| ggml-metal.m | metal : fix build and some more comments (#10229) | 2024-11-09 11:53:02 +02:00 |
| ggml-metal.metal | metal : more precise Q*K in FA vec kernel (#10247) | 2024-11-11 08:39:13 +02:00 |
| ggml-quants.c | Q6_K AVX improvements (#10118) | 2024-11-04 23:06:31 +01:00 |
| ggml-quants.h | ggml : add run-time detection of neon, i8mm and sve (#9331) | 2024-09-28 15:06:16 +03:00 |
| ggml-rpc.cpp | ggml : move CPU backend to a separate file (#10144) | 2024-11-03 19:34:08 +01:00 |
| ggml-sycl.cpp | sycl : Fixes to broken builds and test-backend-ops (#10257) | 2024-11-13 09:40:57 +00:00 |
| ggml-vulkan.cpp | vulkan: Optimize binary ops (#10270) | 2024-11-14 06:22:55 +01:00 |
| ggml.c | metal : optimize FA kernels (#10171) | 2024-11-08 13:47:22 +02:00 |