llama.cpp/ggml/src
Latest commit: 61408e7fad by Sergio López
kompute: add backend registry / device interfaces (#10045)

Bring the Kompute backend in line with the other backends by supporting the
newer backend/device registry interfaces.

Signed-off-by: Sergio Lopez <slp@redhat.com>
2024-10-30 17:01:52 +01:00
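For reference, below is a minimal sketch of how client code can discover and initialize devices through the registry interfaces this commit adopts. It assumes the ggml-backend registry API (`ggml_backend_dev_count`, `ggml_backend_dev_get`, `ggml_backend_dev_init`, etc.) as declared in ggml-backend.h around this revision; exact names and signatures may differ in other checkouts.

```c
// Minimal sketch: enumerate registered devices and initialize a backend.
// Assumes the ggml-backend registry API circa this commit (ggml-backend.h).
#include <stdio.h>
#include "ggml-backend.h"

int main(void) {
    // Walk every device known to the registry (CPU, CUDA, Kompute, ...).
    for (size_t i = 0; i < ggml_backend_dev_count(); i++) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(i);
        printf("device %zu: %s (%s)\n", i,
               ggml_backend_dev_name(dev),
               ggml_backend_dev_description(dev));
    }

    // Initialize a backend instance from the first device, if any.
    if (ggml_backend_dev_count() > 0) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(0);
        ggml_backend_t backend = ggml_backend_dev_init(dev, /*params =*/ NULL);
        if (backend != NULL) {
            // ... build and compute graphs with this backend ...
            ggml_backend_free(backend);
        }
    }
    return 0;
}
```

Backends that register these interfaces, as ggml-kompute.cpp does after this change, appear automatically in this enumeration, which is what lets the refactored model loader (#10026) select devices without backend-specific code.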
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| ggml-amx/ | add amx kernel for gemm (#8998) | 2024-10-18 13:34:36 +08:00 |
| ggml-cann/ | cann: fix crash when llama-bench is running on multiple cann devices (#9627) | 2024-09-25 11:30:38 +08:00 |
| ggml-cuda/ | increase cuda_cpy block size (ggml/996) | 2024-10-26 10:33:56 +03:00 |
| ggml-sycl/ | fix mul_mat_vec_q and *_vec_q error (#9939) | 2024-10-21 14:26:09 +08:00 |
| kompute @ 4565194ed7 (submodule) | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| kompute-shaders/ | ggml : move rope type enum to ggml.h (#8949) | 2024-08-13 21:13:15 +02:00 |
| llamafile/ | llamafile : extend sgemm.cpp support for Q5_0 models (#10010) | 2024-10-25 10:27:41 +03:00 |
| vulkan-shaders/ | ggml: Add POOL2D OP for GPU acceleration to the Vulkan backend in the MobileVLM model. (#9763) | 2024-10-29 09:52:56 +01:00 |
| CMakeLists.txt | add amx kernel for gemm (#8998) | 2024-10-18 13:34:36 +08:00 |
| ggml-aarch64.c | ggml : add Q4_0_8_8 RISC-V GEMV and GEMM kernels (#10029) | 2024-10-30 09:00:40 +02:00 |
| ggml-aarch64.h | ggml : minor naming changes (#8433) | 2024-07-12 10:46:02 +03:00 |
| ggml-alloc.c | ggml-alloc : remove buffer_id from leaf_alloc (ggml/987) | 2024-10-16 11:28:01 +03:00 |
| ggml-amx.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-backend-impl.h | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-backend.cpp | kompute: add backend registry / device interfaces (#10045) | 2024-10-30 17:01:52 +01:00 |
| ggml-blas.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-cann.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-common.h | ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151) | 2024-09-05 21:48:47 -04:00 |
| ggml-cpu-impl.h | ggml : move common CPU backend impl to new header (#9509) | 2024-09-16 16:22:07 +02:00 |
| ggml-cuda.cu | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-impl.h | fix: use vm_allocate to allocate CPU backend buffer on macOS (#9875) | 2024-10-17 00:36:51 +02:00 |
| ggml-kompute.cpp | kompute: add backend registry / device interfaces (#10045) | 2024-10-30 17:01:52 +01:00 |
| ggml-metal.m | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-metal.metal | metal : support permuted matrix multiplications (#10033) | 2024-10-25 22:26:15 +03:00 |
| ggml-quants.c | ggml : add run-time detection of neon, i8mm and sve (#9331) | 2024-09-28 15:06:16 +03:00 |
| ggml-quants.h | ggml : add run-time detection of neon, i8mm and sve (#9331) | 2024-09-28 15:06:16 +03:00 |
| ggml-rpc.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-sycl.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-vulkan.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml.c | ggml : fix memory leaks when loading invalid gguf files (#10094) | 2024-10-30 14:51:21 +01:00 |