llama.cpp/ggml/include
Latest commit: Diego Devesa 9f40989351
ggml : move CPU backend to a separate file (#10144)
2024-11-03 19:34:08 +01:00
| File | Last commit | Date |
| --- | --- | --- |
| ggml-alloc.h | ggml : fix typo in example usage ggml_gallocr_new (ggml/984) | 2024-10-04 18:50:05 +03:00 |
| ggml-amx.h | add amx kernel for gemm (#8998) | 2024-10-18 13:34:36 +08:00 |
| ggml-backend.h | ggml : move CPU backend to a separate file (#10144) | 2024-11-03 19:34:08 +01:00 |
| ggml-blas.h | ggml : add backend registry / device interfaces to BLAS backend (#9752) | 2024-10-07 21:55:08 +02:00 |
| ggml-cann.h | [CANN] Adapt to dynamically loadable backends mechanism (#9970) | 2024-10-22 16:16:01 +08:00 |
| ggml-cpp.h | llama : use smart pointers for ggml resources (#10117) | 2024-11-01 23:48:26 +01:00 |
| ggml-cpu.h | ggml : move CPU backend to a separate file (#10144) | 2024-11-03 19:34:08 +01:00 |
| ggml-cuda.h | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-kompute.h | kompute: add backend registry / device interfaces (#10045) | 2024-10-30 17:01:52 +01:00 |
| ggml-metal.h | ggml : add metal backend registry / device (#9713) | 2024-10-07 18:27:51 +03:00 |
| ggml-rpc.h | rpc : add backend registry / device interfaces (#9812) | 2024-10-10 20:14:55 +02:00 |
| ggml-sycl.h | [SYCL] Add SYCL Backend registry, device and Event Interfaces (#9705) | 2024-10-18 06:46:16 +01:00 |
| ggml-vulkan.h | vulkan : add backend registry / device interfaces (#9721) | 2024-10-17 02:46:58 +02:00 |
| ggml.h | ggml : move CPU backend to a separate file (#10144) | 2024-11-03 19:34:08 +01:00 |