llama.cpp/ggml/include
Latest commit dca1d4b58a by Diego Devesa (2024-10-08 14:21:43 +02:00):
ggml : fix BLAS with unsupported types (#9775)
* ggml : do not use BLAS with types without to_float

* ggml : return pointer from ggml_internal_get_type_traits to avoid unnecessary copies

* ggml : rename ggml_internal_get_type_traits -> ggml_get_type_traits (it's not really internal if everybody uses it)
| File | Last commit | Date |
|---|---|---|
| ggml-alloc.h | ggml : fix typo in example usage ggml_gallocr_new (ggml/984) | 2024-10-04 18:50:05 +03:00 |
| ggml-backend.h | ggml : add backend registry / device interfaces to BLAS backend (#9752) | 2024-10-07 21:55:08 +02:00 |
| ggml-blas.h | ggml : add backend registry / device interfaces to BLAS backend (#9752) | 2024-10-07 21:55:08 +02:00 |
| ggml-cann.h | ggml: unify backend logging mechanism (#9709) | 2024-10-03 17:39:03 +02:00 |
| ggml-cuda.h | ggml: unify backend logging mechanism (#9709) | 2024-10-03 17:39:03 +02:00 |
| ggml-kompute.h | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-metal.h | ggml : add metal backend registry / device (#9713) | 2024-10-07 18:27:51 +03:00 |
| ggml-rpc.h | ggml-backend : add device and backend reg interfaces (#9707) | 2024-10-03 01:49:47 +02:00 |
| ggml-sycl.h | ggml-backend : add device and backend reg interfaces (#9707) | 2024-10-03 01:49:47 +02:00 |
| ggml-vulkan.h | ggml-backend : add device and backend reg interfaces (#9707) | 2024-10-03 01:49:47 +02:00 |
| ggml.h | ggml : fix BLAS with unsupported types (#9775) | 2024-10-08 14:21:43 +02:00 |