| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| ggml-amx | add amx kernel for gemm (#8998) | 2024-10-18 13:34:36 +08:00 |
| ggml-cann | cann: fix crash when llama-bench is running on multiple cann devices (#9627) | 2024-09-25 11:30:38 +08:00 |
| ggml-cuda | increase cuda_cpy block size (ggml/996) | 2024-10-26 10:33:56 +03:00 |
| ggml-sycl | fix mul_mat_vec_q and *_vec_q error (#9939) | 2024-10-21 14:26:09 +08:00 |
| kompute@4565194ed7 | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| kompute-shaders | ggml : move rope type enum to ggml.h (#8949) | 2024-08-13 21:13:15 +02:00 |
| llamafile | llamafile : extend sgemm.cpp support for Q5_0 models (#10010) | 2024-10-25 10:27:41 +03:00 |
| vulkan-shaders | ggml: Add POOL2D OP for GPU acceleration to the Vulkan backend in the MobileVLM model. (#9763) | 2024-10-29 09:52:56 +01:00 |
| CMakeLists.txt | add amx kernel for gemm (#8998) | 2024-10-18 13:34:36 +08:00 |
| ggml-aarch64.c | ggml : add Q4_0_8_8 RISC-V GEMV and GEMM kernels (#10029) | 2024-10-30 09:00:40 +02:00 |
| ggml-aarch64.h | ggml : minor naming changes (#8433) | 2024-07-12 10:46:02 +03:00 |
| ggml-alloc.c | ggml-alloc : remove buffer_id from leaf_alloc (ggml/987) | 2024-10-16 11:28:01 +03:00 |
| ggml-amx.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-backend-impl.h | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-backend.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-blas.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-cann.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-common.h | ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151) | 2024-09-05 21:48:47 -04:00 |
| ggml-cpu-impl.h | ggml : move common CPU backend impl to new header (#9509) | 2024-09-16 16:22:07 +02:00 |
| ggml-cuda.cu | llama : enable flash attn automatically when supported | 2024-10-30 23:30:06 +01:00 |
| ggml-impl.h | fix: use vm_allocate to allocate CPU backend buffer on macOS (#9875) | 2024-10-17 00:36:51 +02:00 |
| ggml-kompute.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-metal.m | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-metal.metal | metal : support permuted matrix multiplicaions (#10033) | 2024-10-25 22:26:15 +03:00 |
| ggml-quants.c | ggml : add run-time detection of neon, i8mm and sve (#9331) | 2024-09-28 15:06:16 +03:00 |
| ggml-quants.h | ggml : add run-time detection of neon, i8mm and sve (#9331) | 2024-09-28 15:06:16 +03:00 |
| ggml-rpc.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-sycl.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-vulkan.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml.c | ggml : fix memory leaks when loading invalid gguf files (#10094) | 2024-10-30 14:51:21 +01:00 |