| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| ggml-amx | add amx kernel for gemm (#8998) | 2024-10-18 13:34:36 +08:00 |
| ggml-cann | cann: fix crash when llama-bench is running on multiple cann devices (#9627) | 2024-09-25 11:30:38 +08:00 |
| ggml-cuda | increase cuda_cpy block size (ggml/996) | 2024-10-26 10:33:56 +03:00 |
| ggml-sycl | fix mul_mat_vec_q and *_vec_q error (#9939) | 2024-10-21 14:26:09 +08:00 |
| kompute @ 4565194ed7 | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| kompute-shaders | kompute: add mul_mat_q4_k shader (#10097) | 2024-10-31 11:09:52 +02:00 |
| llamafile | llamafile : extend sgemm.cpp support for Q5_0 models (#10010) | 2024-10-25 10:27:41 +03:00 |
| vulkan-shaders | ggml: Add POOL2D OP for GPU acceleration to the Vulkan backend in the MobileVLM model. (#9763) | 2024-10-29 09:52:56 +01:00 |
| CMakeLists.txt | cmake : make it possible linking ggml as external lib (ggml/1003) | 2024-11-04 10:33:11 +02:00 |
| ggml-aarch64.c | ggml : move CPU backend to a separate file (#10144) | 2024-11-03 19:34:08 +01:00 |
| ggml-aarch64.h | ggml : minor naming changes (#8433) | 2024-07-12 10:46:02 +03:00 |
| ggml-alloc.c | ggml-alloc : remove buffer_id from leaf_alloc (ggml/987) | 2024-10-16 11:28:01 +03:00 |
| ggml-amx.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-backend-impl.h | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-backend.cpp | ggml : move CPU backend to a separate file (#10144) | 2024-11-03 19:34:08 +01:00 |
| ggml-blas.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-cann.cpp | CANN: adjust backend registry refactor. (#10158) | 2024-11-04 19:08:22 +08:00 |
| ggml-common.h | ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151) | 2024-09-05 21:48:47 -04:00 |
| ggml-cpu-impl.h | ggml : move common CPU backend impl to new header (#9509) | 2024-09-16 16:22:07 +02:00 |
| ggml-cpu.c | ggml : fix gelu tables initialization (#10172) | 2024-11-04 20:06:58 +01:00 |
| ggml-cuda.cu | cuda : clear error after changing peer access (#10153) | 2024-11-04 13:10:23 +01:00 |
| ggml-impl.h | ggml : move CPU backend to a separate file (#10144) | 2024-11-03 19:34:08 +01:00 |
| ggml-kompute.cpp | kompute: add mul_mat_q4_k shader (#10097) | 2024-10-31 11:09:52 +02:00 |
| ggml-metal.m | metal : add quantized FA support (#10149) | 2024-11-06 10:24:23 +02:00 |
| ggml-metal.metal | metal : add quantized FA support (#10149) | 2024-11-06 10:24:23 +02:00 |
| ggml-quants.c | Q6_K AVX improvements (#10118) | 2024-11-04 23:06:31 +01:00 |
| ggml-quants.h | ggml : add run-time detection of neon, i8mm and sve (#9331) | 2024-09-28 15:06:16 +03:00 |
| ggml-rpc.cpp | ggml : move CPU backend to a separate file (#10144) | 2024-11-03 19:34:08 +01:00 |
| ggml-sycl.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-vulkan.cpp | vulkan : improve ggml_vk_create_buffer error handling (#9898) | 2024-11-01 19:33:14 +01:00 |
| ggml.c | ggml : fix arch check in bf16_to_fp32 (#10164) | 2024-11-04 23:17:01 +01:00 |