llama.cpp/ggml/src
agray3 13dca2a54a
Vectorize load instructions in dmmv f16 CUDA kernel (#9816)
* Vectorize load instructions in dmmv f16 CUDA kernel

Replaces scalar load instructions with vector load instructions, which
substantially improves performance on NVIDIA HBM GPUs, e.g. a 1.27x
overall speedup for Meta-Llama-3-8B-Instruct-F16 BS1 inference
evaluation on an H100 SXM 80GB HBM3. On GDDR GPUs the speedup is
marginal (1.01x).
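
As a rough illustration of the technique (a sketch, not the actual
dmmv.cu change; the function names and alignment assumption below are
hypothetical), loading two consecutive f16 values through a single
half2 load halves the number of load instructions issued per thread:

```cuda
#include <cuda_fp16.h>

// Hypothetical sketch: scalar vs. vectorized f16 loads in a dot-product loop.
// Assumes x and y are at least 4-byte aligned so they can be viewed as half2.

__device__ float dot_f16_scalar(const __half * x, const __half * y, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i) {
        // two scalar 16-bit loads per iteration
        sum += __half2float(x[i]) * __half2float(y[i]);
    }
    return sum;
}

__device__ float dot_f16_vectorized(const __half * x, const __half * y, int n) {
    const half2 * x2 = reinterpret_cast<const half2 *>(x);
    const half2 * y2 = reinterpret_cast<const half2 *>(y);
    float sum = 0.0f;
    for (int i = 0; i < n/2; ++i) {
        // one 32-bit vector load yields two f16 values at once
        const float2 xv = __half22float2(x2[i]);
        const float2 yv = __half22float2(y2[i]);
        sum += xv.x * yv.x + xv.y * yv.y;
    }
    return sum;
}
```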

* addressed comment

* Update ggml/src/ggml-cuda/dmmv.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2024-10-14 02:49:08 +02:00
ggml-cann cann: fix crash when llama-bench is running on multiple cann devices (#9627) 2024-09-25 11:30:38 +08:00
ggml-cuda Vectorize load instructions in dmmv f16 CUDA kernel (#9816) 2024-10-14 02:49:08 +02:00
ggml-sycl Fixed dequant precision issues in Q4_1 and Q5_1 (#9711) 2024-10-03 07:50:44 +01:00
kompute@4565194ed7 llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
kompute-shaders ggml : move rope type enum to ggml.h (#8949) 2024-08-13 21:13:15 +02:00
llamafile ggml : move common CPU backend impl to new header (#9509) 2024-09-16 16:22:07 +02:00
vulkan-shaders vulkan : argsort barriers must be under uniform control flow (ggml/951) 2024-09-29 21:15:37 +03:00
CMakeLists.txt musa: add docker image support (#9685) 2024-10-10 20:10:37 +02:00
ggml-aarch64.c ggml : add run-time detection of neon, i8mm and sve (#9331) 2024-09-28 15:06:16 +03:00
ggml-aarch64.h ggml : minor naming changes (#8433) 2024-07-12 10:46:02 +03:00
ggml-alloc.c ggml : move more prints to the ggml log system (#9839) 2024-10-11 15:34:45 +02:00
ggml-backend-impl.h ggml : add backend registry / device interfaces to BLAS backend (#9752) 2024-10-07 21:55:08 +02:00
ggml-backend.cpp ggml : move more prints to the ggml log system (#9839) 2024-10-11 15:34:45 +02:00
ggml-blas.cpp ggml : move more prints to the ggml log system (#9839) 2024-10-11 15:34:45 +02:00
ggml-cann.cpp ggml: unify backend logging mechanism (#9709) 2024-10-03 17:39:03 +02:00
ggml-common.h ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151) 2024-09-05 21:48:47 -04:00
ggml-cpu-impl.h ggml : move common CPU backend impl to new header (#9509) 2024-09-16 16:22:07 +02:00
ggml-cuda.cu ggml : move more prints to the ggml log system (#9839) 2024-10-11 15:34:45 +02:00
ggml-impl.h ggml: unify backend logging mechanism (#9709) 2024-10-03 17:39:03 +02:00
ggml-kompute.cpp ggml-backend : add device and backend reg interfaces (#9707) 2024-10-03 01:49:47 +02:00
ggml-metal.m ggml : add metal backend registry / device (#9713) 2024-10-07 18:27:51 +03:00
ggml-metal.metal metal : use F32 prec for K*Q in vec FA (#9595) 2024-09-23 11:27:47 +03:00
ggml-quants.c ggml : add run-time detection of neon, i8mm and sve (#9331) 2024-09-28 15:06:16 +03:00
ggml-quants.h ggml : add run-time detection of neon, i8mm and sve (#9331) 2024-09-28 15:06:16 +03:00
ggml-rpc.cpp rpc : add backend registry / device interfaces (#9812) 2024-10-10 20:14:55 +02:00
ggml-sycl.cpp ggml-backend : add device and backend reg interfaces (#9707) 2024-10-03 01:49:47 +02:00
ggml-vulkan.cpp ggml : fix BLAS with unsupported types (#9775) 2024-10-08 14:21:43 +02:00
ggml.c ggml : fix BLAS with unsupported types (#9775) 2024-10-08 14:21:43 +02:00