llama.cpp/ggml/src
Changyeon Kim 2f3c1466ff
llava: Add ACC OP for GPU acceleration to the Vulkan backend in the LLAVA CLIP model. (#8984)
* llava: Add ACC OP for GPU acceleration to the Vulkan backend in the LLAVA CLIP model.

- The CLIP model now prioritizes the Vulkan backend over the CPU when Vulkan is available.
- A GGML_OP_ACC shader has been added.
- The encoding performance of the CLIP model improved from 4.2s on the CPU to 0.9s on the GPU.

Signed-off-by: Changyeon Kim <cyzero.kim@samsung.com>

* Fix up coding style.

* Add the missing initial parameter to resolve a compilation warning.

* [fix] Add missing parameters.

* [fix] Use nb1 and nb2 for dst.

* Fix result checks for the ggml_acc call.

---------

Signed-off-by: Changyeon Kim <cyzero.kim@samsung.com>
Co-authored-by: 0cc4m <picard12@live.de>
2024-08-20 21:00:00 +02:00
ggml-cann ggml : move rope type enum to ggml.h (#8949) 2024-08-13 21:13:15 +02:00
ggml-cuda ggml : move rope type enum to ggml.h (#8949) 2024-08-13 21:13:15 +02:00
ggml-sycl [SYCL] fallback mmvq (#9088) 2024-08-20 23:50:17 +08:00
kompute@4565194ed7 llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
kompute-shaders ggml : move rope type enum to ggml.h (#8949) 2024-08-13 21:13:15 +02:00
llamafile ggml : move sgemm sources to llamafile subfolder (#8394) 2024-07-10 15:23:29 +03:00
vulkan-shaders llava: Add ACC OP for GPU acceleration to the Vulkan backend in the LLAVA CLIP model. (#8984) 2024-08-20 21:00:00 +02:00
CMakeLists.txt Vulkan Optimizations and Fixes (#8959) 2024-08-14 18:32:53 +02:00
ggml-aarch64.c ggml : ignore more msvc warnings (ggml/906) 2024-08-08 13:19:31 +03:00
ggml-aarch64.h ggml : minor naming changes (#8433) 2024-07-12 10:46:02 +03:00
ggml-alloc.c ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
ggml-backend-impl.h llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
ggml-backend.c ggml : dynamic ggml_sched_max_splits based on graph_size (#9047) 2024-08-16 04:22:55 +02:00
ggml-blas.cpp ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
ggml-cann.cpp [CANN]: Fix ggml_backend_cann_buffer_get_tensor (#8871) 2024-08-06 12:42:42 +08:00
ggml-common.h feat: Support Moore Threads GPU (#8383) 2024-07-28 01:41:25 +02:00
ggml-cuda.cu ggml-backend : fix async copy from CPU (#8897) 2024-08-07 13:29:02 +02:00
ggml-impl.h ggml : reading the runtime sve config of the cpu (#8709) 2024-08-03 18:34:41 +02:00
ggml-kompute.cpp ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
ggml-metal.m ggml : move rope type enum to ggml.h (#8949) 2024-08-13 21:13:15 +02:00
ggml-metal.metal ggml : fix quant dot product with odd number of blocks (#8549) 2024-07-19 17:17:27 +02:00
ggml-quants.c ggml : reading the runtime sve config of the cpu (#8709) 2024-08-03 18:34:41 +02:00
ggml-quants.h ggml : reading the runtime sve config of the cpu (#8709) 2024-08-03 18:34:41 +02:00
ggml-rpc.cpp rpc : print error message when failed to connect endpoint (#9042) 2024-08-19 10:11:45 +03:00
ggml-sycl.cpp [SYCL] fallback mmvq (#9088) 2024-08-20 23:50:17 +08:00
ggml-vulkan.cpp llava: Add ACC OP for GPU acceleration to the Vulkan backend in the LLAVA CLIP model. (#8984) 2024-08-20 21:00:00 +02:00
ggml.c ggml : move rope type enum to ggml.h (#8949) 2024-08-13 21:13:15 +02:00