llama.cpp/ggml/src
John Balis fde13b3bb9 feat: cuda implementation for ggml_conv_transpose_1d (ggml/854)
* conv transpose 1d passing test for 1d input and kernel

* working for different input and output channel counts, added test for variable stride

* initial draft appears to work with stride other than 1

* working with all old and new conv1d tests

* added a test for large tensors

* removed hardcoded CUDA use

* restored test-conv-transpose.c

* removed unused arguments, and fixed a bug where a test failure would cause subsequent tests to fail

* fixed accumulator bug

* added test to test-backend-ops

* fixed mistake

* addressed review

* fixed includes

* removed blank lines

* style and warning fixes

* return failure when test fails

* fix supports_op

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-07-08 12:23:00 +03:00
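
The operation this commit brings to the CUDA backend, ggml_conv_transpose_1d, scatters each input element as a scaled copy of the kernel into the output at a stride-dependent offset. The following single-channel Python sketch illustrates the math only; it does not reflect ggml's actual API, multi-channel handling, or memory layout:

```python
def conv_transpose_1d(x, w, stride=1):
    """Reference transposed 1D convolution (single channel).

    x: input values, length L
    w: kernel weights, length K
    Output length is (L - 1) * stride + K.
    """
    L, K = len(x), len(w)
    out = [0.0] * ((L - 1) * stride + K)
    for i in range(L):
        for k in range(K):
            # each input element contributes w[k] * x[i]
            # at output position i * stride + k
            out[i * stride + k] += x[i] * w[k]
    return out
```

Because several input positions can write to the same output position, the accumulation (`+=`) is the delicate part when parallelized on a GPU, which is presumably what the "fixed accumulator bug" bullet above refers to.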
| Name | Latest commit | Date |
|---|---|---|
| ggml-cuda | feat: cuda implementation for ggml_conv_transpose_1d (ggml/854) | 2024-07-08 12:23:00 +03:00 |
| ggml-sycl | Enabled more data types for oneMKL gemm_batch (#8236) | 2024-07-05 13:23:25 +01:00 |
| kompute@4565194ed7 | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| kompute-shaders | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| vulkan-shaders | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| CMakeLists.txt | cmake : add GGML_BUILD and GGML_SHARED macro definitions (#8281) | 2024-07-05 17:29:35 +03:00 |
| ggml-alloc.c | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-backend-impl.h | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-backend.c | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-blas.cpp | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-common.h | CUDA: refactor and optimize IQ MMVQ (#8215) | 2024-07-01 20:39:06 +02:00 |
| ggml-cuda.cu | feat: cuda implementation for ggml_conv_transpose_1d (ggml/854) | 2024-07-08 12:23:00 +03:00 |
| ggml-impl.h | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-kompute.cpp | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-metal.m | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-metal.metal | Removes multiple newlines at the end of files that is breaking the editorconfig step of CI. (#8258) | 2024-07-02 12:18:10 -04:00 |
| ggml-quants.c | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-quants.h | Removes multiple newlines at the end of files that is breaking the editorconfig step of CI. (#8258) | 2024-07-02 12:18:10 -04:00 |
| ggml-rpc.cpp | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-sycl.cpp | Enabled more data types for oneMKL gemm_batch (#8236) | 2024-07-05 13:23:25 +01:00 |
| ggml-vulkan-shaders.hpp | Removes multiple newlines at the end of files that is breaking the editorconfig step of CI. (#8258) | 2024-07-02 12:18:10 -04:00 |
| ggml-vulkan.cpp | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml.c | fix typo (#8267) | 2024-07-03 14:40:16 +02:00 |
| sgemm.cpp | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| sgemm.h | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |