llama.cpp/ggml/include
PAB c2082d93a8
ggml : add GGML_PAD_REFLECT_1D operation (ggml/1034)
* ggml_pad_reflect_1d defined in header
* implemented on CPU
* called in the forward pass
* implemented Metal kernel
* added Metal kernel
* added OP_PAD_REFLECT_1D in test-backend-ops.cpp
* added test-pad-reflect-1d test case
* test case supports multiple backends
2024-12-05 13:27:31 +02:00
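For context, the operation added by this commit pads the last dimension of a tensor by reflection, i.e. mirroring interior elements around the edges rather than repeating them. Assuming the standard reflect-padding convention (as in ReflectionPad1d-style ops), [a, b, c, d] padded by one element on each side becomes [b, a, b, c, d, c]. The sketch below illustrates that semantics on a plain float array; the function name and loop structure are illustrative only and are not the actual ggml CPU or Metal kernel.

```c
#include <stdio.h>

// Illustrative sketch of 1D reflect padding (not the ggml kernel itself):
// pad with p0 elements on the left and p1 on the right by mirroring
// around the first/last element. Valid for p0, p1 <= n - 1.
static void pad_reflect_1d(const float * src, int n, int p0, int p1, float * dst) {
    for (int i = 0; i < p0 + n + p1; i++) {
        int j = i - p0;                     // index into the source
        if (j < 0)     j = -j;              // reflect off the left edge
        if (j > n - 1) j = 2*(n - 1) - j;   // reflect off the right edge
        dst[i] = src[j];
    }
}

int main(void) {
    const float src[4] = {1, 2, 3, 4};
    float dst[6];
    pad_reflect_1d(src, 4, 1, 1, dst);
    for (int i = 0; i < 6; i++) {
        printf("%g ", dst[i]);              // prints: 2 1 2 3 4 3
    }
    printf("\n");
    return 0;
}
```

Per the commit message above, ggml exposes this through a ggml_pad_reflect_1d builder declared in ggml.h, with the CPU and Metal backends providing the kernels and test-backend-ops.cpp covering OP_PAD_REFLECT_1D.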
ggml-alloc.h ggml : fix typo in example usage ggml_gallocr_new (ggml/984) 2024-10-04 18:50:05 +03:00
ggml-backend.h ggml : add support for dynamic loading of backends (#10469) 2024-11-25 15:13:39 +01:00
ggml-blas.h ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-cann.h ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-cpp.h llama : use smart pointers for ggml resources (#10117) 2024-11-01 23:48:26 +01:00
ggml-cpu.h ggml-cpu: support IQ4_NL_4_4 by runtime repack (#10541) 2024-11-28 13:52:03 +01:00
ggml-cuda.h ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-kompute.h ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-metal.h ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-opt.h ggml: new optimization interface (ggml/988) 2024-11-17 08:30:29 +02:00
ggml-rpc.h ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-sycl.h ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-vulkan.h ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml.h ggml : add GGML_PAD_REFLECT_1D operation (ggml/1034) 2024-12-05 13:27:31 +02:00