llama.cpp/ggml/src
Dan Johansson 6a0f779484
ggml : add run-time detection of neon, i8mm and sve (#9331)
* ggml: Added run-time detection of neon, i8mm and sve

Adds run-time detection of the Arm instruction set features
neon, i8mm and sve for Linux and Apple build targets.

* ggml: Extend feature detection to include non-aarch64 Arm architectures

* ggml: Move definition of ggml_arm_arch_features to the global data section
2024-09-28 15:06:16 +03:00
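
For reference, a minimal sketch of how such run-time detection can be implemented (this is not the exact ggml code; the hwcap bits and the Apple sysctl key names are assumptions based on the respective platform APIs):

    /* Sketch: run-time detection of neon, i8mm and sve on Arm (Linux / Apple). */
    #include <stdbool.h>
    #include <stdio.h>

    struct arm_arch_features { bool neon; bool i8mm; bool sve; };

    #if defined(__linux__) && defined(__aarch64__)
    #include <sys/auxv.h>
    #include <asm/hwcap.h>

    static struct arm_arch_features detect_arm_features(void) {
        // On Linux the kernel exposes CPU feature bits in the auxiliary vector.
        unsigned long hwcap  = getauxval(AT_HWCAP);
        unsigned long hwcap2 = getauxval(AT_HWCAP2);
        struct arm_arch_features f = {0};
        f.neon = (hwcap & HWCAP_ASIMD) != 0;
    #if defined(HWCAP_SVE)
        f.sve  = (hwcap & HWCAP_SVE) != 0;
    #endif
    #if defined(HWCAP2_I8MM)
        f.i8mm = (hwcap2 & HWCAP2_I8MM) != 0;
    #endif
        return f;
    }
    #elif defined(__APPLE__)
    #include <sys/sysctl.h>

    static bool sysctl_flag(const char * name) {
        // Returns false if the key is unknown, i.e. the feature is absent.
        int val = 0;
        size_t size = sizeof(val);
        if (sysctlbyname(name, &val, &size, NULL, 0) != 0) return false;
        return val != 0;
    }

    static struct arm_arch_features detect_arm_features(void) {
        struct arm_arch_features f = {0};
        f.neon = sysctl_flag("hw.optional.AdvSIMD");        // assumed sysctl key
        f.i8mm = sysctl_flag("hw.optional.arm.FEAT_I8MM");  // assumed sysctl key
        f.sve  = false;                                      // no SVE on Apple silicon
        return f;
    }
    #else
    static struct arm_arch_features detect_arm_features(void) {
        struct arm_arch_features f = {0};
        return f;
    }
    #endif

    int main(void) {
        struct arm_arch_features f = detect_arm_features();
        printf("neon=%d i8mm=%d sve=%d\n", f.neon, f.i8mm, f.sve);
        return 0;
    }

On Linux the feature bits come from getauxval(); on Apple targets each feature is queried with sysctlbyname(), and unknown keys simply report the feature as unavailable.
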
ggml-cann cann: fix crash when llama-bench is running on multiple cann devices (#9627) 2024-09-25 11:30:38 +08:00
ggml-cuda cuda: add q8_0->f32 cpy operation (#9571) 2024-09-24 02:14:24 +02:00
ggml-sycl Revert "[SYCL] fallback mmvq (#9088)" (#9579) 2024-09-23 11:28:06 +08:00
kompute@4565194ed7 llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
kompute-shaders ggml : move rope type enum to ggml.h (#8949) 2024-08-13 21:13:15 +02:00
llamafile ggml : move common CPU backend impl to new header (#9509) 2024-09-16 16:22:07 +02:00
vulkan-shaders Improve Vulkan shader build system (#9239) 2024-09-06 08:56:17 +02:00
CMakeLists.txt ggml : add AVX512DQ requirement for AVX512 builds (#9622) 2024-09-24 11:03:21 +03:00
ggml-aarch64.c ggml : add run-time detection of neon, i8mm and sve (#9331) 2024-09-28 15:06:16 +03:00
ggml-aarch64.h ggml : minor naming changes (#8433) 2024-07-12 10:46:02 +03:00
ggml-alloc.c ggml-alloc : fix list of allocated tensors with GGML_ALLOCATOR_DEBUG (#9573) 2024-09-21 14:24:23 +02:00
ggml-backend-impl.h ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
ggml-backend.c ggml : fix trailing whitespace (#0) 2024-09-20 21:15:05 +03:00
ggml-blas.cpp ggml : hide ggml_object, ggml_cgraph, ggml_hash_set (#9408) 2024-09-12 14:23:49 +03:00
ggml-cann.cpp ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
ggml-common.h ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151) 2024-09-05 21:48:47 -04:00
ggml-cpu-impl.h ggml : move common CPU backend impl to new header (#9509) 2024-09-16 16:22:07 +02:00
ggml-cuda.cu mtgpu: enable VMM (#9597) 2024-09-26 03:27:40 +02:00
ggml-impl.h ggml : move common CPU backend impl to new header (#9509) 2024-09-16 16:22:07 +02:00
ggml-kompute.cpp ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
ggml-metal.m ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
ggml-metal.metal metal : use F32 prec for K*Q in vec FA (#9595) 2024-09-23 11:27:47 +03:00
ggml-quants.c ggml : add run-time detection of neon, i8mm and sve (#9331) 2024-09-28 15:06:16 +03:00
ggml-quants.h ggml : add run-time detection of neon, i8mm and sve (#9331) 2024-09-28 15:06:16 +03:00
ggml-rpc.cpp ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
ggml-sycl.cpp Revert "[SYCL] fallback mmvq (#9088)" (#9579) 2024-09-23 11:28:06 +08:00
ggml-vulkan.cpp Enable use of the rebar feature to upload buffers to the device. (#9251) 2024-09-28 12:05:05 +02:00
ggml.c ggml : add run-time detection of neon, i8mm and sve (#9331) 2024-09-28 15:06:16 +03:00