llama.cpp/common
Justine Tunney 8cc91dc63c
ggml : add llamafile sgemm (#6414)
This change upstreams llamafile's cpu matrix multiplication kernels
which improve image and prompt evaluation speed. For starters, Q4_0
and Q8_0 weights should go ~40% faster on CPU. The biggest benefits
are with data types like f16 / f32, which process prompts 2x faster,
making them faster than quantized data types for prompt evals.

This change also introduces bona fide AVX512 support since tinyBLAS
is able to exploit the larger register file. For example, on my CPU
llama.cpp llava-cli processes an image prompt at 305 tokens/second
using the Q4_K and Q4_0 types, which have always been faster than
f16 LLaVA weights, which at HEAD run at 188 tokens/second. With
this change, f16 LLaVA performance leapfrogs to 464 tokens/second.
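To illustrate the register-file point, here is a minimal sketch of
the register-tiling idea that kernels like tinyBLAS build on. It is
illustrative C++ only, not the upstreamed code: the 4x4 tile shape
and the function name are made up. A block of C is kept in
accumulators while slices of A and B stream through as rank-1
(outer product) updates; a wider register file, such as AVX512's
32 zmm registers, lets a real kernel hold a larger tile and reuse
each load more often.

    #include <cstddef>

    // Illustrative 4x4 register-tiled micro-kernel (not the actual
    // tinyBLAS code). A is m x k, B is k x n, C is m x n, all row-major
    // with leading dimensions lda/ldb/ldc. A production kernel
    // vectorizes these loops with SIMD and sizes the tile to fill the
    // register file.
    static void gemm_tile_4x4(const float *A, const float *B, float *C,
                              size_t k, size_t lda, size_t ldb, size_t ldc) {
        float acc[4][4] = {};  // accumulator tile, meant to live in registers
        for (size_t p = 0; p < k; ++p) {
            float a[4], b[4];
            for (int i = 0; i < 4; ++i) a[i] = A[i * lda + p];  // column slice of A
            for (int j = 0; j < 4; ++j) b[j] = B[p * ldb + j];  // row slice of B
            for (int i = 0; i < 4; ++i)        // rank-1 (outer product) update
                for (int j = 0; j < 4; ++j)
                    acc[i][j] += a[i] * b[j];
        }
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                C[i * ldc + j] = acc[i][j];
    }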

On Intel Core i9-14900K this change improves f16 prompt perf by 5x.
For example, using llama.cpp at HEAD with Mistral 7B f16, a 215
token prompt is processed at 13 tok/sec; with this change it goes
52 tok/sec. That's mostly thanks to my vectorized outer product
kernels, but also because I added support for correctly counting
the number of cores on Alder Lake, so the default thread count now
discounts Intel's new efficiency cores. Core counting currently
works only on Linux.
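As a rough sketch of how efficiency cores can be discounted on
Linux, the snippet below pins the calling thread to each logical
CPU and reads CPUID leaf 0x1A, whose EAX[31:24] field reports the
core type on hybrid Intel parts (0x40 = Core/P-core, 0x20 =
Atom/E-core). This is an assumption-laden illustration rather than
the code added by this change, and it counts logical CPUs, whereas
a real implementation would also collapse SMT siblings into
physical cores.

    // Requires glibc with _GNU_SOURCE (defined by default in C++ mode)
    // and an x86 compiler providing <cpuid.h>.
    #include <cpuid.h>   // __get_cpuid_count (GCC/Clang)
    #include <sched.h>   // sched_setaffinity / sched_getaffinity (Linux)
    #include <unistd.h>  // sysconf
    #include <cstdio>

    // Hypothetical helper: count logical CPUs that report themselves as
    // performance cores via CPUID leaf 0x1A. Falls back to all online
    // CPUs on non-hybrid processors, where the leaf reports no core type.
    static int count_performance_cpus() {
        int ncpu = (int) sysconf(_SC_NPROCESSORS_ONLN);
        cpu_set_t saved;
        sched_getaffinity(0, sizeof(saved), &saved);  // remember affinity

        int perf = 0;
        for (int i = 0; i < ncpu; ++i) {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(i, &set);
            if (sched_setaffinity(0, sizeof(set), &set) != 0)
                continue;  // CPU may be offline or unavailable
            unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
            if (__get_cpuid_count(0x1a, 0, &eax, &ebx, &ecx, &edx) &&
                (eax >> 24) == 0x40)
                ++perf;  // core type 0x40 means a P-core
        }
        sched_setaffinity(0, sizeof(saved), &saved);  // restore affinity
        return perf > 0 ? perf : ncpu;
    }

    int main() {
        std::printf("performance logical CPUs: %d\n", count_performance_cpus());
        return 0;
    }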

This work was sponsored by Mozilla, which has given permission to
change the license of this code from Apache 2.0 to MIT. To read
more about what's improved and how it works, see:
https://justine.lol/matmul/
2024-04-16 21:55:30 +03:00
base64.hpp llava : expose as a shared library for downstream projects (#3613) 2023-11-07 00:36:23 +03:00
build-info.cpp.in build : link against build info instead of compiling against it (#3879) 2023-11-02 08:50:16 +02:00
CMakeLists.txt main: add --json-schema / -j flag (#6659) 2024-04-15 18:35:21 +01:00
common.cpp ggml : add llamafile sgemm (#6414) 2024-04-16 21:55:30 +03:00
common.h ggml : add llamafile sgemm (#6414) 2024-04-16 21:55:30 +03:00
console.cpp check C++ code with -Wmissing-declarations (#3184) 2023-09-15 15:38:27 -04:00
console.h gguf : new file format with flexible meta data (beta) (#2398) 2023-08-21 23:07:43 +03:00
grammar-parser.cpp grammar : verify parsed state (#5950) 2024-03-10 17:17:43 +02:00
grammar-parser.h gguf : new file format with flexible meta data (beta) (#2398) 2023-08-21 23:07:43 +03:00
json-schema-to-grammar.cpp JSON schema conversion: faster repetitions, min/maxLength for strings, cap number length (#6555) 2024-04-12 19:43:38 +01:00
json-schema-to-grammar.h json-schema-to-grammar : fix order of props + non-str const/enum (#6232) 2024-03-22 15:07:44 +02:00
json.hpp json-schema-to-grammar improvements (+ added to server) (#5978) 2024-03-21 11:50:43 +00:00
log.h [SYCL] fix SYCL backend build on Windows broken by LOG() error (#6290) 2024-03-25 15:52:41 +08:00
ngram-cache.cpp Fixed lookup compilation issues on Windows (#6273) 2024-03-24 14:21:17 +01:00
ngram-cache.h lookup: complement data from context with general text statistics (#5479) 2024-03-23 01:24:36 +01:00
sampling.cpp sampling : deduplicated code for probability distribution access (#6240) 2024-03-24 10:54:07 +02:00
sampling.h llama : support negative ith in llama_get_ API (#6519) 2024-04-08 16:02:30 +03:00
stb_image.h examples: support LLaVA v1.5 (multimodal model) (#3436) 2023-10-12 18:23:18 +03:00
train.cpp code : normalize enum names (#5697) 2024-02-25 12:09:09 +02:00
train.h sync : ggml (backend v2) (#3912) 2023-11-13 14:16:23 +02:00