llama.cpp/ggml
Djip007 3f2bc659e7 more performance with llamafile tinyblas on x86_64.
- add bf16 support
- change dispatch strategy (thanks:
https://github.com/ikawrakow/ik_llama.cpp/pull/71 )
- reduce memory bandwidth

simpler tinyblas dispatch, more cache friendly
2024-12-23 00:23:22 +01:00
include         tts : add OuteTTS support (#10784)                     2024-12-18 19:27:21 +02:00
src             more performance with llamafile tinyblas on x86_64.    2024-12-23 00:23:22 +01:00
.gitignore      vulkan : cmake integration (#8119)                     2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : fix arm build (#10890)                          2024-12-18 23:21:42 +01:00