Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-12-29 04:44:34 +00:00
Commit 2cd43f4900:

* More performance with llamafile tinyblas on x86_64:
  - add bf16 support
  - change dispatch strategy (thanks: https://github.com/ikawrakow/ik_llama.cpp/pull/71)
  - reduce memory bandwidth: simpler, more cache-friendly tinyblas dispatch
* tinyblas dynamic dispatching
* sgemm: add M blocks
* git 2.47 uses short ids of length 9; show-progress is not part of GNU Wget2
* remove unstable test
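As an illustration of the "M blocks" idea in the sgemm change, here is a minimal sketch of row-blocked matrix multiplication, the general cache-blocking technique the commit refers to. This is not the actual llamafile/tinyblas code; the function name, signature, and block size are hypothetical.

```cpp
#include <cstddef>

// Hypothetical sketch: C (m x n) += A (m x k) * B (k x n), row-major.
// Rows of A/C are processed in blocks of `mb` ("M blocks") so the rows
// touched in the inner loops stay resident in cache.
void sgemm_m_blocked(std::size_t m, std::size_t n, std::size_t k,
                     const float* A, const float* B, float* C,
                     std::size_t mb = 64) {
    for (std::size_t i0 = 0; i0 < m; i0 += mb) {        // block over rows
        std::size_t i1 = (i0 + mb < m) ? i0 + mb : m;   // block upper bound
        for (std::size_t i = i0; i < i1; ++i) {
            for (std::size_t l = 0; l < k; ++l) {
                float a = A[i * k + l];                 // reuse one A element
                for (std::size_t j = 0; j < n; ++j) {
                    C[i * n + j] += a * B[l * n + j];   // stream a row of B
                }
            }
        }
    }
}
```

Real implementations dispatch among several such kernels at runtime based on matrix shape and CPU features, which is what the "dynamic dispatching" bullet above is about.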
test_basic.py
test_chat_completion.py
test_completion.py
test_ctx_shift.py
test_embedding.py
test_infill.py
test_lora.py
test_rerank.py
test_security.py
test_slot_save.py
test_speculative.py
test_tokenize.py