llama.cpp/.github
Eve 3407364776
Q6_K AVX improvements (#10118)
* q6_k instruction reordering attempt

* better subtract method

* should be theoretically faster

small improvement with the shuffle LUT, likely because all loads are already done at that stage

* optimize bit fiddling

* handle the -32 offset separately; bsums exist for a reason! (see the sketch after the file listing below)

* use shift

* Update ggml-quants.c

* have to update the CI macOS version to 13, as 12 doesn't work now; 13 is still x86
2024-11-04 23:06:31 +01:00
ISSUE_TEMPLATE Removes multiple newlines at the end of files that are breaking the editorconfig step of CI. (#8258) 2024-07-02 12:18:10 -04:00
workflows Q6_K AVX improvements (#10118) 2024-11-04 23:06:31 +01:00
labeler.yml labeler : updated sycl to match docs and code refactor (#8373) 2024-07-08 22:35:17 +02:00
pull_request_template.md github : update pr template 2024-06-16 10:46:51 +03:00
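The "-32 offset" step in the commit above rests on a simple algebraic rewrite: a Q6_K weight dequantizes as scale * (q - 32), and the Q8_K activation blocks carry precomputed per-group sums of their int8 quants (bsums), so the bias can be folded into one multiply per group instead of a subtraction per element. Below is a minimal standalone sketch of that identity, not the actual ggml_vec_dot_q6_K_q8_K kernel; the 16-element group size and the plain q6/q8 arrays are illustrative assumptions.

```c
// Hedged sketch (not the real ggml-quants.c code): why the -32 bias of Q6_K
// can be pulled out of the inner dot-product loop using precomputed sums.
#include <stdint.h>
#include <stdio.h>

#define GROUP 16  // illustrative sub-block size

// Naive version: subtract the bias from every weight inside the loop.
static int32_t dot_naive(const uint8_t *q6, const int8_t *q8, int8_t scale) {
    int32_t acc = 0;
    for (int i = 0; i < GROUP; ++i) {
        acc += (int32_t)(q6[i] - 32) * q8[i];
    }
    return scale * acc;
}

// Reordered version: accumulate the raw 6-bit values, then fold the -32 bias
// into the sum of the activation quants. In the real kernel that sum is
// already available per group (the bsums field), so the bias costs a single
// multiply per group instead of a per-element subtraction.
static int32_t dot_with_bsum(const uint8_t *q6, const int8_t *q8, int8_t scale) {
    int32_t acc  = 0;
    int32_t bsum = 0;  // stand-in for the precomputed Q8_K bsums entry
    for (int i = 0; i < GROUP; ++i) {
        acc  += (int32_t)q6[i] * q8[i];
        bsum += q8[i];
    }
    return scale * (acc - 32 * bsum);
}

int main(void) {
    uint8_t q6[GROUP];
    int8_t  q8[GROUP];
    for (int i = 0; i < GROUP; ++i) {
        q6[i] = (uint8_t)(i * 4);   // 6-bit values in [0, 63]
        q8[i] = (int8_t)(i - 8);    // small signed activations
    }
    // Both paths give the same integer result; the second keeps the SIMD
    // inner loop free of per-element bias handling.
    printf("%d == %d\n", (int)dot_naive(q6, q8, 3), (int)dot_with_bsum(q6, q8, 3));
    return 0;
}
```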